00:00:00.000 Started by upstream project "autotest-nightly" build number 3913
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3289
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.001 Started by timer
00:00:00.011 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/crypto-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.012 The recommended git tool is: git
00:00:00.013 using credential 00000000-0000-0000-0000-000000000002
00:00:00.016 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/crypto-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.027 Fetching changes from the remote Git repository
00:00:00.028 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.041 Using shallow fetch with depth 1
00:00:00.041 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.041 > git --version # timeout=10
00:00:00.054 > git --version # 'git version 2.39.2'
00:00:00.054 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.070 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.070 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:02.525 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:02.538 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:02.549 Checking out Revision 6b67f5fa1cb27c9c410cb5dac6df31d28ba79422 (FETCH_HEAD)
00:00:02.549 > git config core.sparsecheckout # timeout=10
00:00:02.559 > git read-tree -mu HEAD # timeout=10
00:00:02.576 > git checkout -f 6b67f5fa1cb27c9c410cb5dac6df31d28ba79422 #
timeout=5
00:00:02.593 Commit message: "doc: add chapter about running CI Vagrant images on dev-systems"
00:00:02.593 > git rev-list --no-walk 6b67f5fa1cb27c9c410cb5dac6df31d28ba79422 # timeout=10
00:00:02.735 [Pipeline] Start of Pipeline
00:00:02.748 [Pipeline] library
00:00:02.749 Loading library shm_lib@master
00:00:02.749 Library shm_lib@master is cached. Copying from home.
00:00:02.764 [Pipeline] node
00:00:02.781 Running on WFP19 in /var/jenkins/workspace/crypto-phy-autotest
00:00:02.783 [Pipeline] {
00:00:02.792 [Pipeline] catchError
00:00:02.794 [Pipeline] {
00:00:02.805 [Pipeline] wrap
00:00:02.811 [Pipeline] {
00:00:02.817 [Pipeline] stage
00:00:02.818 [Pipeline] { (Prologue)
00:00:03.005 [Pipeline] sh
00:00:03.285 + logger -p user.info -t JENKINS-CI
00:00:03.303 [Pipeline] echo
00:00:03.305 Node: WFP19
00:00:03.313 [Pipeline] sh
00:00:03.656 [Pipeline] setCustomBuildProperty
00:00:03.667 [Pipeline] echo
00:00:03.668 Cleanup processes
00:00:03.672 [Pipeline] sh
00:00:03.952 + sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:00:03.952 2395055 sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:00:03.963 [Pipeline] sh
00:00:04.242 ++ sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:00:04.242 ++ grep -v 'sudo pgrep'
00:00:04.242 ++ awk '{print $1}'
00:00:04.242 + sudo kill -9
00:00:04.242 + true
00:00:04.254 [Pipeline] cleanWs
00:00:04.263 [WS-CLEANUP] Deleting project workspace...
00:00:04.263 [WS-CLEANUP] Deferred wipeout is used...
00:00:04.268 [WS-CLEANUP] done
00:00:04.272 [Pipeline] setCustomBuildProperty
00:00:04.283 [Pipeline] sh
00:00:04.561 + sudo git config --global --replace-all safe.directory '*'
00:00:04.668 [Pipeline] httpRequest
00:00:04.683 [Pipeline] echo
00:00:04.684 Sorcerer 10.211.164.101 is alive
00:00:04.689 [Pipeline] httpRequest
00:00:04.693 HttpMethod: GET
00:00:04.694 URL: http://10.211.164.101/packages/jbp_6b67f5fa1cb27c9c410cb5dac6df31d28ba79422.tar.gz
00:00:04.694 Sending request to url: http://10.211.164.101/packages/jbp_6b67f5fa1cb27c9c410cb5dac6df31d28ba79422.tar.gz
00:00:04.697 Response Code: HTTP/1.1 200 OK
00:00:04.697 Success: Status code 200 is in the accepted range: 200,404
00:00:04.697 Saving response body to /var/jenkins/workspace/crypto-phy-autotest/jbp_6b67f5fa1cb27c9c410cb5dac6df31d28ba79422.tar.gz
00:00:05.178 [Pipeline] sh
00:00:05.460 + tar --no-same-owner -xf jbp_6b67f5fa1cb27c9c410cb5dac6df31d28ba79422.tar.gz
00:00:05.472 [Pipeline] httpRequest
00:00:05.485 [Pipeline] echo
00:00:05.487 Sorcerer 10.211.164.101 is alive
00:00:05.493 [Pipeline] httpRequest
00:00:05.497 HttpMethod: GET
00:00:05.498 URL: http://10.211.164.101/packages/spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz
00:00:05.498 Sending request to url: http://10.211.164.101/packages/spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz
00:00:05.508 Response Code: HTTP/1.1 200 OK
00:00:05.509 Success: Status code 200 is in the accepted range: 200,404
00:00:05.509 Saving response body to /var/jenkins/workspace/crypto-phy-autotest/spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz
00:00:58.975 [Pipeline] sh
00:00:59.260 + tar --no-same-owner -xf spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz
00:01:02.564 [Pipeline] sh
00:01:02.848 + git -C spdk log --oneline -n5
00:01:02.848 f7b31b2b9 log: declare g_deprecation_epoch static
00:01:02.848 21d0c3ad6 trace: declare g_user_thread_index_start, g_ut_array and g_ut_array_mutex static
00:01:02.848 3731556bd lvol: declare g_lvol_if
static
00:01:02.848 f8404a2d4 nvme: declare g_current_transport_index and g_spdk_transports static
00:01:02.848 34efb6523 dma: declare g_dma_mutex and g_dma_memory_domains static
00:01:02.865 [Pipeline] }
00:01:02.882 [Pipeline] // stage
00:01:02.892 [Pipeline] stage
00:01:02.895 [Pipeline] { (Prepare)
00:01:02.913 [Pipeline] writeFile
00:01:02.932 [Pipeline] sh
00:01:03.220 + logger -p user.info -t JENKINS-CI
00:01:03.234 [Pipeline] sh
00:01:03.520 + logger -p user.info -t JENKINS-CI
00:01:03.532 [Pipeline] sh
00:01:03.818 + cat autorun-spdk.conf
00:01:03.818 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:03.818 SPDK_TEST_BLOCKDEV=1
00:01:03.818 SPDK_TEST_ISAL=1
00:01:03.818 SPDK_TEST_CRYPTO=1
00:01:03.818 SPDK_TEST_REDUCE=1
00:01:03.818 SPDK_TEST_VBDEV_COMPRESS=1
00:01:03.818 SPDK_RUN_ASAN=1
00:01:03.818 SPDK_RUN_UBSAN=1
00:01:03.826 RUN_NIGHTLY=1
00:01:03.830 [Pipeline] readFile
00:01:03.856 [Pipeline] withEnv
00:01:03.859 [Pipeline] {
00:01:03.873 [Pipeline] sh
00:01:04.161 + set -ex
00:01:04.161 + [[ -f /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf ]]
00:01:04.161 + source /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf
00:01:04.161 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:04.161 ++ SPDK_TEST_BLOCKDEV=1
00:01:04.161 ++ SPDK_TEST_ISAL=1
00:01:04.161 ++ SPDK_TEST_CRYPTO=1
00:01:04.161 ++ SPDK_TEST_REDUCE=1
00:01:04.161 ++ SPDK_TEST_VBDEV_COMPRESS=1
00:01:04.161 ++ SPDK_RUN_ASAN=1
00:01:04.161 ++ SPDK_RUN_UBSAN=1
00:01:04.161 ++ RUN_NIGHTLY=1
00:01:04.161 + case $SPDK_TEST_NVMF_NICS in
00:01:04.161 + DRIVERS=
00:01:04.161 + [[ -n '' ]]
00:01:04.161 + exit 0
00:01:04.171 [Pipeline] }
00:01:04.189 [Pipeline] // withEnv
00:01:04.195 [Pipeline] }
00:01:04.212 [Pipeline] // stage
00:01:04.222 [Pipeline] catchError
00:01:04.224 [Pipeline] {
00:01:04.240 [Pipeline] timeout
00:01:04.240 Timeout set to expire in 1 hr 0 min
00:01:04.242 [Pipeline] {
00:01:04.258 [Pipeline] stage
00:01:04.261 [Pipeline] { (Tests)
00:01:04.277 [Pipeline] sh
00:01:04.564 +
jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/crypto-phy-autotest
00:01:04.564 ++ readlink -f /var/jenkins/workspace/crypto-phy-autotest
00:01:04.564 + DIR_ROOT=/var/jenkins/workspace/crypto-phy-autotest
00:01:04.564 + [[ -n /var/jenkins/workspace/crypto-phy-autotest ]]
00:01:04.564 + DIR_SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
00:01:04.564 + DIR_OUTPUT=/var/jenkins/workspace/crypto-phy-autotest/output
00:01:04.564 + [[ -d /var/jenkins/workspace/crypto-phy-autotest/spdk ]]
00:01:04.564 + [[ ! -d /var/jenkins/workspace/crypto-phy-autotest/output ]]
00:01:04.564 + mkdir -p /var/jenkins/workspace/crypto-phy-autotest/output
00:01:04.564 + [[ -d /var/jenkins/workspace/crypto-phy-autotest/output ]]
00:01:04.564 + [[ crypto-phy-autotest == pkgdep-* ]]
00:01:04.564 + cd /var/jenkins/workspace/crypto-phy-autotest
00:01:04.564 + source /etc/os-release
00:01:04.564 ++ NAME='Fedora Linux'
00:01:04.564 ++ VERSION='38 (Cloud Edition)'
00:01:04.564 ++ ID=fedora
00:01:04.564 ++ VERSION_ID=38
00:01:04.564 ++ VERSION_CODENAME=
00:01:04.564 ++ PLATFORM_ID=platform:f38
00:01:04.564 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:04.564 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:04.564 ++ LOGO=fedora-logo-icon
00:01:04.564 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:04.564 ++ HOME_URL=https://fedoraproject.org/
00:01:04.564 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:04.564 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:04.564 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:04.564 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:04.564 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:04.564 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:04.564 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:04.564 ++ SUPPORT_END=2024-05-14
00:01:04.564 ++ VARIANT='Cloud Edition'
00:01:04.564 ++ VARIANT_ID=cloud
00:01:04.564 + uname -a
00:01:04.564 Linux spdk-wfp-19 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon
Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:04.564 + sudo /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status
00:01:08.765 Hugepages
00:01:08.765 node hugesize free / total
00:01:08.765 node0 1048576kB 0 / 0
00:01:08.765 node0 2048kB 0 / 0
00:01:08.765 node1 1048576kB 0 / 0
00:01:08.765 node1 2048kB 0 / 0
00:01:08.765
00:01:08.765 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:08.765 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:08.765 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:08.765 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:08.765 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:08.765 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:08.765 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:08.765 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:08.766 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:08.766 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:08.766 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:08.766 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:08.766 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:08.766 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:08.766 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:08.766 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:08.766 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:01:08.766 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:01:08.766 + rm -f /tmp/spdk-ld-path
00:01:08.766 + source autorun-spdk.conf
00:01:08.766 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:08.766 ++ SPDK_TEST_BLOCKDEV=1
00:01:08.766 ++ SPDK_TEST_ISAL=1
00:01:08.766 ++ SPDK_TEST_CRYPTO=1
00:01:08.766 ++ SPDK_TEST_REDUCE=1
00:01:08.766 ++ SPDK_TEST_VBDEV_COMPRESS=1
00:01:08.766 ++ SPDK_RUN_ASAN=1
00:01:08.766 ++ SPDK_RUN_UBSAN=1
00:01:08.766 ++ RUN_NIGHTLY=1
00:01:08.766 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:08.766 + [[ -n '' ]]
00:01:08.766 + sudo git config --global --add safe.directory /var/jenkins/workspace/crypto-phy-autotest/spdk
00:01:08.766 + for M
in /var/spdk/build-*-manifest.txt
00:01:08.766 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:08.766 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/crypto-phy-autotest/output/
00:01:08.766 + for M in /var/spdk/build-*-manifest.txt
00:01:08.766 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:08.766 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/crypto-phy-autotest/output/
00:01:08.766 ++ uname
00:01:08.766 + [[ Linux == \L\i\n\u\x ]]
00:01:08.766 + sudo dmesg -T
00:01:08.766 + sudo dmesg --clear
00:01:08.766 + dmesg_pid=2396136
00:01:08.766 + [[ Fedora Linux == FreeBSD ]]
00:01:08.766 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:08.766 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:08.766 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:08.766 + [[ -x /usr/src/fio-static/fio ]]
00:01:08.766 + export FIO_BIN=/usr/src/fio-static/fio
00:01:08.766 + FIO_BIN=/usr/src/fio-static/fio
00:01:08.766 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\c\r\y\p\t\o\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:08.766 + [[ !
-v VFIO_QEMU_BIN ]]
00:01:08.766 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:08.766 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:08.766 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:08.766 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:08.766 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:08.766 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:08.766 + spdk/autorun.sh /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf
00:01:08.766 + sudo dmesg -Tw
00:01:08.766 Test configuration:
00:01:08.766 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:08.766 SPDK_TEST_BLOCKDEV=1
00:01:08.766 SPDK_TEST_ISAL=1
00:01:08.766 SPDK_TEST_CRYPTO=1
00:01:08.766 SPDK_TEST_REDUCE=1
00:01:08.766 SPDK_TEST_VBDEV_COMPRESS=1
00:01:08.766 SPDK_RUN_ASAN=1
00:01:08.766 SPDK_RUN_UBSAN=1
00:01:08.766 RUN_NIGHTLY=1 03:56:17 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh
00:01:08.766 03:56:17 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:08.766 03:56:17 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:08.766 03:56:17 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:08.766 03:56:17 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:08.766 03:56:17 -- paths/export.sh@3 -- $
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:08.766 03:56:17 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:08.766 03:56:17 -- paths/export.sh@5 -- $ export PATH
00:01:08.766 03:56:17 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:08.766 03:56:17 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output
00:01:08.766 03:56:17 -- common/autobuild_common.sh@447 -- $ date +%s
00:01:08.766 03:56:17 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721699777.XXXXXX
00:01:08.766 03:56:17 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721699777.5AnU49
00:01:08.766 03:56:17 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:01:08.766 03:56:17 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:01:08.766 03:56:17 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/'
00:01:08.766
03:56:17 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:08.766 03:56:17 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:08.766 03:56:17 -- common/autobuild_common.sh@463 -- $ get_config_params
00:01:08.766 03:56:17 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:01:08.766 03:56:17 -- common/autotest_common.sh@10 -- $ set +x
00:01:08.766 03:56:17 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-vbdev-compress --with-dpdk-compressdev --with-crypto --enable-ubsan --enable-asan --enable-coverage --with-ublk'
00:01:08.766 03:56:17 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:01:08.766 03:56:17 -- pm/common@17 -- $ local monitor
00:01:08.766 03:56:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:08.766 03:56:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:08.766 03:56:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:08.766 03:56:17 -- pm/common@21 -- $ date +%s
00:01:08.766 03:56:17 -- pm/common@21 -- $ date +%s
00:01:08.766 03:56:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:08.766 03:56:17 -- pm/common@25 -- $ sleep 1
00:01:08.766 03:56:17 -- pm/common@21 -- $ date +%s
00:01:08.766 03:56:17 -- pm/common@21 -- $ date +%s
00:01:08.766 03:56:17 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721699777
00:01:08.766 03:56:17 --
pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721699777
00:01:08.766 03:56:17 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721699777
00:01:08.766 03:56:17 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721699777
00:01:08.766 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721699777_collect-vmstat.pm.log
00:01:08.766 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721699777_collect-cpu-load.pm.log
00:01:08.766 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721699777_collect-cpu-temp.pm.log
00:01:08.766 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721699777_collect-bmc-pm.bmc.pm.log
00:01:09.705 03:56:18 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:01:09.705 03:56:18 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:09.705 03:56:18 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:09.705 03:56:18 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/crypto-phy-autotest/spdk
00:01:09.705 03:56:18 -- spdk/autobuild.sh@16 -- $ date -u
00:01:09.705 Tue Jul 23 01:56:18 AM UTC 2024
00:01:09.705 03:56:18 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:09.705 v24.09-pre-297-gf7b31b2b9
00:01:09.705 03:56:18 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:09.705 03:56:18 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:09.705 03:56:18 --
common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:01:09.705 03:56:18 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:01:09.705 03:56:18 -- common/autotest_common.sh@10 -- $ set +x
00:01:09.705 ************************************
00:01:09.705 START TEST asan
00:01:09.705 ************************************
00:01:09.705 03:56:18 asan -- common/autotest_common.sh@1123 -- $ echo 'using asan'
00:01:09.705 using asan
00:01:09.705
00:01:09.705 real 0m0.000s
00:01:09.705 user 0m0.000s
00:01:09.705 sys 0m0.000s
00:01:09.705 03:56:18 asan -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:01:09.705 03:56:18 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:09.705 ************************************
00:01:09.705 END TEST asan
00:01:09.705 ************************************
00:01:09.705 03:56:18 -- common/autotest_common.sh@1142 -- $ return 0
00:01:09.705 03:56:18 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:09.705 03:56:18 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:09.705 03:56:18 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:01:09.705 03:56:18 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:01:09.705 03:56:18 -- common/autotest_common.sh@10 -- $ set +x
00:01:09.706 ************************************
00:01:09.706 START TEST ubsan
00:01:09.706 ************************************
00:01:09.706 03:56:18 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan'
00:01:09.706 using ubsan
00:01:09.706
00:01:09.706 real 0m0.000s
00:01:09.706 user 0m0.000s
00:01:09.706 sys 0m0.000s
00:01:09.706 03:56:18 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:01:09.706 03:56:18 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:09.706 ************************************
00:01:09.706 END TEST ubsan
00:01:09.706 ************************************
00:01:09.706 03:56:18 -- common/autotest_common.sh@1142 -- $ return 0
00:01:09.706 03:56:18 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:09.706 03:56:18 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:09.706 03:56:18 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:09.706 03:56:18 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:09.706 03:56:18 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:09.706 03:56:18 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:09.706 03:56:18 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:09.706 03:56:18 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:09.706 03:56:18 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-vbdev-compress --with-dpdk-compressdev --with-crypto --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared
00:01:09.966 Using default SPDK env in /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk
00:01:09.966 Using default DPDK in /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build
00:01:10.226 Using 'verbs' RDMA provider
00:01:26.561 Configuring ISA-L (logfile: /var/jenkins/workspace/crypto-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:41.448 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/crypto-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:41.448 Creating mk/config.mk...done.
00:01:41.448 Creating mk/cc.flags.mk...done.
00:01:41.448 Type 'make' to build.
00:01:41.448 03:56:48 -- spdk/autobuild.sh@69 -- $ run_test make make -j112
00:01:41.448 03:56:48 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:01:41.448 03:56:48 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:01:41.448 03:56:48 -- common/autotest_common.sh@10 -- $ set +x
00:01:41.448 ************************************
00:01:41.448 START TEST make
00:01:41.448 ************************************
00:01:41.448 03:56:48 make -- common/autotest_common.sh@1123 -- $ make -j112
00:01:41.448 make[1]: Nothing to be done for 'all'.
00:02:13.536 The Meson build system
00:02:13.536 Version: 1.3.1
00:02:13.536 Source dir: /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk
00:02:13.536 Build dir: /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build-tmp
00:02:13.536 Build type: native build
00:02:13.536 Program cat found: YES (/usr/bin/cat)
00:02:13.536 Project name: DPDK
00:02:13.536 Project version: 24.03.0
00:02:13.536 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:02:13.536 C linker for the host machine: cc ld.bfd 2.39-16
00:02:13.536 Host machine cpu family: x86_64
00:02:13.536 Host machine cpu: x86_64
00:02:13.536 Message: ## Building in Developer Mode ##
00:02:13.536 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:13.536 Program check-symbols.sh found: YES (/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:13.536 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:13.536 Program python3 found: YES (/usr/bin/python3)
00:02:13.536 Program cat found: YES (/usr/bin/cat)
00:02:13.536 Compiler for C supports arguments -march=native: YES
00:02:13.536 Checking for size of "void *" : 8
00:02:13.536 Checking for size of "void *" : 8 (cached)
00:02:13.536 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:02:13.536 Library m found: YES
00:02:13.536 Library numa found: YES
00:02:13.536 Has header "numaif.h" : YES
00:02:13.536 Library fdt found: NO
00:02:13.536 Library execinfo found: NO
00:02:13.536 Has header "execinfo.h" : YES
00:02:13.536 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:02:13.536 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:13.536 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:13.536 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:13.536 Run-time dependency openssl found: YES 3.0.9
00:02:13.536 Run-time dependency
libpcap found: YES 1.10.4
00:02:13.536 Has header "pcap.h" with dependency libpcap: YES
00:02:13.536 Compiler for C supports arguments -Wcast-qual: YES
00:02:13.536 Compiler for C supports arguments -Wdeprecated: YES
00:02:13.536 Compiler for C supports arguments -Wformat: YES
00:02:13.536 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:13.536 Compiler for C supports arguments -Wformat-security: NO
00:02:13.536 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:13.536 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:13.536 Compiler for C supports arguments -Wnested-externs: YES
00:02:13.536 Compiler for C supports arguments -Wold-style-definition: YES
00:02:13.536 Compiler for C supports arguments -Wpointer-arith: YES
00:02:13.536 Compiler for C supports arguments -Wsign-compare: YES
00:02:13.536 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:13.536 Compiler for C supports arguments -Wundef: YES
00:02:13.536 Compiler for C supports arguments -Wwrite-strings: YES
00:02:13.536 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:13.536 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:13.536 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:13.536 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:13.536 Program objdump found: YES (/usr/bin/objdump)
00:02:13.536 Compiler for C supports arguments -mavx512f: YES
00:02:13.536 Checking if "AVX512 checking" compiles: YES
00:02:13.536 Fetching value of define "__SSE4_2__" : 1
00:02:13.536 Fetching value of define "__AES__" : 1
00:02:13.536 Fetching value of define "__AVX__" : 1
00:02:13.536 Fetching value of define "__AVX2__" : 1
00:02:13.536 Fetching value of define "__AVX512BW__" : 1
00:02:13.536 Fetching value of define "__AVX512CD__" : 1
00:02:13.536 Fetching value of define "__AVX512DQ__" : 1
00:02:13.536 Fetching value of define "__AVX512F__" : 1
00:02:13.536
Fetching value of define "__AVX512VL__" : 1
00:02:13.536 Fetching value of define "__PCLMUL__" : 1
00:02:13.536 Fetching value of define "__RDRND__" : 1
00:02:13.536 Fetching value of define "__RDSEED__" : 1
00:02:13.536 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:13.536 Fetching value of define "__znver1__" : (undefined)
00:02:13.536 Fetching value of define "__znver2__" : (undefined)
00:02:13.536 Fetching value of define "__znver3__" : (undefined)
00:02:13.536 Fetching value of define "__znver4__" : (undefined)
00:02:13.536 Library asan found: YES
00:02:13.536 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:13.536 Message: lib/log: Defining dependency "log"
00:02:13.536 Message: lib/kvargs: Defining dependency "kvargs"
00:02:13.536 Message: lib/telemetry: Defining dependency "telemetry"
00:02:13.536 Library rt found: YES
00:02:13.536 Checking for function "getentropy" : NO
00:02:13.536 Message: lib/eal: Defining dependency "eal"
00:02:13.536 Message: lib/ring: Defining dependency "ring"
00:02:13.536 Message: lib/rcu: Defining dependency "rcu"
00:02:13.536 Message: lib/mempool: Defining dependency "mempool"
00:02:13.536 Message: lib/mbuf: Defining dependency "mbuf"
00:02:13.536 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:13.536 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:13.536 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:13.536 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:13.536 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:13.536 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:02:13.536 Compiler for C supports arguments -mpclmul: YES
00:02:13.536 Compiler for C supports arguments -maes: YES
00:02:13.536 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:13.536 Compiler for C supports arguments -mavx512bw: YES
00:02:13.536 Compiler for C supports arguments -mavx512dq: YES
00:02:13.536 Compiler for C supports arguments
-mavx512vl: YES
00:02:13.536 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:13.536 Compiler for C supports arguments -mavx2: YES
00:02:13.536 Compiler for C supports arguments -mavx: YES
00:02:13.536 Message: lib/net: Defining dependency "net"
00:02:13.536 Message: lib/meter: Defining dependency "meter"
00:02:13.536 Message: lib/ethdev: Defining dependency "ethdev"
00:02:13.536 Message: lib/pci: Defining dependency "pci"
00:02:13.536 Message: lib/cmdline: Defining dependency "cmdline"
00:02:13.536 Message: lib/hash: Defining dependency "hash"
00:02:13.536 Message: lib/timer: Defining dependency "timer"
00:02:13.536 Message: lib/compressdev: Defining dependency "compressdev"
00:02:13.536 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:13.536 Message: lib/dmadev: Defining dependency "dmadev"
00:02:13.536 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:13.536 Message: lib/power: Defining dependency "power"
00:02:13.536 Message: lib/reorder: Defining dependency "reorder"
00:02:13.536 Message: lib/security: Defining dependency "security"
00:02:13.537 Has header "linux/userfaultfd.h" : YES
00:02:13.537 Has header "linux/vduse.h" : YES
00:02:13.537 Message: lib/vhost: Defining dependency "vhost"
00:02:13.537 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:13.537 Message: drivers/bus/auxiliary: Defining dependency "bus_auxiliary"
00:02:13.537 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:13.537 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:13.537 Compiler for C supports arguments -std=c11: YES
00:02:13.537 Compiler for C supports arguments -Wno-strict-prototypes: YES
00:02:13.537 Compiler for C supports arguments -D_BSD_SOURCE: YES
00:02:13.537 Compiler for C supports arguments -D_DEFAULT_SOURCE: YES
00:02:13.537 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES
00:02:13.537 Run-time dependency libmlx5 found: YES 1.24.44.0
00:02:13.537 Run-time dependency libibverbs
found: YES 1.14.44.0 00:02:13.537 Library mtcr_ul found: NO 00:02:13.537 Header "infiniband/verbs.h" has symbol "IBV_FLOW_SPEC_ESP" with dependencies libmlx5, libibverbs: YES 00:02:13.537 Header "infiniband/verbs.h" has symbol "IBV_RX_HASH_IPSEC_SPI" with dependencies libmlx5, libibverbs: YES 00:02:13.537 Header "infiniband/verbs.h" has symbol "IBV_ACCESS_RELAXED_ORDERING " with dependencies libmlx5, libibverbs: YES 00:02:13.537 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CQE_RES_FORMAT_CSUM_STRIDX" with dependencies libmlx5, libibverbs: YES 00:02:13.537 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_MASK_TUNNEL_OFFLOADS" with dependencies libmlx5, libibverbs: YES 00:02:13.537 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_FLAGS_MPW_ALLOWED" with dependencies libmlx5, libibverbs: YES 00:02:13.537 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_FLAGS_CQE_128B_COMP" with dependencies libmlx5, libibverbs: YES 00:02:13.537 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CQ_INIT_ATTR_FLAGS_CQE_PAD" with dependencies libmlx5, libibverbs: YES 00:02:13.537 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_create_flow_action_packet_reformat" with dependencies libmlx5, libibverbs: YES 00:02:13.537 Header "infiniband/verbs.h" has symbol "IBV_FLOW_SPEC_MPLS" with dependencies libmlx5, libibverbs: YES 00:02:13.537 Header "infiniband/verbs.h" has symbol "IBV_WQ_FLAGS_PCI_WRITE_END_PADDING" with dependencies libmlx5, libibverbs: YES 00:02:13.537 Header "infiniband/verbs.h" has symbol "IBV_WQ_FLAG_RX_END_PADDING" with dependencies libmlx5, libibverbs: NO 00:02:13.537 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_query_devx_port" with dependencies libmlx5, libibverbs: NO 00:02:13.537 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_query_port" with dependencies libmlx5, libibverbs: YES 00:02:13.537 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_ib_port" with dependencies libmlx5, libibverbs: YES 00:02:17.732 Header 
"infiniband/mlx5dv.h" has symbol "mlx5dv_devx_obj_create" with dependencies libmlx5, libibverbs: YES 00:02:17.732 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_FLOW_ACTION_COUNTERS_DEVX" with dependencies libmlx5, libibverbs: YES 00:02:17.732 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_FLOW_ACTION_DEFAULT_MISS" with dependencies libmlx5, libibverbs: YES 00:02:17.732 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_obj_query_async" with dependencies libmlx5, libibverbs: YES 00:02:17.732 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_qp_query" with dependencies libmlx5, libibverbs: YES 00:02:17.732 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_pp_alloc" with dependencies libmlx5, libibverbs: YES 00:02:17.732 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_devx_tir" with dependencies libmlx5, libibverbs: YES 00:02:17.732 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_get_event" with dependencies libmlx5, libibverbs: YES 00:02:17.732 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_flow_meter" with dependencies libmlx5, libibverbs: YES 00:02:17.732 Header "infiniband/mlx5dv.h" has symbol "MLX5_MMAP_GET_NC_PAGES_CMD" with dependencies libmlx5, libibverbs: YES 00:02:17.732 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_DOMAIN_TYPE_NIC_RX" with dependencies libmlx5, libibverbs: YES 00:02:17.732 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_DOMAIN_TYPE_FDB" with dependencies libmlx5, libibverbs: YES 00:02:17.732 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_push_vlan" with dependencies libmlx5, libibverbs: YES 00:02:17.732 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_alloc_var" with dependencies libmlx5, libibverbs: YES 00:02:17.732 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_ENHANCED_MPSW" with dependencies libmlx5, libibverbs: NO 00:02:17.732 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_SEND_EN" with dependencies libmlx5, libibverbs: NO 00:02:17.732 Header 
"infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_WAIT" with dependencies libmlx5, libibverbs: NO 00:02:17.732 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_ACCESS_ASO" with dependencies libmlx5, libibverbs: NO 00:02:17.732 Header "linux/if_link.h" has symbol "IFLA_NUM_VF" with dependencies libmlx5, libibverbs: YES 00:02:17.732 Header "linux/if_link.h" has symbol "IFLA_EXT_MASK" with dependencies libmlx5, libibverbs: YES 00:02:17.732 Header "linux/if_link.h" has symbol "IFLA_PHYS_SWITCH_ID" with dependencies libmlx5, libibverbs: YES 00:02:17.732 Header "linux/if_link.h" has symbol "IFLA_PHYS_PORT_NAME" with dependencies libmlx5, libibverbs: YES 00:02:17.732 Header "rdma/rdma_netlink.h" has symbol "RDMA_NL_NLDEV" with dependencies libmlx5, libibverbs: YES 00:02:17.732 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_CMD_GET" with dependencies libmlx5, libibverbs: YES 00:02:17.732 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_CMD_PORT_GET" with dependencies libmlx5, libibverbs: YES 00:02:17.732 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_DEV_INDEX" with dependencies libmlx5, libibverbs: YES 00:02:17.732 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_DEV_NAME" with dependencies libmlx5, libibverbs: YES 00:02:17.732 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_PORT_INDEX" with dependencies libmlx5, libibverbs: YES 00:02:17.732 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_PORT_STATE" with dependencies libmlx5, libibverbs: YES 00:02:17.732 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_NDEV_INDEX" with dependencies libmlx5, libibverbs: YES 00:02:17.732 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dump_dr_domain" with dependencies libmlx5, libibverbs: YES 00:02:17.732 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_flow_sampler" with dependencies libmlx5, libibverbs: YES 00:02:17.732 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_domain_set_reclaim_device_memory" with dependencies 
libmlx5, libibverbs: YES 00:02:17.732 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_array" with dependencies libmlx5, libibverbs: YES 00:02:17.732 Header "linux/devlink.h" has symbol "DEVLINK_GENL_NAME" with dependencies libmlx5, libibverbs: YES 00:02:17.732 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_aso" with dependencies libmlx5, libibverbs: YES 00:02:17.732 Header "infiniband/verbs.h" has symbol "INFINIBAND_VERBS_H" with dependencies libmlx5, libibverbs: YES 00:02:17.732 Header "infiniband/mlx5dv.h" has symbol "MLX5_WQE_UMR_CTRL_FLAG_INLINE" with dependencies libmlx5, libibverbs: YES 00:02:17.732 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dump_dr_rule" with dependencies libmlx5, libibverbs: YES 00:02:17.732 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_ACTION_FLAGS_ASO_CT_DIRECTION_INITIATOR" with dependencies libmlx5, libibverbs: YES 00:02:17.732 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_domain_allow_duplicate_rules" with dependencies libmlx5, libibverbs: YES 00:02:17.732 Header "infiniband/verbs.h" has symbol "ibv_reg_mr_iova" with dependencies libmlx5, libibverbs: YES 00:02:17.732 Header "infiniband/verbs.h" has symbol "ibv_import_device" with dependencies libmlx5, libibverbs: YES 00:02:17.732 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_root_table" with dependencies libmlx5, libibverbs: YES 00:02:17.732 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_create_steering_anchor" with dependencies libmlx5, libibverbs: YES 00:02:17.732 Header "infiniband/verbs.h" has symbol "ibv_is_fork_initialized" with dependencies libmlx5, libibverbs: YES 00:02:17.732 Checking whether type "struct mlx5dv_sw_parsing_caps" has member "sw_parsing_offloads" with dependencies libmlx5, libibverbs: YES 00:02:17.732 Checking whether type "struct ibv_counter_set_init_attr" has member "counter_set_id" with dependencies libmlx5, libibverbs: NO 00:02:17.732 Checking whether type "struct 
ibv_counters_init_attr" has member "comp_mask" with dependencies libmlx5, libibverbs: YES 00:02:17.732 Checking whether type "struct mlx5dv_devx_uar" has member "mmap_off" with dependencies libmlx5, libibverbs: YES 00:02:17.732 Checking whether type "struct mlx5dv_flow_matcher_attr" has member "ft_type" with dependencies libmlx5, libibverbs: YES 00:02:17.732 Configuring mlx5_autoconf.h using configuration 00:02:17.732 Message: drivers/common/mlx5: Defining dependency "common_mlx5" 00:02:17.732 Run-time dependency libcrypto found: YES 3.0.9 00:02:17.732 Library IPSec_MB found: YES 00:02:17.732 Fetching value of define "IMB_VERSION_STR" : "1.5.0" 00:02:17.732 Message: drivers/common/qat: Defining dependency "common_qat" 00:02:17.732 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:17.732 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:17.732 Library IPSec_MB found: YES 00:02:17.732 Fetching value of define "IMB_VERSION_STR" : "1.5.0" (cached) 00:02:17.732 Message: drivers/crypto/ipsec_mb: Defining dependency "crypto_ipsec_mb" 00:02:17.732 Compiler for C supports arguments -std=c11: YES (cached) 00:02:17.732 Compiler for C supports arguments -Wno-strict-prototypes: YES (cached) 00:02:17.732 Compiler for C supports arguments -D_BSD_SOURCE: YES (cached) 00:02:17.732 Compiler for C supports arguments -D_DEFAULT_SOURCE: YES (cached) 00:02:17.732 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES (cached) 00:02:17.732 Message: drivers/crypto/mlx5: Defining dependency "crypto_mlx5" 00:02:17.732 Run-time dependency libisal found: NO (tried pkgconfig) 00:02:17.732 Library libisal found: NO 00:02:17.732 Message: drivers/compress/isal: Defining dependency "compress_isal" 00:02:17.732 Compiler for C supports arguments -std=c11: YES (cached) 00:02:17.732 Compiler for C supports arguments -Wno-strict-prototypes: YES (cached) 00:02:17.732 Compiler for C supports arguments -D_BSD_SOURCE: YES (cached) 00:02:17.732 Compiler 
for C supports arguments -D_DEFAULT_SOURCE: YES (cached) 00:02:17.732 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES (cached) 00:02:17.732 Message: drivers/compress/mlx5: Defining dependency "compress_mlx5" 00:02:17.732 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:17.732 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:17.732 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:17.732 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:17.732 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:17.732 Program doxygen found: YES (/usr/bin/doxygen) 00:02:17.732 Configuring doxy-api-html.conf using configuration 00:02:17.732 Configuring doxy-api-man.conf using configuration 00:02:17.732 Program mandb found: YES (/usr/bin/mandb) 00:02:17.732 Program sphinx-build found: NO 00:02:17.732 Configuring rte_build_config.h using configuration 00:02:17.732 Message: 00:02:17.732 ================= 00:02:17.732 Applications Enabled 00:02:17.732 ================= 00:02:17.732 00:02:17.732 apps: 00:02:17.732 00:02:17.732 00:02:17.732 Message: 00:02:17.732 ================= 00:02:17.732 Libraries Enabled 00:02:17.732 ================= 00:02:17.732 00:02:17.732 libs: 00:02:17.732 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:17.732 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:17.732 cryptodev, dmadev, power, reorder, security, vhost, 00:02:17.732 00:02:17.732 Message: 00:02:17.732 =============== 00:02:17.732 Drivers Enabled 00:02:17.732 =============== 00:02:17.732 00:02:17.732 common: 00:02:17.732 mlx5, qat, 00:02:17.732 bus: 00:02:17.732 auxiliary, pci, vdev, 00:02:17.732 mempool: 00:02:17.732 ring, 00:02:17.733 dma: 00:02:17.733 00:02:17.733 net: 00:02:17.733 00:02:17.733 crypto: 00:02:17.733 ipsec_mb, mlx5, 00:02:17.733 compress: 00:02:17.733 isal, mlx5, 00:02:17.733 vdpa: 00:02:17.733 
00:02:17.733 00:02:17.733 Message: 00:02:17.733 ================= 00:02:17.733 Content Skipped 00:02:17.733 ================= 00:02:17.733 00:02:17.733 apps: 00:02:17.733 dumpcap: explicitly disabled via build config 00:02:17.733 graph: explicitly disabled via build config 00:02:17.733 pdump: explicitly disabled via build config 00:02:17.733 proc-info: explicitly disabled via build config 00:02:17.733 test-acl: explicitly disabled via build config 00:02:17.733 test-bbdev: explicitly disabled via build config 00:02:17.733 test-cmdline: explicitly disabled via build config 00:02:17.733 test-compress-perf: explicitly disabled via build config 00:02:17.733 test-crypto-perf: explicitly disabled via build config 00:02:17.733 test-dma-perf: explicitly disabled via build config 00:02:17.733 test-eventdev: explicitly disabled via build config 00:02:17.733 test-fib: explicitly disabled via build config 00:02:17.733 test-flow-perf: explicitly disabled via build config 00:02:17.733 test-gpudev: explicitly disabled via build config 00:02:17.733 test-mldev: explicitly disabled via build config 00:02:17.733 test-pipeline: explicitly disabled via build config 00:02:17.733 test-pmd: explicitly disabled via build config 00:02:17.733 test-regex: explicitly disabled via build config 00:02:17.733 test-sad: explicitly disabled via build config 00:02:17.733 test-security-perf: explicitly disabled via build config 00:02:17.733 00:02:17.733 libs: 00:02:17.733 argparse: explicitly disabled via build config 00:02:17.733 metrics: explicitly disabled via build config 00:02:17.733 acl: explicitly disabled via build config 00:02:17.733 bbdev: explicitly disabled via build config 00:02:17.733 bitratestats: explicitly disabled via build config 00:02:17.733 bpf: explicitly disabled via build config 00:02:17.733 cfgfile: explicitly disabled via build config 00:02:17.733 distributor: explicitly disabled via build config 00:02:17.733 efd: explicitly disabled via build config 00:02:17.733 eventdev: 
explicitly disabled via build config 00:02:17.733 dispatcher: explicitly disabled via build config 00:02:17.733 gpudev: explicitly disabled via build config 00:02:17.733 gro: explicitly disabled via build config 00:02:17.733 gso: explicitly disabled via build config 00:02:17.733 ip_frag: explicitly disabled via build config 00:02:17.733 jobstats: explicitly disabled via build config 00:02:17.733 latencystats: explicitly disabled via build config 00:02:17.733 lpm: explicitly disabled via build config 00:02:17.733 member: explicitly disabled via build config 00:02:17.733 pcapng: explicitly disabled via build config 00:02:17.733 rawdev: explicitly disabled via build config 00:02:17.733 regexdev: explicitly disabled via build config 00:02:17.733 mldev: explicitly disabled via build config 00:02:17.733 rib: explicitly disabled via build config 00:02:17.733 sched: explicitly disabled via build config 00:02:17.733 stack: explicitly disabled via build config 00:02:17.733 ipsec: explicitly disabled via build config 00:02:17.733 pdcp: explicitly disabled via build config 00:02:17.733 fib: explicitly disabled via build config 00:02:17.733 port: explicitly disabled via build config 00:02:17.733 pdump: explicitly disabled via build config 00:02:17.733 table: explicitly disabled via build config 00:02:17.733 pipeline: explicitly disabled via build config 00:02:17.733 graph: explicitly disabled via build config 00:02:17.733 node: explicitly disabled via build config 00:02:17.733 00:02:17.733 drivers: 00:02:17.733 common/cpt: not in enabled drivers build config 00:02:17.733 common/dpaax: not in enabled drivers build config 00:02:17.733 common/iavf: not in enabled drivers build config 00:02:17.733 common/idpf: not in enabled drivers build config 00:02:17.733 common/ionic: not in enabled drivers build config 00:02:17.733 common/mvep: not in enabled drivers build config 00:02:17.733 common/octeontx: not in enabled drivers build config 00:02:17.733 bus/cdx: not in enabled drivers 
build config 00:02:17.733 bus/dpaa: not in enabled drivers build config 00:02:17.733 bus/fslmc: not in enabled drivers build config 00:02:17.733 bus/ifpga: not in enabled drivers build config 00:02:17.733 bus/platform: not in enabled drivers build config 00:02:17.733 bus/uacce: not in enabled drivers build config 00:02:17.733 bus/vmbus: not in enabled drivers build config 00:02:17.733 common/cnxk: not in enabled drivers build config 00:02:17.733 common/nfp: not in enabled drivers build config 00:02:17.733 common/nitrox: not in enabled drivers build config 00:02:17.733 common/sfc_efx: not in enabled drivers build config 00:02:17.733 mempool/bucket: not in enabled drivers build config 00:02:17.733 mempool/cnxk: not in enabled drivers build config 00:02:17.733 mempool/dpaa: not in enabled drivers build config 00:02:17.733 mempool/dpaa2: not in enabled drivers build config 00:02:17.733 mempool/octeontx: not in enabled drivers build config 00:02:17.733 mempool/stack: not in enabled drivers build config 00:02:17.733 dma/cnxk: not in enabled drivers build config 00:02:17.733 dma/dpaa: not in enabled drivers build config 00:02:17.733 dma/dpaa2: not in enabled drivers build config 00:02:17.733 dma/hisilicon: not in enabled drivers build config 00:02:17.733 dma/idxd: not in enabled drivers build config 00:02:17.733 dma/ioat: not in enabled drivers build config 00:02:17.733 dma/skeleton: not in enabled drivers build config 00:02:17.733 net/af_packet: not in enabled drivers build config 00:02:17.733 net/af_xdp: not in enabled drivers build config 00:02:17.733 net/ark: not in enabled drivers build config 00:02:17.733 net/atlantic: not in enabled drivers build config 00:02:17.733 net/avp: not in enabled drivers build config 00:02:17.733 net/axgbe: not in enabled drivers build config 00:02:17.733 net/bnx2x: not in enabled drivers build config 00:02:17.733 net/bnxt: not in enabled drivers build config 00:02:17.733 net/bonding: not in enabled drivers build config 00:02:17.733 
net/cnxk: not in enabled drivers build config 00:02:17.733 net/cpfl: not in enabled drivers build config 00:02:17.733 net/cxgbe: not in enabled drivers build config 00:02:17.733 net/dpaa: not in enabled drivers build config 00:02:17.733 net/dpaa2: not in enabled drivers build config 00:02:17.733 net/e1000: not in enabled drivers build config 00:02:17.733 net/ena: not in enabled drivers build config 00:02:17.733 net/enetc: not in enabled drivers build config 00:02:17.733 net/enetfec: not in enabled drivers build config 00:02:17.733 net/enic: not in enabled drivers build config 00:02:17.733 net/failsafe: not in enabled drivers build config 00:02:17.733 net/fm10k: not in enabled drivers build config 00:02:17.733 net/gve: not in enabled drivers build config 00:02:17.733 net/hinic: not in enabled drivers build config 00:02:17.733 net/hns3: not in enabled drivers build config 00:02:17.733 net/i40e: not in enabled drivers build config 00:02:17.733 net/iavf: not in enabled drivers build config 00:02:17.733 net/ice: not in enabled drivers build config 00:02:17.733 net/idpf: not in enabled drivers build config 00:02:17.733 net/igc: not in enabled drivers build config 00:02:17.733 net/ionic: not in enabled drivers build config 00:02:17.733 net/ipn3ke: not in enabled drivers build config 00:02:17.733 net/ixgbe: not in enabled drivers build config 00:02:17.733 net/mana: not in enabled drivers build config 00:02:17.733 net/memif: not in enabled drivers build config 00:02:17.733 net/mlx4: not in enabled drivers build config 00:02:17.733 net/mlx5: not in enabled drivers build config 00:02:17.733 net/mvneta: not in enabled drivers build config 00:02:17.733 net/mvpp2: not in enabled drivers build config 00:02:17.733 net/netvsc: not in enabled drivers build config 00:02:17.733 net/nfb: not in enabled drivers build config 00:02:17.733 net/nfp: not in enabled drivers build config 00:02:17.733 net/ngbe: not in enabled drivers build config 00:02:17.733 net/null: not in enabled drivers 
build config 00:02:17.733 net/octeontx: not in enabled drivers build config 00:02:17.733 net/octeon_ep: not in enabled drivers build config 00:02:17.733 net/pcap: not in enabled drivers build config 00:02:17.733 net/pfe: not in enabled drivers build config 00:02:17.733 net/qede: not in enabled drivers build config 00:02:17.733 net/ring: not in enabled drivers build config 00:02:17.733 net/sfc: not in enabled drivers build config 00:02:17.733 net/softnic: not in enabled drivers build config 00:02:17.733 net/tap: not in enabled drivers build config 00:02:17.733 net/thunderx: not in enabled drivers build config 00:02:17.733 net/txgbe: not in enabled drivers build config 00:02:17.733 net/vdev_netvsc: not in enabled drivers build config 00:02:17.733 net/vhost: not in enabled drivers build config 00:02:17.733 net/virtio: not in enabled drivers build config 00:02:17.733 net/vmxnet3: not in enabled drivers build config 00:02:17.733 raw/*: missing internal dependency, "rawdev" 00:02:17.733 crypto/armv8: not in enabled drivers build config 00:02:17.733 crypto/bcmfs: not in enabled drivers build config 00:02:17.733 crypto/caam_jr: not in enabled drivers build config 00:02:17.733 crypto/ccp: not in enabled drivers build config 00:02:17.733 crypto/cnxk: not in enabled drivers build config 00:02:17.733 crypto/dpaa_sec: not in enabled drivers build config 00:02:17.733 crypto/dpaa2_sec: not in enabled drivers build config 00:02:17.733 crypto/mvsam: not in enabled drivers build config 00:02:17.733 crypto/nitrox: not in enabled drivers build config 00:02:17.733 crypto/null: not in enabled drivers build config 00:02:17.733 crypto/octeontx: not in enabled drivers build config 00:02:17.733 crypto/openssl: not in enabled drivers build config 00:02:17.733 crypto/scheduler: not in enabled drivers build config 00:02:17.733 crypto/uadk: not in enabled drivers build config 00:02:17.733 crypto/virtio: not in enabled drivers build config 00:02:17.733 compress/nitrox: not in enabled drivers 
build config 00:02:17.733 compress/octeontx: not in enabled drivers build config 00:02:17.733 compress/zlib: not in enabled drivers build config 00:02:17.734 regex/*: missing internal dependency, "regexdev" 00:02:17.734 ml/*: missing internal dependency, "mldev" 00:02:17.734 vdpa/ifc: not in enabled drivers build config 00:02:17.734 vdpa/mlx5: not in enabled drivers build config 00:02:17.734 vdpa/nfp: not in enabled drivers build config 00:02:17.734 vdpa/sfc: not in enabled drivers build config 00:02:17.734 event/*: missing internal dependency, "eventdev" 00:02:17.734 baseband/*: missing internal dependency, "bbdev" 00:02:17.734 gpu/*: missing internal dependency, "gpudev" 00:02:17.734 00:02:17.734 00:02:17.993 Build targets in project: 115 00:02:17.993 00:02:17.993 DPDK 24.03.0 00:02:17.993 00:02:17.993 User defined options 00:02:17.993 buildtype : debug 00:02:17.993 default_library : shared 00:02:17.993 libdir : lib 00:02:17.993 prefix : /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:02:17.993 b_sanitize : address 00:02:17.993 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -I/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib -DNO_COMPAT_IMB_API_053 -I/var/jenkins/workspace/crypto-phy-autotest/spdk/isa-l -I/var/jenkins/workspace/crypto-phy-autotest/spdk/isalbuild -fPIC -Werror 00:02:17.993 c_link_args : -L/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib -L/var/jenkins/workspace/crypto-phy-autotest/spdk/isa-l/.libs -lisal 00:02:17.993 cpu_instruction_set: native 00:02:17.993 disable_apps : test-acl,graph,test-dma-perf,test-gpudev,test-crypto-perf,test,test-security-perf,test-mldev,proc-info,test-pmd,test-pipeline,test-eventdev,test-cmdline,test-fib,pdump,test-flow-perf,test-bbdev,test-regex,test-sad,dumpcap,test-compress-perf 00:02:17.993 disable_libs : 
acl,bitratestats,graph,bbdev,jobstats,ipsec,gso,table,rib,node,mldev,sched,ip_frag,cfgfile,port,pcapng,pdcp,argparse,stack,eventdev,regexdev,distributor,gro,efd,pipeline,bpf,dispatcher,lpm,metrics,latencystats,pdump,gpudev,member,fib,rawdev 00:02:17.993 enable_docs : false 00:02:17.993 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,crypto/qat,compress/qat,common/qat,common/mlx5,bus/auxiliary,crypto,crypto/aesni_mb,crypto/mlx5,crypto/ipsec_mb,compress,compress/isal,compress/mlx5 00:02:17.993 enable_kmods : false 00:02:17.993 max_lcores : 128 00:02:17.993 tests : false 00:02:17.993 00:02:17.993 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:18.601 ninja: Entering directory `/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build-tmp' 00:02:18.601 [1/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:18.601 [2/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:18.601 [3/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:18.601 [4/378] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:18.601 [5/378] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:18.601 [6/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:18.601 [7/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:18.601 [8/378] Linking static target lib/librte_kvargs.a 00:02:18.601 [9/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:18.601 [10/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:18.601 [11/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:18.601 [12/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:18.601 [13/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:18.601 [14/378] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:18.883 [15/378] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:18.883 [16/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:18.883 [17/378] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:18.883 [18/378] Linking static target lib/librte_log.a 00:02:18.883 [19/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:18.883 [20/378] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:18.884 [21/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:18.884 [22/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:18.884 [23/378] Linking static target lib/librte_pci.a 00:02:18.884 [24/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:18.884 [25/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:18.884 [26/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:18.884 [27/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:18.884 [28/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:18.884 [29/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:18.884 [30/378] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:18.884 [31/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:19.147 [32/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:19.147 [33/378] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:19.147 [34/378] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:19.147 [35/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:19.147 [36/378] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:19.147 [37/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:19.147 [38/378] 
Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:19.147 [39/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:19.147 [40/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:19.147 [41/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:19.147 [42/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:19.147 [43/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:19.147 [44/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:19.147 [45/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:19.147 [46/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:19.147 [47/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:19.147 [48/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:19.147 [49/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:19.147 [50/378] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:19.147 [51/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:19.411 [52/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:19.411 [53/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:19.411 [54/378] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.411 [55/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:19.411 [56/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:19.411 [57/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:19.411 [58/378] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.411 [59/378] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:19.411 [60/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:19.411 [61/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:19.411 [62/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:19.411 [63/378] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:19.411 [64/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:19.411 [65/378] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:19.411 [66/378] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:19.411 [67/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:19.411 [68/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:19.411 [69/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:19.411 [70/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:19.411 [71/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:19.411 [72/378] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:19.411 [73/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:19.411 [74/378] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:19.411 [75/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:19.411 [76/378] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:19.411 [77/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:19.411 [78/378] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:19.411 [79/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:19.411 [80/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:19.411 [81/378] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:19.411 [82/378] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:19.411 [83/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:19.411 [84/378] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:19.411 [85/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:19.411 [86/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:19.411 [87/378] Linking static target lib/librte_meter.a
00:02:19.411 [88/378] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:19.411 [89/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:19.411 [90/378] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:19.411 [91/378] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:02:19.411 [92/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:19.411 [93/378] Linking static target lib/librte_telemetry.a
00:02:19.411 [94/378] Linking static target lib/librte_ring.a
00:02:19.411 [95/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:19.411 [96/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:19.411 [97/378] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:19.411 [98/378] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:19.411 [99/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:19.411 [100/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:19.411 [101/378] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:19.411 [102/378] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_auxiliary_params.c.o
00:02:19.411 [103/378] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:19.411 [104/378] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:19.411 [105/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:19.411 [106/378] Linking static target lib/librte_cmdline.a
00:02:19.411 [107/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:19.411 [108/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:19.411 [109/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:19.411 [110/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:19.411 [111/378] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:19.411 [112/378] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:19.411 [113/378] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:19.411 [114/378] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:19.411 [115/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:19.672 [116/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_logs.c.o
00:02:19.672 [117/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:02:19.672 [118/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:19.672 [119/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:19.672 [120/378] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:02:19.672 [121/378] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:19.672 [122/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:19.672 [123/378] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:19.672 [124/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:19.672 [125/378] Linking static target lib/librte_timer.a
00:02:19.672 [126/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:19.672 [127/378] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:19.672 [128/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:19.672 [129/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:19.672 [130/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:19.672 [131/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:19.672 [132/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:19.672 [133/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:19.672 [134/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:19.672 [135/378] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:19.672 [136/378] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:19.672 [137/378] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:02:19.672 [138/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:19.672 [139/378] Linking static target lib/librte_mempool.a
00:02:19.672 [140/378] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:19.672 [141/378] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:19.963 [142/378] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:19.963 [143/378] Linking static target lib/librte_net.a
00:02:19.963 [144/378] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:19.963 [145/378] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:02:19.963 [146/378] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:19.963 [147/378] Linking static target lib/librte_dmadev.a
00:02:19.963 [148/378] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:02:19.963 [149/378] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:19.963 [150/378] Linking static target lib/librte_rcu.a
00:02:19.963 [151/378] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:02:19.963 [152/378] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:19.963 [153/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:19.963 [154/378] Linking target lib/librte_log.so.24.1
00:02:19.963 [155/378] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:19.963 [156/378] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:19.963 [157/378] Linking static target lib/librte_compressdev.a
00:02:19.963 [158/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_glue.c.o
00:02:19.963 [159/378] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_linux_auxiliary.c.o
00:02:19.963 [160/378] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:19.963 [161/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:19.963 [162/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:19.963 [163/378] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:02:19.963 [164/378] Linking static target lib/librte_eal.a
00:02:19.963 [165/378] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_auxiliary_common.c.o
00:02:20.221 [166/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen2.c.o
00:02:20.221 [167/378] Linking static target drivers/libtmp_rte_bus_auxiliary.a
00:02:20.221 [168/378] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:02:20.221 [169/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:02:20.221 [170/378] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:02:20.221 [171/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:02:20.221 [172/378] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:02:20.221 [173/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:20.221 [174/378] Linking static target drivers/libtmp_rte_bus_vdev.a
00:02:20.221 [175/378] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:02:20.221 [176/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_pf2vf.c.o
00:02:20.221 [177/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_common.c.o
00:02:20.221 [178/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:20.221 [179/378] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:02:20.221 [180/378] Linking static target lib/librte_power.a
00:02:20.221 [181/378] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:20.221 [182/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:02:20.221 [183/378] Linking target lib/librte_kvargs.so.24.1
00:02:20.221 [184/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_mp.c.o
00:02:20.221 [185/378] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:20.221 [186/378] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:20.221 [187/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen2.c.o
00:02:20.221 [188/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen1.c.o
00:02:20.221 [189/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:02:20.221 [190/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen_lce.c.o
00:02:20.221 [191/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_pci.c.o
00:02:20.221 [192/378] Linking target lib/librte_telemetry.so.24.1
00:02:20.221 [193/378] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:20.221 [194/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen3.c.o
00:02:20.221 [195/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen4.c.o
00:02:20.221 [196/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_utils.c.o
00:02:20.221 [197/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen3.c.o
00:02:20.221 [198/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen1.c.o
00:02:20.221 [199/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen5.c.o
00:02:20.221 [200/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen5.c.o
00:02:20.221 [201/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_qat_comp_pmd.c.o
00:02:20.481 [202/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen5.c.o
00:02:20.481 [203/378] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:02:20.481 [204/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_ipsec_mb_ops.c.o
00:02:20.481 [205/378] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:02:20.481 [206/378] Generating drivers/rte_bus_auxiliary.pmd.c with a custom command
00:02:20.481 [207/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:02:20.481 [208/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen4.c.o
00:02:20.481 [209/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_auxiliary.c.o
00:02:20.481 [210/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_malloc.c.o
00:02:20.481 [211/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_asym_pmd_gen1.c.o
00:02:20.481 [212/378] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:02:20.481 [213/378] Linking static target drivers/libtmp_rte_bus_pci.a
00:02:20.481 [214/378] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:02:20.481 [215/378] Compiling C object drivers/librte_bus_auxiliary.so.24.1.p/meson-generated_.._rte_bus_auxiliary.pmd.c.o
00:02:20.481 [216/378] Compiling C object drivers/librte_bus_auxiliary.a.p/meson-generated_.._rte_bus_auxiliary.pmd.c.o
00:02:20.481 [217/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_verbs.c.o
00:02:20.481 [218/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_os.c.o
00:02:20.481 [219/378] Linking static target drivers/librte_bus_auxiliary.a
00:02:20.481 [220/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_device.c.o
00:02:20.481 [221/378] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:02:20.481 [222/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen2.c.o
00:02:20.481 [223/378] Linking static target lib/librte_security.a
00:02:20.481 [224/378] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_dek.c.o
00:02:20.481 [225/378] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:02:20.481 [226/378] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:20.481 [227/378] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:20.481 [228/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:20.481 [229/378] Linking static target drivers/librte_bus_vdev.a
00:02:20.481 [230/378] Linking static target lib/librte_mbuf.a
00:02:20.481 [231/378] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:02:20.481 [232/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_crypto.c.o
00:02:20.481 [233/378] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto.c.o
00:02:20.481 [234/378] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:02:20.481 [235/378] Compiling C object drivers/libtmp_rte_compress_isal.a.p/compress_isal_isal_compress_pmd_ops.c.o
00:02:20.481 [236/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_devx.c.o
00:02:20.481 [237/378] Linking static target lib/librte_reorder.a
00:02:20.481 [238/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_nl.c.o
00:02:20.481 [239/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_ipsec_mb_private.c.o
00:02:20.481 [240/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common.c.o
00:02:20.481 [241/378] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:02:20.481 [242/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen_lce.c.o
00:02:20.481 [243/378] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:20.481 [244/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_sym.c.o
00:02:20.481 [245/378] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:02:20.740 [246/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_qp.c.o
00:02:20.740 [247/378] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:02:20.740 [248/378] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:20.740 [249/378] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_xts.c.o
00:02:20.740 [250/378] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:20.740 [251/378] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_gcm.c.o
00:02:20.740 [252/378] Linking static target drivers/librte_bus_pci.a
00:02:20.740 [253/378] Linking static target drivers/libtmp_rte_crypto_mlx5.a
00:02:20.740 [254/378] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:20.740 [255/378] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:20.740 [256/378] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:20.740 [257/378] Linking static target lib/librte_hash.a
00:02:20.740 [258/378] Generating drivers/rte_bus_auxiliary.sym_chk with a custom command (wrapped by meson to capture output)
00:02:20.740 [259/378] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:02:20.740 [260/378] Linking static target drivers/libtmp_rte_mempool_ring.a
00:02:20.740 [261/378] Compiling C object drivers/libtmp_rte_compress_isal.a.p/compress_isal_isal_compress_pmd.c.o
00:02:20.740 [262/378] Compiling C object drivers/libtmp_rte_compress_mlx5.a.p/compress_mlx5_mlx5_compress.c.o
00:02:20.740 [263/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_mr.c.o
00:02:20.740 [264/378] Linking static target drivers/libtmp_rte_compress_isal.a
00:02:20.740 [265/378] Linking static target drivers/libtmp_rte_compress_mlx5.a
00:02:21.000 [266/378] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:21.000 [267/378] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:21.000 [268/378] Generating drivers/rte_crypto_mlx5.pmd.c with a custom command
00:02:21.000 [269/378] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:02:21.000 [270/378] Linking static target lib/librte_cryptodev.a
00:02:21.000 [271/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_zuc.c.o
00:02:21.000 [272/378] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:02:21.000 [273/378] Compiling C object drivers/librte_crypto_mlx5.so.24.1.p/meson-generated_.._rte_crypto_mlx5.pmd.c.o
00:02:21.000 [274/378] Compiling C object drivers/librte_crypto_mlx5.a.p/meson-generated_.._rte_crypto_mlx5.pmd.c.o
00:02:21.000 [275/378] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:02:21.000 [276/378] Linking static target drivers/librte_crypto_mlx5.a
00:02:21.000 [277/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_aesni_gcm.c.o
00:02:21.000 [278/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_chacha_poly.c.o
00:02:21.000 [279/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_kasumi.c.o
00:02:21.000 [280/378] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:21.000 [281/378] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:21.000 [282/378] Linking static target drivers/librte_mempool_ring.a
00:02:21.000 [283/378] Generating drivers/rte_compress_mlx5.pmd.c with a custom command
00:02:21.000 [284/378] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:02:21.000 [285/378] Generating drivers/rte_compress_isal.pmd.c with a custom command
00:02:21.000 [286/378] Compiling C object drivers/librte_compress_isal.a.p/meson-generated_.._rte_compress_isal.pmd.c.o
00:02:21.000 [287/378] Compiling C object drivers/librte_compress_isal.so.24.1.p/meson-generated_.._rte_compress_isal.pmd.c.o
00:02:21.000 [288/378] Compiling C object drivers/librte_compress_mlx5.so.24.1.p/meson-generated_.._rte_compress_mlx5.pmd.c.o
00:02:21.000 [289/378] Compiling C object drivers/librte_compress_mlx5.a.p/meson-generated_.._rte_compress_mlx5.pmd.c.o
00:02:21.259 [290/378] Linking static target drivers/librte_compress_isal.a
00:02:21.259 [291/378] Linking static target drivers/librte_compress_mlx5.a
00:02:21.259 [292/378] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:02:21.259 [293/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:21.259 [294/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen4.c.o
00:02:21.259 [295/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_qat_comp.c.o
00:02:21.259 [296/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_sym_session.c.o
00:02:21.259 [297/378] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:21.518 [298/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_snow3g.c.o
00:02:21.518 [299/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_aesni_mb.c.o
00:02:21.518 [300/378] Linking static target drivers/libtmp_rte_crypto_ipsec_mb.a
00:02:21.518 [301/378] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:21.518 [302/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_devx_cmds.c.o
00:02:21.518 [303/378] Linking static target drivers/libtmp_rte_common_mlx5.a
00:02:21.777 [304/378] Generating drivers/rte_crypto_ipsec_mb.pmd.c with a custom command
00:02:21.777 [305/378] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:02:21.777 [306/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen3.c.o
00:02:21.777 [307/378] Compiling C object drivers/librte_crypto_ipsec_mb.a.p/meson-generated_.._rte_crypto_ipsec_mb.pmd.c.o
00:02:21.777 [308/378] Compiling C object drivers/librte_crypto_ipsec_mb.so.24.1.p/meson-generated_.._rte_crypto_ipsec_mb.pmd.c.o
00:02:21.777 [309/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_sym_pmd_gen1.c.o
00:02:21.777 [310/378] Linking static target drivers/librte_crypto_ipsec_mb.a
00:02:22.036 [311/378] Generating drivers/rte_common_mlx5.pmd.c with a custom command
00:02:22.036 [312/378] Compiling C object drivers/librte_common_mlx5.a.p/meson-generated_.._rte_common_mlx5.pmd.c.o
00:02:22.036 [313/378] Compiling C object drivers/librte_common_mlx5.so.24.1.p/meson-generated_.._rte_common_mlx5.pmd.c.o
00:02:22.036 [314/378] Linking static target drivers/librte_common_mlx5.a
00:02:22.973 [315/378] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:02:23.232 [316/378] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:24.665 [317/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_asym.c.o
00:02:24.665 [318/378] Linking static target drivers/libtmp_rte_common_qat.a
00:02:24.665 [319/378] Generating drivers/rte_common_qat.pmd.c with a custom command
00:02:24.925 [320/378] Compiling C object drivers/librte_common_qat.so.24.1.p/meson-generated_.._rte_common_qat.pmd.c.o
00:02:24.925 [321/378] Compiling C object drivers/librte_common_qat.a.p/meson-generated_.._rte_common_qat.pmd.c.o
00:02:24.925 [322/378] Linking static target drivers/librte_common_qat.a
00:02:24.925 [323/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:24.925 [324/378] Linking static target lib/librte_ethdev.a
00:02:26.830 [325/378] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:02:26.830 [326/378] Linking static target lib/librte_vhost.a
00:02:27.767 [327/378] Generating drivers/rte_common_mlx5.sym_chk with a custom command (wrapped by meson to capture output)
00:02:29.144 [328/378] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:02:33.336 [329/378] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:02:33.336 [330/378] Linking target lib/librte_eal.so.24.1
00:02:33.336 [331/378] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols
00:02:33.336 [332/378] Linking target lib/librte_timer.so.24.1
00:02:33.336 [333/378] Linking target lib/librte_meter.so.24.1
00:02:33.336 [334/378] Linking target lib/librte_dmadev.so.24.1
00:02:33.336 [335/378] Linking target lib/librte_ring.so.24.1
00:02:33.336 [336/378] Linking target lib/librte_pci.so.24.1
00:02:33.336 [337/378] Linking target drivers/librte_bus_auxiliary.so.24.1
00:02:33.336 [338/378] Linking target drivers/librte_bus_vdev.so.24.1
00:02:33.336 [339/378] Generating symbol file drivers/librte_bus_auxiliary.so.24.1.p/librte_bus_auxiliary.so.24.1.symbols
00:02:33.336 [340/378] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols
00:02:33.336 [341/378] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols
00:02:33.336 [342/378] Generating symbol file drivers/librte_bus_vdev.so.24.1.p/librte_bus_vdev.so.24.1.symbols
00:02:33.336 [343/378] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols
00:02:33.336 [344/378] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols
00:02:33.336 [345/378] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols
00:02:33.594 [346/378] Linking target lib/librte_rcu.so.24.1
00:02:33.594 [347/378] Linking target lib/librte_mempool.so.24.1
00:02:33.594 [348/378] Linking target drivers/librte_bus_pci.so.24.1
00:02:33.594 [349/378] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols
00:02:33.594 [350/378] Generating symbol file drivers/librte_bus_pci.so.24.1.p/librte_bus_pci.so.24.1.symbols
00:02:33.594 [351/378] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols
00:02:33.853 [352/378] Linking target lib/librte_mbuf.so.24.1
00:02:33.853 [353/378] Linking target drivers/librte_mempool_ring.so.24.1
00:02:33.853 [354/378] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols
00:02:33.853 [355/378] Linking target lib/librte_cryptodev.so.24.1
00:02:33.853 [356/378] Linking target lib/librte_compressdev.so.24.1
00:02:33.853 [357/378] Linking target lib/librte_reorder.so.24.1
00:02:33.853 [358/378] Linking target lib/librte_net.so.24.1
00:02:34.112 [359/378] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols
00:02:34.112 [360/378] Generating symbol file lib/librte_compressdev.so.24.1.p/librte_compressdev.so.24.1.symbols
00:02:34.112 [361/378] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols
00:02:34.112 [362/378] Linking target lib/librte_hash.so.24.1
00:02:34.112 [363/378] Linking target lib/librte_security.so.24.1
00:02:34.112 [364/378] Linking target lib/librte_cmdline.so.24.1
00:02:34.112 [365/378] Linking target drivers/librte_compress_isal.so.24.1
00:02:34.371 [366/378] Generating symbol file lib/librte_security.so.24.1.p/librte_security.so.24.1.symbols
00:02:34.371 [367/378] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols
00:02:34.371 [368/378] Linking target drivers/librte_common_mlx5.so.24.1
00:02:34.371 [369/378] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:34.630 [370/378] Generating symbol file drivers/librte_common_mlx5.so.24.1.p/librte_common_mlx5.so.24.1.symbols
00:02:34.630 [371/378] Linking target lib/librte_ethdev.so.24.1
00:02:34.630 [372/378] Linking target drivers/librte_crypto_ipsec_mb.so.24.1
00:02:34.630 [373/378] Linking target drivers/librte_common_qat.so.24.1
00:02:34.630 [374/378] Linking target drivers/librte_compress_mlx5.so.24.1
00:02:34.630 [375/378] Linking target drivers/librte_crypto_mlx5.so.24.1
00:02:34.630 [376/378] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols
00:02:34.890 [377/378] Linking target lib/librte_power.so.24.1
00:02:34.890 [378/378] Linking target lib/librte_vhost.so.24.1
00:02:34.890 INFO: autodetecting backend as ninja
00:02:34.890 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build-tmp -j 112
00:02:36.268 CC lib/ut_mock/mock.o
00:02:36.268 CC lib/log/log.o
00:02:36.268 CC lib/log/log_flags.o
00:02:36.268 CC lib/log/log_deprecated.o
00:02:36.268 CC lib/ut/ut.o
00:02:36.268 LIB libspdk_ut_mock.a
00:02:36.268 LIB libspdk_log.a
00:02:36.268 SO libspdk_ut_mock.so.6.0
00:02:36.268 LIB libspdk_ut.a
00:02:36.268 SO libspdk_log.so.7.0
00:02:36.268 SO libspdk_ut.so.2.0
00:02:36.268 SYMLINK libspdk_ut_mock.so
00:02:36.268 SYMLINK libspdk_ut.so
00:02:36.527 SYMLINK libspdk_log.so
00:02:36.786 CC lib/dma/dma.o
00:02:36.786 CC lib/util/base64.o
00:02:36.786 CC lib/util/crc16.o
00:02:36.786 CC lib/util/bit_array.o
00:02:36.786 CC lib/util/cpuset.o
00:02:36.786 CC lib/util/crc32.o
00:02:36.786 CC lib/util/crc32c.o
00:02:36.786 CC lib/util/dif.o
00:02:36.786 CC lib/util/crc64.o
00:02:36.786 CC lib/util/crc32_ieee.o
00:02:36.786 CC lib/util/file.o
00:02:36.786 CC lib/util/fd.o
00:02:36.786 CC lib/util/hexlify.o
00:02:36.786 CC lib/util/fd_group.o
00:02:36.786 CXX lib/trace_parser/trace.o
00:02:36.786 CC lib/util/iov.o
00:02:36.786 CC lib/util/math.o
00:02:36.786 CC lib/util/net.o
00:02:36.786 CC lib/util/pipe.o
00:02:36.786 CC lib/util/strerror_tls.o
00:02:36.786 CC lib/util/string.o
00:02:36.786 CC lib/ioat/ioat.o
00:02:36.786 CC lib/util/uuid.o
00:02:36.786 CC lib/util/xor.o
00:02:36.786 CC lib/util/zipf.o
00:02:37.046 CC lib/vfio_user/host/vfio_user.o
00:02:37.046 CC lib/vfio_user/host/vfio_user_pci.o
00:02:37.046 LIB libspdk_dma.a
00:02:37.046 SO libspdk_dma.so.4.0
00:02:37.046 SYMLINK libspdk_dma.so
00:02:37.046 LIB libspdk_ioat.a
00:02:37.305 SO libspdk_ioat.so.7.0
00:02:37.305 LIB libspdk_vfio_user.a
00:02:37.305 SYMLINK libspdk_ioat.so
00:02:37.305 SO libspdk_vfio_user.so.5.0
00:02:37.305 SYMLINK libspdk_vfio_user.so
00:02:37.564 LIB libspdk_util.a
00:02:37.564 SO libspdk_util.so.10.0
00:02:37.825 SYMLINK libspdk_util.so
00:02:37.825 LIB libspdk_trace_parser.a
00:02:37.825 SO libspdk_trace_parser.so.5.0
00:02:37.825 SYMLINK libspdk_trace_parser.so
00:02:38.084 CC lib/vmd/vmd.o
00:02:38.084 CC lib/vmd/led.o
00:02:38.084 CC lib/idxd/idxd.o
00:02:38.084 CC lib/idxd/idxd_user.o
00:02:38.084 CC lib/idxd/idxd_kernel.o
00:02:38.084 CC lib/rdma_utils/rdma_utils.o
00:02:38.084 CC lib/rdma_provider/common.o
00:02:38.084 CC lib/env_dpdk/env.o
00:02:38.084 CC lib/rdma_provider/rdma_provider_verbs.o
00:02:38.084 CC lib/env_dpdk/memory.o
00:02:38.084 CC lib/env_dpdk/pci.o
00:02:38.084 CC lib/env_dpdk/init.o
00:02:38.084 CC lib/env_dpdk/threads.o
00:02:38.084 CC lib/env_dpdk/pci_ioat.o
00:02:38.084 CC lib/json/json_parse.o
00:02:38.084 CC lib/env_dpdk/pci_virtio.o
00:02:38.084 CC lib/json/json_util.o
00:02:38.084 CC lib/env_dpdk/pci_vmd.o
00:02:38.084 CC lib/env_dpdk/pci_idxd.o
00:02:38.084 CC lib/json/json_write.o
00:02:38.084 CC lib/conf/conf.o
00:02:38.084 CC lib/env_dpdk/pci_event.o
00:02:38.084 CC lib/env_dpdk/sigbus_handler.o
00:02:38.084 CC lib/env_dpdk/pci_dpdk.o
00:02:38.084 CC lib/env_dpdk/pci_dpdk_2207.o
00:02:38.084 CC lib/env_dpdk/pci_dpdk_2211.o
00:02:38.084 CC lib/reduce/reduce.o
00:02:38.343 LIB libspdk_rdma_provider.a
00:02:38.343 SO libspdk_rdma_provider.so.6.0
00:02:38.343 LIB libspdk_conf.a
00:02:38.343 LIB libspdk_rdma_utils.a
00:02:38.343 SO libspdk_conf.so.6.0
00:02:38.343 SYMLINK libspdk_rdma_provider.so
00:02:38.343 SO libspdk_rdma_utils.so.1.0
00:02:38.343 LIB libspdk_json.a
00:02:38.602 SYMLINK libspdk_conf.so
00:02:38.602 SO libspdk_json.so.6.0
00:02:38.602 SYMLINK libspdk_rdma_utils.so
00:02:38.602 SYMLINK libspdk_json.so
00:02:38.862 LIB libspdk_idxd.a
00:02:38.862 SO libspdk_idxd.so.12.0
00:02:38.862 LIB libspdk_vmd.a
00:02:38.862 SO libspdk_vmd.so.6.0
00:02:38.862 SYMLINK libspdk_idxd.so
00:02:38.862 LIB libspdk_reduce.a
00:02:38.862 CC lib/jsonrpc/jsonrpc_server.o
00:02:38.862 CC lib/jsonrpc/jsonrpc_server_tcp.o
00:02:38.862 CC lib/jsonrpc/jsonrpc_client.o
00:02:38.862 CC lib/jsonrpc/jsonrpc_client_tcp.o
00:02:38.862 SO libspdk_reduce.so.6.1
00:02:39.121 SYMLINK libspdk_vmd.so
00:02:39.121 SYMLINK libspdk_reduce.so
00:02:39.381 LIB libspdk_jsonrpc.a
00:02:39.640 SO libspdk_jsonrpc.so.6.0
00:02:39.640 SYMLINK libspdk_jsonrpc.so
00:02:39.640 LIB libspdk_env_dpdk.a
00:02:39.902 SO libspdk_env_dpdk.so.15.0
00:02:39.902 CC lib/rpc/rpc.o
00:02:39.902 SYMLINK libspdk_env_dpdk.so
00:02:40.168 LIB libspdk_rpc.a
00:02:40.168 SO libspdk_rpc.so.6.0
00:02:40.168 SYMLINK libspdk_rpc.so
00:02:40.736 CC lib/notify/notify.o
00:02:40.737 CC lib/notify/notify_rpc.o
00:02:40.737 CC lib/trace/trace.o
00:02:40.737 CC lib/trace/trace_rpc.o
00:02:40.737 CC lib/trace/trace_flags.o
00:02:40.737 CC lib/keyring/keyring.o
00:02:40.737 CC lib/keyring/keyring_rpc.o
00:02:40.737 LIB libspdk_notify.a
00:02:40.737 SO libspdk_notify.so.6.0
00:02:40.996 LIB libspdk_keyring.a
00:02:40.996 SYMLINK libspdk_notify.so
00:02:40.996 LIB libspdk_trace.a
00:02:40.996 SO libspdk_keyring.so.1.0
00:02:40.996 SO libspdk_trace.so.10.0
00:02:40.996 SYMLINK libspdk_keyring.so
00:02:40.996 SYMLINK libspdk_trace.so
00:02:41.254 CC lib/thread/iobuf.o
00:02:41.254 CC lib/thread/thread.o
00:02:41.513 CC lib/sock/sock.o
00:02:41.513 CC lib/sock/sock_rpc.o
00:02:41.773 LIB libspdk_sock.a
00:02:42.032 SO libspdk_sock.so.10.0
00:02:42.032 SYMLINK libspdk_sock.so
00:02:42.291 CC lib/nvme/nvme_ctrlr_cmd.o
00:02:42.291 CC lib/nvme/nvme_ctrlr.o
00:02:42.291 CC lib/nvme/nvme_fabric.o
00:02:42.291 CC lib/nvme/nvme_ns_cmd.o
00:02:42.291 CC lib/nvme/nvme_ns.o
00:02:42.291 CC lib/nvme/nvme_pcie_common.o
00:02:42.291 CC lib/nvme/nvme_pcie.o
00:02:42.291 CC lib/nvme/nvme_qpair.o
00:02:42.291 CC lib/nvme/nvme.o
00:02:42.291 CC lib/nvme/nvme_quirks.o
00:02:42.291 CC lib/nvme/nvme_transport.o
00:02:42.291 CC lib/nvme/nvme_discovery.o
00:02:42.291 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:02:42.291 CC lib/nvme/nvme_ns_ocssd_cmd.o
00:02:42.291 CC lib/nvme/nvme_tcp.o
00:02:42.291 CC lib/nvme/nvme_opal.o
00:02:42.291 CC lib/nvme/nvme_io_msg.o
00:02:42.291 CC lib/nvme/nvme_poll_group.o
00:02:42.291 CC lib/nvme/nvme_auth.o
00:02:42.291 CC lib/nvme/nvme_zns.o
00:02:42.291 CC lib/nvme/nvme_stubs.o
00:02:42.291 CC lib/nvme/nvme_cuse.o
00:02:42.291 CC lib/nvme/nvme_rdma.o
00:02:43.228 LIB libspdk_thread.a
00:02:43.228 SO libspdk_thread.so.10.1
00:02:43.228 SYMLINK libspdk_thread.so
00:02:43.797 CC lib/blob/blobstore.o
00:02:43.797 CC lib/init/json_config.o
00:02:43.797 CC lib/init/subsystem_rpc.o
00:02:43.797 CC lib/init/subsystem.o
00:02:43.797 CC lib/blob/request.o
00:02:43.797 CC lib/init/rpc.o
00:02:43.797 CC lib/blob/zeroes.o
00:02:43.797 CC lib/blob/blob_bs_dev.o
00:02:43.797 CC lib/accel/accel.o
00:02:43.797 CC lib/accel/accel_rpc.o
00:02:43.797 CC lib/accel/accel_sw.o
00:02:43.797 CC lib/virtio/virtio.o
00:02:43.797 CC lib/virtio/virtio_vfio_user.o
00:02:43.797 CC lib/virtio/virtio_vhost_user.o
00:02:43.797 CC lib/virtio/virtio_pci.o
00:02:44.057 LIB libspdk_init.a
00:02:44.057 SO libspdk_init.so.5.0
00:02:44.057 LIB libspdk_virtio.a
00:02:44.057 SYMLINK libspdk_init.so
00:02:44.057 SO libspdk_virtio.so.7.0
00:02:44.317 SYMLINK libspdk_virtio.so
00:02:44.317 CC lib/event/app.o
00:02:44.317 CC lib/event/log_rpc.o
00:02:44.317 CC lib/event/reactor.o
00:02:44.317 CC lib/event/app_rpc.o
00:02:44.317 CC lib/event/scheduler_static.o
00:02:44.884 LIB libspdk_accel.a
00:02:44.884 SO libspdk_accel.so.16.0
00:02:44.884 LIB libspdk_event.a
00:02:44.884 SYMLINK libspdk_accel.so
00:02:45.143 SO libspdk_event.so.14.0
00:02:45.143 SYMLINK libspdk_event.so
00:02:45.403 CC lib/bdev/bdev.o
00:02:45.403 CC lib/bdev/bdev_rpc.o
00:02:45.403 CC lib/bdev/bdev_zone.o
00:02:45.403 CC lib/bdev/part.o
00:02:45.403 CC lib/bdev/scsi_nvme.o
00:02:46.342 LIB libspdk_nvme.a
00:02:46.602 SO libspdk_nvme.so.13.1
00:02:46.602 LIB libspdk_blob.a
00:02:46.602 SO libspdk_blob.so.11.0
00:02:46.862 SYMLINK libspdk_blob.so
00:02:46.862 SYMLINK libspdk_nvme.so
00:02:47.122 CC lib/blobfs/blobfs.o
00:02:47.122 CC lib/blobfs/tree.o
00:02:47.122 CC lib/lvol/lvol.o
00:02:48.061 LIB libspdk_lvol.a
00:02:48.061 SO libspdk_lvol.so.10.0
00:02:48.061 SYMLINK libspdk_lvol.so
00:02:48.061 LIB libspdk_blobfs.a
00:02:48.061 SO libspdk_blobfs.so.10.0
00:02:48.321 SYMLINK libspdk_blobfs.so
00:02:48.321 LIB libspdk_bdev.a
00:02:48.580 SO libspdk_bdev.so.16.0
00:02:48.580 SYMLINK libspdk_bdev.so
00:02:49.150 CC lib/scsi/dev.o
00:02:49.150 CC lib/scsi/lun.o
00:02:49.150 CC lib/scsi/port.o
00:02:49.150 CC lib/scsi/scsi_pr.o
00:02:49.150 CC lib/scsi/scsi.o
00:02:49.150 CC lib/scsi/scsi_bdev.o
00:02:49.150 CC lib/scsi/scsi_rpc.o
00:02:49.150 CC lib/scsi/task.o
00:02:49.150 CC lib/ublk/ublk.o
00:02:49.150 CC lib/ublk/ublk_rpc.o
00:02:49.150 CC lib/nvmf/ctrlr.o
00:02:49.150 CC lib/nbd/nbd.o
00:02:49.150 CC lib/nvmf/ctrlr_discovery.o
00:02:49.150 CC lib/nbd/nbd_rpc.o
00:02:49.150 CC lib/nvmf/subsystem.o
00:02:49.150 CC lib/nvmf/ctrlr_bdev.o
00:02:49.150 CC lib/nvmf/nvmf.o
00:02:49.150 CC lib/nvmf/nvmf_rpc.o
00:02:49.150 CC lib/nvmf/tcp.o
00:02:49.150 CC lib/nvmf/transport.o
00:02:49.150 CC lib/nvmf/stubs.o
00:02:49.150 CC lib/nvmf/mdns_server.o
00:02:49.150 CC lib/nvmf/auth.o
00:02:49.150 CC lib/ftl/ftl_core.o
00:02:49.150 CC lib/nvmf/rdma.o
00:02:49.150 CC lib/ftl/ftl_init.o
00:02:49.150 CC lib/ftl/ftl_layout.o
00:02:49.150 CC lib/ftl/ftl_debug.o
00:02:49.150 CC lib/ftl/ftl_io.o
00:02:49.150 CC lib/ftl/ftl_sb.o
00:02:49.150 CC lib/ftl/ftl_l2p.o
00:02:49.150 CC lib/ftl/ftl_l2p_flat.o
00:02:49.150 CC lib/ftl/ftl_nv_cache.o
00:02:49.150 CC lib/ftl/ftl_band.o
00:02:49.150 CC lib/ftl/ftl_band_ops.o
00:02:49.150 CC lib/ftl/ftl_writer.o
00:02:49.150 CC lib/ftl/ftl_rq.o
00:02:49.150 CC lib/ftl/ftl_l2p_cache.o
00:02:49.150 CC lib/ftl/ftl_reloc.o
00:02:49.150 CC lib/ftl/ftl_p2l.o
00:02:49.150 CC lib/ftl/mngt/ftl_mngt.o
00:02:49.150 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:02:49.151 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:02:49.151 CC lib/ftl/mngt/ftl_mngt_startup.o
00:02:49.151 CC lib/ftl/mngt/ftl_mngt_md.o
00:02:49.151 CC lib/ftl/mngt/ftl_mngt_misc.o
00:02:49.151 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:02:49.151 CC lib/ftl/mngt/ftl_mngt_band.o
00:02:49.151 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:02:49.151 CC lib/ftl/mngt/ftl_mngt_self_test.o
00:02:49.151 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:02:49.151 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:02:49.151 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:02:49.151 CC lib/ftl/utils/ftl_conf.o
00:02:49.151 CC lib/ftl/utils/ftl_md.o
00:02:49.151 CC lib/ftl/utils/ftl_mempool.o
00:02:49.151 CC lib/ftl/utils/ftl_bitmap.o
00:02:49.151 CC lib/ftl/utils/ftl_property.o
00:02:49.151 CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:02:49.151 CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:02:49.151 CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:02:49.151 CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:02:49.151 CC lib/ftl/upgrade/ftl_band_upgrade.o
00:02:49.151 CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:02:49.151 CC lib/ftl/upgrade/ftl_sb_v3.o
00:02:49.151 CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:02:49.151 CC lib/ftl/upgrade/ftl_sb_v5.o
00:02:49.151 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:02:49.151 CC lib/ftl/nvc/ftl_nvc_dev.o
00:02:49.151 CC lib/ftl/base/ftl_base_dev.o
00:02:49.151 CC lib/ftl/ftl_trace.o
00:02:49.151 CC lib/ftl/base/ftl_base_bdev.o
00:02:49.792 LIB libspdk_scsi.a
00:02:49.792 LIB libspdk_nbd.a
00:02:49.792 SO libspdk_scsi.so.9.0
00:02:49.792 SO libspdk_nbd.so.7.0
00:02:49.792 SYMLINK libspdk_nbd.so
00:02:49.792 LIB libspdk_ublk.a
00:02:50.051 SYMLINK libspdk_scsi.so
00:02:50.051 SO libspdk_ublk.so.3.0
00:02:50.051 SYMLINK libspdk_ublk.so
00:02:50.311 CC lib/vhost/vhost.o
00:02:50.311 CC lib/vhost/vhost_rpc.o
00:02:50.311 CC lib/vhost/vhost_scsi.o
00:02:50.311 CC lib/vhost/vhost_blk.o
00:02:50.311 CC lib/vhost/rte_vhost_user.o
00:02:50.311 CC lib/iscsi/conn.o
00:02:50.311 CC lib/iscsi/iscsi.o
00:02:50.311 CC lib/iscsi/init_grp.o
00:02:50.311 CC
lib/iscsi/md5.o 00:02:50.311 CC lib/iscsi/param.o 00:02:50.311 CC lib/iscsi/portal_grp.o 00:02:50.311 CC lib/iscsi/iscsi_rpc.o 00:02:50.311 CC lib/iscsi/tgt_node.o 00:02:50.311 CC lib/iscsi/iscsi_subsystem.o 00:02:50.311 CC lib/iscsi/task.o 00:02:50.570 LIB libspdk_ftl.a 00:02:50.829 SO libspdk_ftl.so.9.0 00:02:51.088 SYMLINK libspdk_ftl.so 00:02:52.025 LIB libspdk_vhost.a 00:02:52.025 LIB libspdk_nvmf.a 00:02:52.025 SO libspdk_vhost.so.8.0 00:02:52.025 LIB libspdk_iscsi.a 00:02:52.025 SO libspdk_nvmf.so.19.0 00:02:52.025 SYMLINK libspdk_vhost.so 00:02:52.025 SO libspdk_iscsi.so.8.0 00:02:52.284 SYMLINK libspdk_iscsi.so 00:02:52.284 SYMLINK libspdk_nvmf.so 00:02:52.852 CC module/env_dpdk/env_dpdk_rpc.o 00:02:53.111 CC module/accel/error/accel_error.o 00:02:53.111 CC module/accel/error/accel_error_rpc.o 00:02:53.111 CC module/accel/iaa/accel_iaa.o 00:02:53.111 CC module/accel/iaa/accel_iaa_rpc.o 00:02:53.111 CC module/accel/dpdk_compressdev/accel_dpdk_compressdev_rpc.o 00:02:53.111 CC module/accel/dpdk_compressdev/accel_dpdk_compressdev.o 00:02:53.111 LIB libspdk_env_dpdk_rpc.a 00:02:53.111 CC module/accel/ioat/accel_ioat_rpc.o 00:02:53.111 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:53.111 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:53.111 CC module/accel/ioat/accel_ioat.o 00:02:53.111 CC module/scheduler/gscheduler/gscheduler.o 00:02:53.111 CC module/accel/dsa/accel_dsa.o 00:02:53.111 CC module/keyring/linux/keyring.o 00:02:53.111 CC module/keyring/linux/keyring_rpc.o 00:02:53.111 CC module/accel/dsa/accel_dsa_rpc.o 00:02:53.111 CC module/keyring/file/keyring.o 00:02:53.111 CC module/keyring/file/keyring_rpc.o 00:02:53.111 CC module/blob/bdev/blob_bdev.o 00:02:53.111 CC module/accel/dpdk_cryptodev/accel_dpdk_cryptodev.o 00:02:53.111 CC module/sock/posix/posix.o 00:02:53.111 CC module/accel/dpdk_cryptodev/accel_dpdk_cryptodev_rpc.o 00:02:53.111 SO libspdk_env_dpdk_rpc.so.6.0 00:02:53.111 SYMLINK libspdk_env_dpdk_rpc.so 00:02:53.370 LIB 
libspdk_keyring_file.a 00:02:53.370 LIB libspdk_keyring_linux.a 00:02:53.370 LIB libspdk_scheduler_dpdk_governor.a 00:02:53.370 LIB libspdk_scheduler_gscheduler.a 00:02:53.370 LIB libspdk_accel_error.a 00:02:53.370 LIB libspdk_accel_ioat.a 00:02:53.370 SO libspdk_keyring_file.so.1.0 00:02:53.370 LIB libspdk_accel_iaa.a 00:02:53.370 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:53.370 SO libspdk_keyring_linux.so.1.0 00:02:53.370 SO libspdk_scheduler_gscheduler.so.4.0 00:02:53.370 SO libspdk_accel_error.so.2.0 00:02:53.370 SO libspdk_accel_ioat.so.6.0 00:02:53.370 SO libspdk_accel_iaa.so.3.0 00:02:53.370 SYMLINK libspdk_scheduler_gscheduler.so 00:02:53.370 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:53.370 SYMLINK libspdk_keyring_file.so 00:02:53.370 LIB libspdk_accel_dsa.a 00:02:53.370 SYMLINK libspdk_keyring_linux.so 00:02:53.370 LIB libspdk_blob_bdev.a 00:02:53.370 SYMLINK libspdk_accel_iaa.so 00:02:53.370 SYMLINK libspdk_accel_ioat.so 00:02:53.370 SYMLINK libspdk_accel_error.so 00:02:53.370 SO libspdk_accel_dsa.so.5.0 00:02:53.370 SO libspdk_blob_bdev.so.11.0 00:02:53.628 SYMLINK libspdk_accel_dsa.so 00:02:53.628 SYMLINK libspdk_blob_bdev.so 00:02:53.628 LIB libspdk_scheduler_dynamic.a 00:02:53.628 SO libspdk_scheduler_dynamic.so.4.0 00:02:53.628 SYMLINK libspdk_scheduler_dynamic.so 00:02:53.887 LIB libspdk_sock_posix.a 00:02:53.887 SO libspdk_sock_posix.so.6.0 00:02:53.887 SYMLINK libspdk_sock_posix.so 00:02:54.145 CC module/bdev/compress/vbdev_compress_rpc.o 00:02:54.145 CC module/bdev/raid/bdev_raid.o 00:02:54.145 CC module/bdev/raid/bdev_raid_rpc.o 00:02:54.145 CC module/bdev/compress/vbdev_compress.o 00:02:54.145 CC module/bdev/raid/bdev_raid_sb.o 00:02:54.145 CC module/bdev/raid/raid1.o 00:02:54.145 CC module/bdev/raid/raid0.o 00:02:54.145 CC module/bdev/raid/concat.o 00:02:54.145 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:54.145 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:54.145 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 
00:02:54.145 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:54.145 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:54.145 CC module/bdev/null/bdev_null.o 00:02:54.145 CC module/bdev/nvme/bdev_nvme.o 00:02:54.145 CC module/bdev/null/bdev_null_rpc.o 00:02:54.145 CC module/bdev/nvme/nvme_rpc.o 00:02:54.145 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:54.145 CC module/bdev/nvme/vbdev_opal.o 00:02:54.145 CC module/bdev/nvme/bdev_mdns_client.o 00:02:54.145 CC module/bdev/gpt/gpt.o 00:02:54.145 CC module/bdev/error/vbdev_error.o 00:02:54.145 CC module/bdev/iscsi/bdev_iscsi.o 00:02:54.145 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:54.145 CC module/bdev/error/vbdev_error_rpc.o 00:02:54.145 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:54.145 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:54.145 CC module/bdev/gpt/vbdev_gpt.o 00:02:54.145 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:54.145 CC module/bdev/malloc/bdev_malloc.o 00:02:54.145 CC module/bdev/aio/bdev_aio_rpc.o 00:02:54.145 CC module/bdev/aio/bdev_aio.o 00:02:54.145 CC module/blobfs/bdev/blobfs_bdev.o 00:02:54.145 CC module/bdev/crypto/vbdev_crypto.o 00:02:54.145 CC module/bdev/passthru/vbdev_passthru.o 00:02:54.145 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:54.145 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:54.145 CC module/bdev/crypto/vbdev_crypto_rpc.o 00:02:54.145 CC module/bdev/split/vbdev_split.o 00:02:54.145 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:54.145 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:54.145 CC module/bdev/lvol/vbdev_lvol.o 00:02:54.145 CC module/bdev/ftl/bdev_ftl.o 00:02:54.145 CC module/bdev/delay/vbdev_delay.o 00:02:54.145 CC module/bdev/split/vbdev_split_rpc.o 00:02:54.145 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:54.403 LIB libspdk_accel_dpdk_compressdev.a 00:02:54.403 SO libspdk_accel_dpdk_compressdev.so.3.0 00:02:54.403 LIB libspdk_blobfs_bdev.a 00:02:54.403 SO libspdk_blobfs_bdev.so.6.0 00:02:54.403 SYMLINK libspdk_accel_dpdk_compressdev.so 00:02:54.403 LIB libspdk_bdev_split.a 
00:02:54.404 LIB libspdk_bdev_ftl.a 00:02:54.404 SO libspdk_bdev_split.so.6.0 00:02:54.404 LIB libspdk_bdev_null.a 00:02:54.404 SYMLINK libspdk_blobfs_bdev.so 00:02:54.404 LIB libspdk_bdev_gpt.a 00:02:54.404 LIB libspdk_bdev_error.a 00:02:54.404 SO libspdk_bdev_ftl.so.6.0 00:02:54.404 SO libspdk_bdev_null.so.6.0 00:02:54.404 SO libspdk_bdev_gpt.so.6.0 00:02:54.404 LIB libspdk_bdev_passthru.a 00:02:54.404 SO libspdk_bdev_error.so.6.0 00:02:54.673 SYMLINK libspdk_bdev_split.so 00:02:54.673 LIB libspdk_bdev_aio.a 00:02:54.673 LIB libspdk_bdev_zone_block.a 00:02:54.673 LIB libspdk_bdev_crypto.a 00:02:54.673 SO libspdk_bdev_passthru.so.6.0 00:02:54.673 SYMLINK libspdk_bdev_null.so 00:02:54.673 SO libspdk_bdev_aio.so.6.0 00:02:54.673 SYMLINK libspdk_bdev_gpt.so 00:02:54.673 SYMLINK libspdk_bdev_ftl.so 00:02:54.673 LIB libspdk_bdev_compress.a 00:02:54.673 LIB libspdk_bdev_malloc.a 00:02:54.673 LIB libspdk_bdev_iscsi.a 00:02:54.673 LIB libspdk_bdev_delay.a 00:02:54.673 SO libspdk_bdev_zone_block.so.6.0 00:02:54.673 SO libspdk_bdev_crypto.so.6.0 00:02:54.673 SYMLINK libspdk_bdev_error.so 00:02:54.673 SO libspdk_bdev_compress.so.6.0 00:02:54.673 SO libspdk_bdev_iscsi.so.6.0 00:02:54.673 SO libspdk_bdev_delay.so.6.0 00:02:54.673 SO libspdk_bdev_malloc.so.6.0 00:02:54.673 SYMLINK libspdk_bdev_passthru.so 00:02:54.673 SYMLINK libspdk_bdev_aio.so 00:02:54.673 SYMLINK libspdk_bdev_zone_block.so 00:02:54.673 LIB libspdk_accel_dpdk_cryptodev.a 00:02:54.673 SYMLINK libspdk_bdev_compress.so 00:02:54.673 SYMLINK libspdk_bdev_delay.so 00:02:54.673 SYMLINK libspdk_bdev_iscsi.so 00:02:54.673 SYMLINK libspdk_bdev_malloc.so 00:02:54.673 LIB libspdk_bdev_virtio.a 00:02:54.673 SO libspdk_accel_dpdk_cryptodev.so.3.0 00:02:54.673 LIB libspdk_bdev_lvol.a 00:02:54.673 SYMLINK libspdk_bdev_crypto.so 00:02:54.673 SO libspdk_bdev_virtio.so.6.0 00:02:54.935 SO libspdk_bdev_lvol.so.6.0 00:02:54.935 SYMLINK libspdk_bdev_virtio.so 00:02:54.935 SYMLINK libspdk_bdev_lvol.so 00:02:54.935 SYMLINK 
libspdk_accel_dpdk_cryptodev.so 00:02:55.502 LIB libspdk_bdev_raid.a 00:02:55.502 SO libspdk_bdev_raid.so.6.0 00:02:55.502 SYMLINK libspdk_bdev_raid.so 00:02:56.880 LIB libspdk_bdev_nvme.a 00:02:56.880 SO libspdk_bdev_nvme.so.7.0 00:02:57.141 SYMLINK libspdk_bdev_nvme.so 00:02:57.710 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:57.710 CC module/event/subsystems/iobuf/iobuf.o 00:02:57.710 CC module/event/subsystems/vmd/vmd.o 00:02:57.710 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:57.710 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:57.710 CC module/event/subsystems/scheduler/scheduler.o 00:02:57.710 CC module/event/subsystems/keyring/keyring.o 00:02:57.710 CC module/event/subsystems/sock/sock.o 00:02:57.969 LIB libspdk_event_iobuf.a 00:02:57.969 LIB libspdk_event_vmd.a 00:02:57.969 LIB libspdk_event_keyring.a 00:02:57.969 LIB libspdk_event_vhost_blk.a 00:02:57.969 LIB libspdk_event_scheduler.a 00:02:57.969 LIB libspdk_event_sock.a 00:02:57.969 SO libspdk_event_keyring.so.1.0 00:02:57.969 SO libspdk_event_vmd.so.6.0 00:02:57.969 SO libspdk_event_iobuf.so.3.0 00:02:57.969 SO libspdk_event_vhost_blk.so.3.0 00:02:57.969 SO libspdk_event_scheduler.so.4.0 00:02:57.969 SO libspdk_event_sock.so.5.0 00:02:57.969 SYMLINK libspdk_event_keyring.so 00:02:57.969 SYMLINK libspdk_event_vmd.so 00:02:57.969 SYMLINK libspdk_event_iobuf.so 00:02:57.969 SYMLINK libspdk_event_vhost_blk.so 00:02:58.228 SYMLINK libspdk_event_scheduler.so 00:02:58.228 SYMLINK libspdk_event_sock.so 00:02:58.488 CC module/event/subsystems/accel/accel.o 00:02:58.488 LIB libspdk_event_accel.a 00:02:58.747 SO libspdk_event_accel.so.6.0 00:02:58.747 SYMLINK libspdk_event_accel.so 00:02:59.007 CC module/event/subsystems/bdev/bdev.o 00:02:59.266 LIB libspdk_event_bdev.a 00:02:59.266 SO libspdk_event_bdev.so.6.0 00:02:59.526 SYMLINK libspdk_event_bdev.so 00:02:59.785 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:59.785 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:59.785 CC 
module/event/subsystems/nbd/nbd.o 00:02:59.785 CC module/event/subsystems/scsi/scsi.o 00:02:59.785 CC module/event/subsystems/ublk/ublk.o 00:03:00.044 LIB libspdk_event_nbd.a 00:03:00.044 LIB libspdk_event_ublk.a 00:03:00.044 LIB libspdk_event_scsi.a 00:03:00.044 SO libspdk_event_nbd.so.6.0 00:03:00.044 SO libspdk_event_ublk.so.3.0 00:03:00.044 SO libspdk_event_scsi.so.6.0 00:03:00.044 LIB libspdk_event_nvmf.a 00:03:00.044 SYMLINK libspdk_event_nbd.so 00:03:00.044 SYMLINK libspdk_event_ublk.so 00:03:00.044 SO libspdk_event_nvmf.so.6.0 00:03:00.044 SYMLINK libspdk_event_scsi.so 00:03:00.044 SYMLINK libspdk_event_nvmf.so 00:03:00.303 CC module/event/subsystems/iscsi/iscsi.o 00:03:00.563 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:00.563 LIB libspdk_event_vhost_scsi.a 00:03:00.563 LIB libspdk_event_iscsi.a 00:03:00.563 SO libspdk_event_vhost_scsi.so.3.0 00:03:00.563 SO libspdk_event_iscsi.so.6.0 00:03:00.823 SYMLINK libspdk_event_vhost_scsi.so 00:03:00.823 SYMLINK libspdk_event_iscsi.so 00:03:00.823 SO libspdk.so.6.0 00:03:00.823 SYMLINK libspdk.so 00:03:01.403 CC app/trace_record/trace_record.o 00:03:01.403 TEST_HEADER include/spdk/accel.h 00:03:01.403 TEST_HEADER include/spdk/assert.h 00:03:01.403 TEST_HEADER include/spdk/accel_module.h 00:03:01.403 TEST_HEADER include/spdk/barrier.h 00:03:01.403 CC app/spdk_nvme_perf/perf.o 00:03:01.403 TEST_HEADER include/spdk/base64.h 00:03:01.403 TEST_HEADER include/spdk/bdev_module.h 00:03:01.403 CC app/spdk_nvme_identify/identify.o 00:03:01.403 CC test/rpc_client/rpc_client_test.o 00:03:01.403 TEST_HEADER include/spdk/bdev.h 00:03:01.403 CXX app/trace/trace.o 00:03:01.403 TEST_HEADER include/spdk/bdev_zone.h 00:03:01.403 TEST_HEADER include/spdk/bit_array.h 00:03:01.403 TEST_HEADER include/spdk/bit_pool.h 00:03:01.403 CC app/spdk_top/spdk_top.o 00:03:01.403 TEST_HEADER include/spdk/blob_bdev.h 00:03:01.403 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:01.403 TEST_HEADER include/spdk/blobfs.h 00:03:01.403 
TEST_HEADER include/spdk/conf.h 00:03:01.403 CC app/spdk_nvme_discover/discovery_aer.o 00:03:01.403 TEST_HEADER include/spdk/blob.h 00:03:01.403 CC app/spdk_lspci/spdk_lspci.o 00:03:01.403 TEST_HEADER include/spdk/config.h 00:03:01.403 TEST_HEADER include/spdk/cpuset.h 00:03:01.403 TEST_HEADER include/spdk/crc16.h 00:03:01.403 TEST_HEADER include/spdk/crc32.h 00:03:01.403 TEST_HEADER include/spdk/crc64.h 00:03:01.403 TEST_HEADER include/spdk/dif.h 00:03:01.403 TEST_HEADER include/spdk/endian.h 00:03:01.403 TEST_HEADER include/spdk/dma.h 00:03:01.403 TEST_HEADER include/spdk/env_dpdk.h 00:03:01.403 TEST_HEADER include/spdk/env.h 00:03:01.403 TEST_HEADER include/spdk/fd_group.h 00:03:01.403 TEST_HEADER include/spdk/event.h 00:03:01.403 TEST_HEADER include/spdk/fd.h 00:03:01.403 TEST_HEADER include/spdk/file.h 00:03:01.403 TEST_HEADER include/spdk/ftl.h 00:03:01.403 TEST_HEADER include/spdk/gpt_spec.h 00:03:01.403 TEST_HEADER include/spdk/histogram_data.h 00:03:01.403 TEST_HEADER include/spdk/hexlify.h 00:03:01.403 TEST_HEADER include/spdk/idxd.h 00:03:01.403 TEST_HEADER include/spdk/idxd_spec.h 00:03:01.403 TEST_HEADER include/spdk/ioat.h 00:03:01.403 TEST_HEADER include/spdk/iscsi_spec.h 00:03:01.403 TEST_HEADER include/spdk/ioat_spec.h 00:03:01.403 TEST_HEADER include/spdk/init.h 00:03:01.403 TEST_HEADER include/spdk/jsonrpc.h 00:03:01.403 TEST_HEADER include/spdk/json.h 00:03:01.403 CC app/nvmf_tgt/nvmf_main.o 00:03:01.403 TEST_HEADER include/spdk/keyring.h 00:03:01.403 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:01.403 TEST_HEADER include/spdk/keyring_module.h 00:03:01.403 TEST_HEADER include/spdk/likely.h 00:03:01.403 TEST_HEADER include/spdk/log.h 00:03:01.403 TEST_HEADER include/spdk/lvol.h 00:03:01.403 TEST_HEADER include/spdk/memory.h 00:03:01.403 TEST_HEADER include/spdk/mmio.h 00:03:01.403 CC app/spdk_dd/spdk_dd.o 00:03:01.403 TEST_HEADER include/spdk/net.h 00:03:01.403 TEST_HEADER include/spdk/nbd.h 00:03:01.403 TEST_HEADER include/spdk/notify.h 
00:03:01.403 TEST_HEADER include/spdk/nvme.h 00:03:01.403 TEST_HEADER include/spdk/nvme_intel.h 00:03:01.403 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:01.403 TEST_HEADER include/spdk/nvme_spec.h 00:03:01.403 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:01.403 TEST_HEADER include/spdk/nvme_zns.h 00:03:01.403 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:01.403 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:01.403 TEST_HEADER include/spdk/nvmf_spec.h 00:03:01.403 TEST_HEADER include/spdk/nvmf.h 00:03:01.403 TEST_HEADER include/spdk/nvmf_transport.h 00:03:01.403 TEST_HEADER include/spdk/opal_spec.h 00:03:01.403 TEST_HEADER include/spdk/opal.h 00:03:01.403 TEST_HEADER include/spdk/pci_ids.h 00:03:01.403 TEST_HEADER include/spdk/pipe.h 00:03:01.403 TEST_HEADER include/spdk/queue.h 00:03:01.403 TEST_HEADER include/spdk/reduce.h 00:03:01.403 CC app/iscsi_tgt/iscsi_tgt.o 00:03:01.403 TEST_HEADER include/spdk/scheduler.h 00:03:01.403 TEST_HEADER include/spdk/rpc.h 00:03:01.403 TEST_HEADER include/spdk/scsi.h 00:03:01.403 TEST_HEADER include/spdk/sock.h 00:03:01.403 TEST_HEADER include/spdk/scsi_spec.h 00:03:01.403 TEST_HEADER include/spdk/thread.h 00:03:01.403 TEST_HEADER include/spdk/string.h 00:03:01.403 TEST_HEADER include/spdk/stdinc.h 00:03:01.403 TEST_HEADER include/spdk/trace_parser.h 00:03:01.403 TEST_HEADER include/spdk/trace.h 00:03:01.403 TEST_HEADER include/spdk/tree.h 00:03:01.403 TEST_HEADER include/spdk/ublk.h 00:03:01.403 TEST_HEADER include/spdk/util.h 00:03:01.403 TEST_HEADER include/spdk/uuid.h 00:03:01.403 TEST_HEADER include/spdk/version.h 00:03:01.403 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:01.403 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:01.403 TEST_HEADER include/spdk/vhost.h 00:03:01.403 TEST_HEADER include/spdk/vmd.h 00:03:01.403 TEST_HEADER include/spdk/zipf.h 00:03:01.403 CXX test/cpp_headers/accel_module.o 00:03:01.403 CXX test/cpp_headers/accel.o 00:03:01.403 TEST_HEADER include/spdk/xor.h 00:03:01.403 CXX 
test/cpp_headers/base64.o 00:03:01.403 CXX test/cpp_headers/barrier.o 00:03:01.403 CXX test/cpp_headers/assert.o 00:03:01.403 CXX test/cpp_headers/bdev.o 00:03:01.403 CXX test/cpp_headers/bdev_zone.o 00:03:01.403 CXX test/cpp_headers/bdev_module.o 00:03:01.403 CXX test/cpp_headers/bit_pool.o 00:03:01.403 CXX test/cpp_headers/bit_array.o 00:03:01.403 CXX test/cpp_headers/blob_bdev.o 00:03:01.403 CXX test/cpp_headers/blobfs_bdev.o 00:03:01.403 CC app/spdk_tgt/spdk_tgt.o 00:03:01.403 CXX test/cpp_headers/conf.o 00:03:01.403 CXX test/cpp_headers/blobfs.o 00:03:01.403 CXX test/cpp_headers/blob.o 00:03:01.403 CXX test/cpp_headers/config.o 00:03:01.403 CXX test/cpp_headers/cpuset.o 00:03:01.403 CXX test/cpp_headers/crc16.o 00:03:01.403 CXX test/cpp_headers/crc32.o 00:03:01.403 CXX test/cpp_headers/crc64.o 00:03:01.403 CXX test/cpp_headers/dma.o 00:03:01.403 CXX test/cpp_headers/dif.o 00:03:01.403 CXX test/cpp_headers/env_dpdk.o 00:03:01.403 CXX test/cpp_headers/endian.o 00:03:01.403 CXX test/cpp_headers/env.o 00:03:01.403 CXX test/cpp_headers/event.o 00:03:01.403 CXX test/cpp_headers/fd_group.o 00:03:01.403 CXX test/cpp_headers/fd.o 00:03:01.403 CXX test/cpp_headers/file.o 00:03:01.403 CXX test/cpp_headers/ftl.o 00:03:01.403 CXX test/cpp_headers/gpt_spec.o 00:03:01.403 CXX test/cpp_headers/hexlify.o 00:03:01.403 CXX test/cpp_headers/histogram_data.o 00:03:01.403 CXX test/cpp_headers/idxd.o 00:03:01.403 CXX test/cpp_headers/init.o 00:03:01.403 CXX test/cpp_headers/idxd_spec.o 00:03:01.403 CXX test/cpp_headers/ioat.o 00:03:01.403 CXX test/cpp_headers/ioat_spec.o 00:03:01.403 CXX test/cpp_headers/json.o 00:03:01.403 CXX test/cpp_headers/iscsi_spec.o 00:03:01.403 CXX test/cpp_headers/jsonrpc.o 00:03:01.403 CXX test/cpp_headers/keyring.o 00:03:01.403 CXX test/cpp_headers/likely.o 00:03:01.403 CXX test/cpp_headers/keyring_module.o 00:03:01.403 CXX test/cpp_headers/log.o 00:03:01.403 CXX test/cpp_headers/lvol.o 00:03:01.403 CXX test/cpp_headers/memory.o 00:03:01.403 CXX 
test/cpp_headers/nbd.o 00:03:01.403 CXX test/cpp_headers/mmio.o 00:03:01.403 CXX test/cpp_headers/net.o 00:03:01.403 CXX test/cpp_headers/nvme.o 00:03:01.403 CXX test/cpp_headers/notify.o 00:03:01.404 CXX test/cpp_headers/nvme_intel.o 00:03:01.404 CXX test/cpp_headers/nvme_ocssd.o 00:03:01.404 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:01.404 CXX test/cpp_headers/nvme_spec.o 00:03:01.404 CXX test/cpp_headers/nvmf_cmd.o 00:03:01.404 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:01.404 CXX test/cpp_headers/nvme_zns.o 00:03:01.404 CXX test/cpp_headers/nvmf.o 00:03:01.404 CXX test/cpp_headers/nvmf_spec.o 00:03:01.404 CXX test/cpp_headers/nvmf_transport.o 00:03:01.404 CXX test/cpp_headers/opal.o 00:03:01.404 CXX test/cpp_headers/opal_spec.o 00:03:01.404 CC test/app/jsoncat/jsoncat.o 00:03:01.404 CXX test/cpp_headers/pci_ids.o 00:03:01.404 CXX test/cpp_headers/pipe.o 00:03:01.404 CXX test/cpp_headers/queue.o 00:03:01.404 CXX test/cpp_headers/reduce.o 00:03:01.404 CXX test/cpp_headers/rpc.o 00:03:01.404 CXX test/cpp_headers/scheduler.o 00:03:01.404 CXX test/cpp_headers/scsi.o 00:03:01.404 CC test/app/histogram_perf/histogram_perf.o 00:03:01.404 CXX test/cpp_headers/scsi_spec.o 00:03:01.404 CC test/app/stub/stub.o 00:03:01.404 CXX test/cpp_headers/stdinc.o 00:03:01.404 CXX test/cpp_headers/sock.o 00:03:01.683 CXX test/cpp_headers/string.o 00:03:01.683 CXX test/cpp_headers/thread.o 00:03:01.683 CXX test/cpp_headers/trace_parser.o 00:03:01.683 CXX test/cpp_headers/trace.o 00:03:01.683 CXX test/cpp_headers/tree.o 00:03:01.683 CC test/env/vtophys/vtophys.o 00:03:01.683 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:01.683 CXX test/cpp_headers/ublk.o 00:03:01.683 CC test/env/memory/memory_ut.o 00:03:01.683 CXX test/cpp_headers/util.o 00:03:01.683 CC test/thread/poller_perf/poller_perf.o 00:03:01.683 CC test/env/pci/pci_ut.o 00:03:01.683 CC examples/util/zipf/zipf.o 00:03:01.683 CC examples/ioat/perf/perf.o 00:03:01.683 CC examples/ioat/verify/verify.o 00:03:01.683 CXX 
test/cpp_headers/uuid.o 00:03:01.683 CC app/fio/nvme/fio_plugin.o 00:03:01.683 CC test/dma/test_dma/test_dma.o 00:03:01.683 CC test/app/bdev_svc/bdev_svc.o 00:03:01.683 CC app/fio/bdev/fio_plugin.o 00:03:01.683 CXX test/cpp_headers/version.o 00:03:01.683 CXX test/cpp_headers/vfio_user_pci.o 00:03:01.963 LINK spdk_lspci 00:03:01.964 CXX test/cpp_headers/vfio_user_spec.o 00:03:01.964 LINK rpc_client_test 00:03:02.235 LINK interrupt_tgt 00:03:02.235 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:02.235 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:02.235 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:02.235 CC test/env/mem_callbacks/mem_callbacks.o 00:03:02.235 LINK spdk_nvme_discover 00:03:02.235 LINK jsoncat 00:03:02.235 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:02.235 LINK vtophys 00:03:02.235 LINK histogram_perf 00:03:02.235 LINK poller_perf 00:03:02.235 CXX test/cpp_headers/vhost.o 00:03:02.235 LINK iscsi_tgt 00:03:02.235 LINK nvmf_tgt 00:03:02.235 CXX test/cpp_headers/vmd.o 00:03:02.235 CXX test/cpp_headers/xor.o 00:03:02.235 CXX test/cpp_headers/zipf.o 00:03:02.235 LINK env_dpdk_post_init 00:03:02.494 LINK stub 00:03:02.494 LINK zipf 00:03:02.494 LINK spdk_trace_record 00:03:02.494 LINK bdev_svc 00:03:02.495 LINK ioat_perf 00:03:02.495 LINK verify 00:03:02.495 LINK spdk_dd 00:03:02.768 LINK spdk_trace 00:03:02.768 LINK spdk_tgt 00:03:02.768 LINK test_dma 00:03:02.768 LINK pci_ut 00:03:02.768 LINK spdk_bdev 00:03:02.768 LINK nvme_fuzz 00:03:03.062 LINK vhost_fuzz 00:03:03.062 CC test/event/event_perf/event_perf.o 00:03:03.062 CC test/event/reactor/reactor.o 00:03:03.062 CC test/event/reactor_perf/reactor_perf.o 00:03:03.062 CC examples/vmd/led/led.o 00:03:03.062 CC examples/vmd/lsvmd/lsvmd.o 00:03:03.062 LINK spdk_nvme 00:03:03.062 CC test/event/app_repeat/app_repeat.o 00:03:03.062 CC examples/idxd/perf/perf.o 00:03:03.062 CC examples/sock/hello_world/hello_sock.o 00:03:03.062 CC test/event/scheduler/scheduler.o 00:03:03.062 CC 
examples/thread/thread/thread_ex.o 00:03:03.062 LINK spdk_nvme_identify 00:03:03.062 LINK mem_callbacks 00:03:03.062 CC app/vhost/vhost.o 00:03:03.062 LINK event_perf 00:03:03.062 LINK reactor_perf 00:03:03.062 LINK spdk_nvme_perf 00:03:03.062 LINK lsvmd 00:03:03.062 LINK reactor 00:03:03.062 LINK led 00:03:03.062 LINK app_repeat 00:03:03.335 LINK spdk_top 00:03:03.335 LINK scheduler 00:03:03.335 LINK thread 00:03:03.335 CC test/nvme/aer/aer.o 00:03:03.335 LINK vhost 00:03:03.335 LINK hello_sock 00:03:03.335 CC test/nvme/reset/reset.o 00:03:03.335 CC test/nvme/sgl/sgl.o 00:03:03.335 CC test/nvme/reserve/reserve.o 00:03:03.335 CC test/nvme/compliance/nvme_compliance.o 00:03:03.335 CC test/nvme/simple_copy/simple_copy.o 00:03:03.335 CC test/nvme/fdp/fdp.o 00:03:03.335 CC test/nvme/err_injection/err_injection.o 00:03:03.335 CC test/nvme/cuse/cuse.o 00:03:03.335 CC test/nvme/overhead/overhead.o 00:03:03.335 CC test/nvme/startup/startup.o 00:03:03.335 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:03.335 CC test/nvme/e2edp/nvme_dp.o 00:03:03.335 CC test/nvme/fused_ordering/fused_ordering.o 00:03:03.335 CC test/nvme/connect_stress/connect_stress.o 00:03:03.335 CC test/nvme/boot_partition/boot_partition.o 00:03:03.335 CC test/blobfs/mkfs/mkfs.o 00:03:03.335 CC test/accel/dif/dif.o 00:03:03.335 LINK idxd_perf 00:03:03.595 CC test/lvol/esnap/esnap.o 00:03:03.595 LINK memory_ut 00:03:03.595 LINK boot_partition 00:03:03.595 LINK err_injection 00:03:03.595 LINK startup 00:03:03.595 LINK doorbell_aers 00:03:03.595 LINK reserve 00:03:03.595 LINK mkfs 00:03:03.595 LINK connect_stress 00:03:03.595 LINK fused_ordering 00:03:03.595 LINK simple_copy 00:03:03.595 LINK fdp 00:03:03.595 LINK sgl 00:03:03.595 LINK reset 00:03:03.595 LINK aer 00:03:03.595 LINK nvme_dp 00:03:03.595 LINK overhead 00:03:03.854 LINK nvme_compliance 00:03:03.854 CC examples/nvme/arbitration/arbitration.o 00:03:03.854 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:03.854 CC examples/nvme/hello_world/hello_world.o 
00:03:03.854 CC examples/nvme/abort/abort.o 00:03:03.854 CC examples/nvme/reconnect/reconnect.o 00:03:03.854 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:03.854 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:03.854 CC examples/nvme/hotplug/hotplug.o 00:03:03.854 CC examples/accel/perf/accel_perf.o 00:03:03.854 CC examples/blob/hello_world/hello_blob.o 00:03:03.854 CC examples/blob/cli/blobcli.o 00:03:03.854 LINK dif 00:03:04.113 LINK pmr_persistence 00:03:04.113 LINK cmb_copy 00:03:04.113 LINK hotplug 00:03:04.113 LINK hello_world 00:03:04.113 LINK hello_blob 00:03:04.113 LINK arbitration 00:03:04.372 LINK reconnect 00:03:04.372 LINK abort 00:03:04.372 LINK blobcli 00:03:04.372 LINK iscsi_fuzz 00:03:04.372 LINK nvme_manage 00:03:04.372 LINK accel_perf 00:03:04.643 CC test/bdev/bdevio/bdevio.o 00:03:04.907 LINK bdevio 00:03:05.165 CC examples/bdev/hello_world/hello_bdev.o 00:03:05.165 CC examples/bdev/bdevperf/bdevperf.o 00:03:05.424 LINK hello_bdev 00:03:05.993 LINK bdevperf 00:03:06.561 LINK cuse 00:03:06.561 CC examples/nvmf/nvmf/nvmf.o 00:03:07.128 LINK nvmf 00:03:09.663 LINK esnap 00:03:10.232 00:03:10.232 real 1m30.572s 00:03:10.232 user 16m28.983s 00:03:10.232 sys 5m31.578s 00:03:10.232 03:58:18 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:10.232 03:58:18 make -- common/autotest_common.sh@10 -- $ set +x 00:03:10.232 ************************************ 00:03:10.232 END TEST make 00:03:10.232 ************************************ 00:03:10.232 03:58:18 -- common/autotest_common.sh@1142 -- $ return 0 00:03:10.232 03:58:18 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:10.232 03:58:18 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:10.232 03:58:18 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:10.232 03:58:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.232 03:58:18 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:10.232 03:58:18 -- pm/common@44 -- $ pid=2396182 00:03:10.232 03:58:18 -- pm/common@50 -- $ kill -TERM 2396182 00:03:10.232 03:58:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.232 03:58:18 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:10.232 03:58:18 -- pm/common@44 -- $ pid=2396183 00:03:10.232 03:58:18 -- pm/common@50 -- $ kill -TERM 2396183 00:03:10.232 03:58:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.232 03:58:18 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:10.232 03:58:18 -- pm/common@44 -- $ pid=2396185 00:03:10.232 03:58:18 -- pm/common@50 -- $ kill -TERM 2396185 00:03:10.232 03:58:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.232 03:58:18 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:10.232 03:58:18 -- pm/common@44 -- $ pid=2396209 00:03:10.232 03:58:18 -- pm/common@50 -- $ sudo -E kill -TERM 2396209 00:03:10.232 03:58:18 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:03:10.232 03:58:18 -- nvmf/common.sh@7 -- # uname -s 00:03:10.232 03:58:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:10.232 03:58:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:10.232 03:58:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:10.232 03:58:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:10.232 03:58:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:10.232 03:58:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:10.232 03:58:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:10.232 03:58:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:10.232 03:58:18 -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:10.232 03:58:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:10.232 03:58:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:03:10.232 03:58:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:03:10.232 03:58:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:10.232 03:58:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:10.232 03:58:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:10.232 03:58:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:10.232 03:58:18 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:03:10.232 03:58:18 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:10.232 03:58:18 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:10.232 03:58:18 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:10.232 03:58:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:10.232 03:58:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:10.232 03:58:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:10.232 03:58:18 -- paths/export.sh@5 -- # export PATH 00:03:10.232 03:58:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:10.232 03:58:18 -- nvmf/common.sh@47 -- # : 0 00:03:10.232 03:58:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:10.232 03:58:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:10.232 03:58:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:10.232 03:58:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:10.232 03:58:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:10.232 03:58:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:10.232 03:58:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:10.232 03:58:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:10.232 03:58:18 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:10.232 03:58:18 -- spdk/autotest.sh@32 -- # uname -s 00:03:10.232 03:58:18 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:10.232 03:58:18 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:10.232 03:58:18 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/coredumps 00:03:10.232 03:58:18 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:10.232 03:58:18 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/coredumps 
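The entries above show autotest saving the kernel's default `core_pattern` (the systemd-coredump pipe) and pointing it at `core-collector.sh` so crashes during the run land in the `coredumps` output directory. A minimal sketch of that save/override/restore idiom follows; note that `PATTERN_FILE` here is a temp-file stand-in for `/proc/sys/kernel/core_pattern` (writing the real file needs root), and the collector path is illustrative, not the exact script from the log.

```shell
#!/usr/bin/env sh
# Stand-in for /proc/sys/kernel/core_pattern so the sketch can run unprivileged.
PATTERN_FILE=$(mktemp)
echo '|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' > "$PATTERN_FILE"

# Save the current pattern so it can be restored after the test run.
old_core_pattern=$(cat "$PATTERN_FILE")

# The leading '|' tells the kernel to pipe each core dump into the script;
# %P = crashing PID, %s = signal number, %t = time of dump (as in the log).
echo '|/path/to/core-collector.sh %P %s %t' > "$PATTERN_FILE"

# Restore the original pattern when the run is finished.
echo "$old_core_pattern" > "$PATTERN_FILE"
```

On a real system the same three steps would target `/proc/sys/kernel/core_pattern` directly, which is why autotest captures `old_core_pattern` before overwriting it.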
00:03:10.232 03:58:18 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:10.232 03:58:18 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:10.232 03:58:18 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:10.232 03:58:18 -- spdk/autotest.sh@48 -- # udevadm_pid=2467866 00:03:10.232 03:58:18 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:10.232 03:58:18 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:10.232 03:58:18 -- pm/common@17 -- # local monitor 00:03:10.232 03:58:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.232 03:58:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.232 03:58:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.232 03:58:18 -- pm/common@21 -- # date +%s 00:03:10.232 03:58:18 -- pm/common@21 -- # date +%s 00:03:10.232 03:58:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.232 03:58:18 -- pm/common@25 -- # sleep 1 00:03:10.232 03:58:18 -- pm/common@21 -- # date +%s 00:03:10.232 03:58:18 -- pm/common@21 -- # date +%s 00:03:10.232 03:58:18 -- pm/common@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721699898 00:03:10.232 03:58:18 -- pm/common@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721699898 00:03:10.232 03:58:18 -- pm/common@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721699898 00:03:10.232 03:58:18 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p 
monitor.autotest.sh.1721699898 00:03:10.232 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721699898_collect-vmstat.pm.log 00:03:10.232 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721699898_collect-cpu-load.pm.log 00:03:10.492 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721699898_collect-cpu-temp.pm.log 00:03:10.492 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721699898_collect-bmc-pm.bmc.pm.log 00:03:11.440 03:58:19 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:11.440 03:58:19 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:11.440 03:58:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:11.440 03:58:19 -- common/autotest_common.sh@10 -- # set +x 00:03:11.440 03:58:19 -- spdk/autotest.sh@59 -- # create_test_list 00:03:11.440 03:58:19 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:11.440 03:58:19 -- common/autotest_common.sh@10 -- # set +x 00:03:11.440 03:58:20 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/autotest.sh 00:03:11.440 03:58:20 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk 00:03:11.440 03:58:20 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:03:11.440 03:58:20 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:03:11.440 03:58:20 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/crypto-phy-autotest/spdk 00:03:11.440 03:58:20 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:11.440 03:58:20 -- common/autotest_common.sh@1455 -- # uname 00:03:11.440 03:58:20 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:11.440 03:58:20 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 
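The `trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT` line above is how autotest guarantees its cleanup runs even if the job is interrupted. A simplified sketch of that trap idiom, assuming a hypothetical `autotest_cleanup` that just records it was invoked (the `|| :` keeps a failing cleanup from aborting the trap body):

```shell
#!/usr/bin/env sh
# Marker file that only exists if cleanup actually ran.
CLEANUP_MARKER=$(mktemp -u)

# Hypothetical stand-in for the real autotest_cleanup.
autotest_cleanup() {
    touch "$CLEANUP_MARKER"
}

run_with_cleanup() (
    # Subshell body: the EXIT trap fires when the subshell returns,
    # whether it finishes normally or is killed by SIGINT/SIGTERM.
    trap 'autotest_cleanup || :' EXIT
    echo "doing work"
)

run_with_cleanup
```

Registering the trap before any real work starts is the key ordering: the monitors and scratch state created later are then covered from the first moment they exist.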
00:03:11.440 03:58:20 -- common/autotest_common.sh@1475 -- # uname 00:03:11.440 03:58:20 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:11.440 03:58:20 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:11.440 03:58:20 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:11.440 03:58:20 -- spdk/autotest.sh@72 -- # hash lcov 00:03:11.440 03:58:20 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:11.440 03:58:20 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:11.440 --rc lcov_branch_coverage=1 00:03:11.440 --rc lcov_function_coverage=1 00:03:11.440 --rc genhtml_branch_coverage=1 00:03:11.440 --rc genhtml_function_coverage=1 00:03:11.440 --rc genhtml_legend=1 00:03:11.440 --rc geninfo_all_blocks=1 00:03:11.440 ' 00:03:11.440 03:58:20 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:11.440 --rc lcov_branch_coverage=1 00:03:11.440 --rc lcov_function_coverage=1 00:03:11.440 --rc genhtml_branch_coverage=1 00:03:11.440 --rc genhtml_function_coverage=1 00:03:11.440 --rc genhtml_legend=1 00:03:11.440 --rc geninfo_all_blocks=1 00:03:11.440 ' 00:03:11.440 03:58:20 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:11.440 --rc lcov_branch_coverage=1 00:03:11.440 --rc lcov_function_coverage=1 00:03:11.440 --rc genhtml_branch_coverage=1 00:03:11.440 --rc genhtml_function_coverage=1 00:03:11.440 --rc genhtml_legend=1 00:03:11.440 --rc geninfo_all_blocks=1 00:03:11.440 --no-external' 00:03:11.440 03:58:20 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:11.440 --rc lcov_branch_coverage=1 00:03:11.440 --rc lcov_function_coverage=1 00:03:11.440 --rc genhtml_branch_coverage=1 00:03:11.440 --rc genhtml_function_coverage=1 00:03:11.440 --rc genhtml_legend=1 00:03:11.440 --rc geninfo_all_blocks=1 00:03:11.440 --no-external' 00:03:11.440 03:58:20 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 
--no-external -v 00:03:11.440 lcov: LCOV version 1.14 00:03:11.440 03:58:20 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/crypto-phy-autotest/spdk -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_base.info 00:03:23.647 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:23.647 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:23.647 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:23.647 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:23.647 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:23.647 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:23.647 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:23.647 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:23.647 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:23.647 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:23.647 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:23.647 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:23.647 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:23.647 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:23.647 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:23.647 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:23.647 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:23.647 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:23.647 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:23.647 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:23.647 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:23.647 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:23.647 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:23.647 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:23.647 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:23.647 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:23.647 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:23.647 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:23.647 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:23.647 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:23.647 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:23.647 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:23.647 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:23.647 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:23.647 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:23.647 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:23.647 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:23.647 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:23.647 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:23.647 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:23.647 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:23.647 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:23.647 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:23.647 geninfo: WARNING: GCOV did not produce any 
data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:23.647 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:23.647 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:23.647 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:23.647 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:23.647 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:23.647 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:23.647 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:23.647 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:23.647 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:23.647 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:23.647 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:23.647 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:23.647 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:23.647 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:23.647 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:23.647 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:23.647 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:23.647 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:23.647 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:23.647 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:23.647 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:23.647 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:23.647 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:23.647 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:23.647 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:23.647 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:23.647 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:23.647 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:23.647 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:23.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:23.648 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:23.648 geninfo: WARNING: GCOV did not produce 
any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:23.648 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:23.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:23.648 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:23.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:23.648 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:23.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:23.648 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:23.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:23.648 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:23.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:23.648 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:23.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:23.648 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:23.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:23.648 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:23.648 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:23.648 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:23.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:23.648 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:23.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:23.648 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:03:23.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/net.gcno 00:03:23.648 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:23.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:23.648 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:23.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:23.648 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:23.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:23.648 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:23.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:23.648 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:23.648 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:23.648 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:23.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:23.648 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:23.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:23.648 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:23.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:23.648 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:23.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:23.648 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:23.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:23.648 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:23.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:23.648 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:23.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:23.648 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:23.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:23.648 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:23.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:23.648 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:23.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:23.648 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:23.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:23.648 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:23.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:23.648 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:23.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:23.648 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:23.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:23.648 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:23.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi.gcno 
00:03:23.648 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:23.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:23.648 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:23.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:23.648 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:23.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:23.907 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:23.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:23.907 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:23.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:23.907 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:23.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:23.907 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:23.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:23.907 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:23.907 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:23.907 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:23.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:23.907 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:23.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:23.907 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:23.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:23.907 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:23.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:23.907 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:23.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:23.907 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:23.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:23.907 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:23.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:23.907 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:23.907 geninfo: WARNING: GCOV did 
not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:23.907 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:23.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:23.907 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:23.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:23.907 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:23.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:23.907 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:23.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:36.112 /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:36.112 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:44.234 03:58:53 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:44.234 03:58:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:44.234 03:58:53 -- common/autotest_common.sh@10 -- # set +x 00:03:44.234 03:58:53 -- spdk/autotest.sh@91 -- # rm -f 00:03:44.234 03:58:53 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:03:48.524 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:48.524 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:48.524 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:48.524 0000:00:04.4 (8086 2021): Already 
using the ioatdma driver 00:03:48.524 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:48.524 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:48.524 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:48.524 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:48.524 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:48.524 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:48.524 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:48.524 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:48.524 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:48.524 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:48.524 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:48.783 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:48.783 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:03:48.783 03:58:57 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:48.783 03:58:57 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:48.783 03:58:57 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:48.783 03:58:57 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:48.783 03:58:57 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:48.783 03:58:57 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:48.783 03:58:57 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:48.783 03:58:57 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:48.783 03:58:57 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:48.783 03:58:57 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:48.783 03:58:57 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:48.783 03:58:57 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:48.783 03:58:57 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:48.783 03:58:57 -- 
scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:48.783 03:58:57 -- scripts/common.sh@387 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:48.783 No valid GPT data, bailing 00:03:48.783 03:58:57 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:48.783 03:58:57 -- scripts/common.sh@391 -- # pt= 00:03:48.783 03:58:57 -- scripts/common.sh@392 -- # return 1 00:03:48.783 03:58:57 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:48.783 1+0 records in 00:03:48.783 1+0 records out 00:03:48.783 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00469282 s, 223 MB/s 00:03:48.783 03:58:57 -- spdk/autotest.sh@118 -- # sync 00:03:48.783 03:58:57 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:48.783 03:58:57 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:48.783 03:58:57 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:55.350 03:59:03 -- spdk/autotest.sh@124 -- # uname -s 00:03:55.350 03:59:03 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:55.350 03:59:04 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/test-setup.sh 00:03:55.350 03:59:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:55.350 03:59:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.350 03:59:04 -- common/autotest_common.sh@10 -- # set +x 00:03:55.350 ************************************ 00:03:55.350 START TEST setup.sh 00:03:55.350 ************************************ 00:03:55.350 03:59:04 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/test-setup.sh 00:03:55.350 * Looking for test storage... 
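The sequence above — `spdk-gpt.py` bailing with "No valid GPT data", `blkid -s PTTYPE` returning nothing, then `dd if=/dev/zero ... bs=1M count=1` — is the pre-test cleanup that zeroes the first MiB of a device carrying no recognizable partition table. A hedged sketch of just that decision, demonstrated on a scratch file rather than a real `/dev/nvme*` node (wiping a disk is destructive), with the helper name `wipe_if_unpartitioned` being illustrative, not from the original scripts:

```shell
#!/usr/bin/env bash
# If blkid reports no partition-table type for the device, zero the first
# MiB so stale metadata cannot leak into later tests. Mirrors the
# blkid/dd step from the trace; simplified, no spdk-gpt.py probe.
wipe_if_unpartitioned() {
  local dev=$1 pt
  pt=$(blkid -s PTTYPE -o value "$dev" 2>/dev/null) || pt=
  if [[ -z $pt ]]; then
    dd if=/dev/zero of="$dev" bs=1M count=1 conv=notrunc status=none
  fi
}

img=$(mktemp)
printf 'stale-metadata' > "$img"
truncate -s 2M "$img"
wipe_if_unpartitioned "$img"
head -c 4 "$img" | od -An -tx1   # first bytes are now zeroed
rm -f "$img"
```

Note `conv=notrunc` keeps the target's size intact; on a block device that is moot, but it makes the file-backed demo faithful.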
00:03:55.350 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:03:55.350 03:59:04 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:55.350 03:59:04 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:55.350 03:59:04 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/acl.sh 00:03:55.350 03:59:04 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:55.350 03:59:04 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.350 03:59:04 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:55.608 ************************************ 00:03:55.608 START TEST acl 00:03:55.608 ************************************ 00:03:55.608 03:59:04 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/acl.sh 00:03:55.608 * Looking for test storage... 00:03:55.608 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:03:55.608 03:59:04 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:55.608 03:59:04 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:55.608 03:59:04 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:55.608 03:59:04 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:55.608 03:59:04 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:55.608 03:59:04 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:55.608 03:59:04 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:55.608 03:59:04 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:55.608 03:59:04 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:55.608 03:59:04 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:55.608 03:59:04 setup.sh.acl -- 
setup/acl.sh@12 -- # declare -a devs 00:03:55.608 03:59:04 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:55.608 03:59:04 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:55.608 03:59:04 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:55.608 03:59:04 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:55.608 03:59:04 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:04:00.878 03:59:08 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:00.878 03:59:08 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:00.878 03:59:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:00.878 03:59:08 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:00.878 03:59:08 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.878 03:59:08 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status 00:04:04.167 Hugepages 00:04:04.167 node hugesize free / total 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:04.167 00:04:04.167 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:04.167 03:59:12 setup.sh.acl -- 
setup/acl.sh@19 -- # continue 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:04.167 
03:59:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@18 -- 
# read -r _ dev _ _ _ driver _ 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:04.167 03:59:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:04.168 03:59:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:04.427 03:59:13 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:04:04.427 03:59:13 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:04.427 03:59:13 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:04:04.427 03:59:13 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:04.427 03:59:13 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:04.427 03:59:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:04.427 03:59:13 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:04.427 03:59:13 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 
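The loop traced above (`read -r _ dev _ _ _ driver _` against `setup.sh status` output) skips hugepage rows and headers that don't look like a PCI BDF, ignores `ioatdma` entries, and collects each `nvme` BDF not listed in `PCI_BLOCKED` into `devs[]`/`drivers[]` — ending with `(( 1 > 0 ))` because exactly one NVMe controller, 0000:d8:00.0, survived. A compact sketch of that collection, fed from a trimmed stand-in for the real status text:

```shell
#!/usr/bin/env bash
# Sketch of acl.sh's collect_setup_devs loop: parse "<type> <bdf> ... <driver> ..."
# rows, keep only NVMe controllers not present in PCI_BLOCKED.
status='I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1'
PCI_BLOCKED=${PCI_BLOCKED:-}

devs=()
declare -A drivers=()
while read -r _ dev _ _ _ driver _; do
  [[ $dev == *:*:*.* ]] || continue        # not a BDF row (hugepages, headers)
  [[ $driver == nvme ]] || continue        # drop ioatdma and friends
  [[ $PCI_BLOCKED == *"$dev"* ]] && continue
  devs+=("$dev")
  drivers[$dev]=$driver
done <<< "$status"

printf 'collected: %s (%s)\n' "${devs[0]}" "${drivers[${devs[0]}]}"
```

Matching driver names with `[[ ioatdma == nvme ]]` per row, as the trace shows, is what makes the twelve I/OAT channels fall through to `continue`.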
00:04:04.427 03:59:13 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:04.427 03:59:13 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:04.427 03:59:13 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:04.427 ************************************ 00:04:04.427 START TEST denied 00:04:04.427 ************************************ 00:04:04.427 03:59:13 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:04:04.427 03:59:13 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:04:04.427 03:59:13 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:04.427 03:59:13 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:04:04.427 03:59:13 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.427 03:59:13 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:08.623 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:04:08.623 03:59:16 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:04:08.623 03:59:16 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:08.623 03:59:16 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:08.623 03:59:16 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:04:08.623 03:59:16 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:04:08.623 03:59:16 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:08.623 03:59:16 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:08.623 03:59:16 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:08.623 03:59:16 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:08.623 03:59:16 setup.sh.acl.denied -- setup/common.sh@12 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:04:13.898 00:04:13.898 real 0m9.200s 00:04:13.898 user 0m2.730s 00:04:13.898 sys 0m5.610s 00:04:13.898 03:59:22 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.898 03:59:22 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:13.898 ************************************ 00:04:13.898 END TEST denied 00:04:13.898 ************************************ 00:04:13.898 03:59:22 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:13.898 03:59:22 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:13.898 03:59:22 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.898 03:59:22 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.898 03:59:22 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:13.898 ************************************ 00:04:13.898 START TEST allowed 00:04:13.898 ************************************ 00:04:13.898 03:59:22 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:13.898 03:59:22 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:04:13.898 03:59:22 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:13.898 03:59:22 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:04:13.898 03:59:22 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.898 03:59:22 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:20.466 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:20.466 03:59:28 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:20.466 03:59:28 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:20.466 03:59:28 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:20.466 03:59:28 setup.sh.acl.allowed -- setup/common.sh@9 -- 
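The `denied` test above sets `PCI_BLOCKED=' 0000:d8:00.0'` and greps for "Skipping denied controller", while the `allowed` test sets `PCI_ALLOWED=0000:d8:00.0` and expects the `nvme -> vfio-pci` rebind line. A sketch of just the allow/deny decision those environment variables drive — the function name `pci_can_use` and the exact matching are illustrative here, and no actual driver rebinding is performed:

```shell
#!/usr/bin/env bash
# Sketch of setup.sh's allow/deny filtering: PCI_BLOCKED lists BDFs to
# skip; a non-empty PCI_ALLOWED restricts setup to only the listed BDFs.
pci_can_use() {
  local bdf=$1
  [[ " $PCI_BLOCKED " == *" $bdf "* ]] && return 1
  [[ -z $PCI_ALLOWED ]] && return 0
  [[ " $PCI_ALLOWED " == *" $bdf "* ]]
}

PCI_BLOCKED='0000:d8:00.0'; PCI_ALLOWED=
pci_can_use 0000:d8:00.0 || echo 'Skipping denied controller at 0000:d8:00.0'

PCI_BLOCKED=; PCI_ALLOWED='0000:d8:00.0'
pci_can_use 0000:d8:00.0 && echo '0000:d8:00.0: would rebind nvme -> vfio-pci'
```

Padding both sides with spaces before the glob match keeps `0000:d8:00.0` from accidentally matching a longer BDF that merely contains it as a substring.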
# [[ reset == output ]] 00:04:20.466 03:59:28 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:04:24.705 00:04:24.705 real 0m10.795s 00:04:24.705 user 0m2.985s 00:04:24.705 sys 0m6.011s 00:04:24.705 03:59:33 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:24.705 03:59:33 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:24.705 ************************************ 00:04:24.705 END TEST allowed 00:04:24.705 ************************************ 00:04:24.705 03:59:33 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:24.705 00:04:24.705 real 0m29.053s 00:04:24.705 user 0m9.002s 00:04:24.705 sys 0m17.716s 00:04:24.705 03:59:33 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:24.706 03:59:33 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:24.706 ************************************ 00:04:24.706 END TEST acl 00:04:24.706 ************************************ 00:04:24.706 03:59:33 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:24.706 03:59:33 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/hugepages.sh 00:04:24.706 03:59:33 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:24.706 03:59:33 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.706 03:59:33 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:24.706 ************************************ 00:04:24.706 START TEST hugepages 00:04:24.706 ************************************ 00:04:24.706 03:59:33 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/hugepages.sh 00:04:24.706 * Looking for test storage... 
00:04:24.706 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41597216 kB' 'MemAvailable: 45088080 kB' 'Buffers: 12524 kB' 'Cached: 10513524 kB' 'SwapCached: 0 kB' 'Active: 7519672 kB' 'Inactive: 3457800 kB' 'Active(anon): 7126312 kB' 'Inactive(anon): 0 kB' 'Active(file): 393360 kB' 'Inactive(file): 3457800 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 454628 kB' 'Mapped: 220196 kB' 'Shmem: 6674888 kB' 'KReclaimable: 273684 kB' 'Slab: 890664 kB' 'SReclaimable: 273684 kB' 'SUnreclaim: 616980 kB' 'KernelStack: 21984 kB' 'PageTables: 9188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439060 kB' 'Committed_AS: 8562356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216332 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3126644 kB' 'DirectMap2M: 18579456 kB' 'DirectMap1G: 47185920 kB' 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:24.706 03:59:33 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:24.706 
03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:24.706 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ 
SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:24.707 03:59:33 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:24.707 
03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:24.707 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:24.708 03:59:33 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:24.708 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce 
== \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:24.708 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # continue [... identical setup/common.sh@31-32 trace (IFS=': ' / read -r var val _ / [[ <key> == \H\u\g\e\p\a\g\e\s\i\z\e ]] / continue) repeated for each remaining /proc/meminfo key, WritebackTmp through HugePages_Surp ...] 00:04:24.709 03:59:33 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:24.709 03:59:33 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:24.709 03:59:33 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:24.709 03:59:33 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:24.709 03:59:33 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:24.709 03:59:33 setup.sh.hugepages -- setup/hugepages.sh@18 -- #
global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:24.709 03:59:33 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:24.709 03:59:33 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:24.709 03:59:33 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:24.709 03:59:33 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:24.709 03:59:33 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:24.709 03:59:33 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:24.709 03:59:33 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:24.709 03:59:33 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:24.709 03:59:33 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:24.709 03:59:33 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:24.709 03:59:33 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:24.709 03:59:33 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:24.709 03:59:33 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:24.709 03:59:33 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:24.709 03:59:33 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:24.709 03:59:33 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:24.709 03:59:33 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:24.709 03:59:33 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:24.709 03:59:33 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:24.709 03:59:33 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 
00:04:24.709 03:59:33 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:24.709 03:59:33 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:24.709 03:59:33 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:24.709 03:59:33 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:24.709 03:59:33 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:24.709 03:59:33 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:24.709 03:59:33 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:24.709 03:59:33 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:24.709 03:59:33 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.709 03:59:33 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:24.709 ************************************ 00:04:24.709 START TEST default_setup 00:04:24.709 ************************************ 00:04:24.709 03:59:33 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:04:24.709 03:59:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:24.709 03:59:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:24.709 03:59:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:24.709 03:59:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:24.709 03:59:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:24.709 03:59:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:24.709 03:59:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:24.709 03:59:33 
setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:24.709 03:59:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:24.709 03:59:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:24.709 03:59:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:24.709 03:59:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:24.709 03:59:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:24.709 03:59:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:24.709 03:59:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:24.709 03:59:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:24.709 03:59:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:24.709 03:59:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:24.709 03:59:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:24.971 03:59:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:24.971 03:59:33 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:24.971 03:59:33 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:04:29.167 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:29.167 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:29.167 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:29.167 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:29.167 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:29.167 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:29.167 0000:00:04.1 (8086 2021): ioatdma -> 
vfio-pci 00:04:29.167 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:29.167 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:29.167 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:29.167 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:29.167 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:29.167 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:29.167 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:29.167 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:29.167 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:31.082 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:31.082 03:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:31.082 03:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:31.082 03:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:31.082 03:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:31.082 03:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:31.082 03:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:31.082 03:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:31.082 03:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:31.082 03:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:31.082 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:31.082 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:31.082 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:31.082 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:31.082 03:59:39 setup.sh.hugepages.default_setup -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.082 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.082 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.082 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.082 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.082 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.082 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.082 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43741772 kB' 'MemAvailable: 47232248 kB' 'Buffers: 12524 kB' 'Cached: 10513664 kB' 'SwapCached: 0 kB' 'Active: 7538196 kB' 'Inactive: 3457800 kB' 'Active(anon): 7144836 kB' 'Inactive(anon): 0 kB' 'Active(file): 393360 kB' 'Inactive(file): 3457800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473308 kB' 'Mapped: 220316 kB' 'Shmem: 6675028 kB' 'KReclaimable: 272908 kB' 'Slab: 887216 kB' 'SReclaimable: 272908 kB' 'SUnreclaim: 614308 kB' 'KernelStack: 22016 kB' 'PageTables: 9052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8579200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216108 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3126644 kB' 'DirectMap2M: 
18579456 kB' 'DirectMap1G: 47185920 kB' 00:04:31.082 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.082 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue [... identical setup/common.sh@31-32 trace (IFS=': ' / read -r var val _ / [[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue) repeated for each /proc/meminfo key, MemFree through HardwareCorrupted ...] 00:04:31.083 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.084 03:59:39
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43761172 kB' 'MemAvailable: 47251648 kB' 'Buffers: 12524 kB' 'Cached: 10513668 kB' 'SwapCached: 0 kB' 'Active: 7537788 kB' 'Inactive: 3457800 kB' 'Active(anon): 7144428 kB' 'Inactive(anon): 0 kB' 'Active(file): 393360 kB' 'Inactive(file): 3457800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472816 kB' 'Mapped: 220296 kB' 'Shmem: 6675032 kB' 'KReclaimable: 272908 kB' 'Slab: 887296 kB' 'SReclaimable: 272908 kB' 'SUnreclaim: 614388 kB' 'KernelStack: 22016 kB' 'PageTables: 9300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8578852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216012 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3126644 kB' 'DirectMap2M: 18579456 kB' 'DirectMap1G: 47185920 kB' 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val 
_ 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.084 03:59:39 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.084 03:59:39 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.084 03:59:39 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 
03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.085 03:59:39 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.085 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 03:59:39 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # 
mapfile -t mem 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43760540 kB' 'MemAvailable: 47251016 kB' 'Buffers: 12524 kB' 'Cached: 10513684 kB' 'SwapCached: 0 kB' 'Active: 7537916 kB' 'Inactive: 3457800 kB' 'Active(anon): 7144556 kB' 'Inactive(anon): 0 kB' 'Active(file): 393360 kB' 'Inactive(file): 3457800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472856 kB' 'Mapped: 220296 kB' 'Shmem: 6675048 kB' 'KReclaimable: 272908 kB' 'Slab: 887224 kB' 'SReclaimable: 272908 kB' 'SUnreclaim: 614316 kB' 'KernelStack: 21968 kB' 'PageTables: 8732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8580608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216108 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3126644 kB' 'DirectMap2M: 18579456 kB' 'DirectMap1G: 47185920 kB' 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.086 03:59:39 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.086 03:59:39 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 03:59:39 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # IFS=': ' 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.088 
03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:31.088 nr_hugepages=1024 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:31.088 resv_hugepages=0 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:31.088 surplus_hugepages=0 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:31.088 anon_hugepages=0 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:31.088 03:59:39 
setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43760024 kB' 'MemAvailable: 47250500 kB' 'Buffers: 12524 kB' 'Cached: 10513704 kB' 'SwapCached: 0 kB' 'Active: 7537744 kB' 'Inactive: 3457800 kB' 'Active(anon): 7144384 kB' 'Inactive(anon): 0 kB' 'Active(file): 393360 kB' 'Inactive(file): 3457800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472612 kB' 'Mapped: 220296 kB' 'Shmem: 6675068 kB' 'KReclaimable: 272908 kB' 'Slab: 887224 kB' 'SReclaimable: 272908 kB' 'SUnreclaim: 614316 kB' 'KernelStack: 21920 kB' 'PageTables: 8752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8580632 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216108 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3126644 kB' 'DirectMap2M: 18579456 kB' 'DirectMap1G: 47185920 kB' 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.089 
03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.089 03:59:39 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.089 03:59:39 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.089 03:59:39 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
[[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.089 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.090 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.090 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.090 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.090 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.090 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:31.090 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.090 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:31.090 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.090 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[setup/common.sh@31-32: identical "IFS=': ' read -r var val _" / "continue" iterations repeat for every remaining /proc/meminfo field (CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted); none matches HugePages_Total]
00:04:31.090 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.090 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:31.090 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:31.090 03:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:31.090 03:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:04:31.090 03:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:04:31.090 03:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:31.090 03:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:31.090 03:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:31.090 03:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:31.090 03:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:31.090 03:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:31.090 03:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:31.090 03:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:31.090 03:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:31.090 03:59:39
setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:31.090 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:31.090 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:31.090 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:31.090 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.090 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:31.090 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:31.090 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.090 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.090 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:31.090 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:31.090 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 26052380 kB' 'MemUsed: 6586760 kB' 'SwapCached: 0 kB' 'Active: 2412532 kB' 'Inactive: 134628 kB' 'Active(anon): 2280636 kB' 'Inactive(anon): 0 kB' 'Active(file): 131896 kB' 'Inactive(file): 134628 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2294784 kB' 'Mapped: 120740 kB' 'AnonPages: 255620 kB' 'Shmem: 2028260 kB' 'KernelStack: 12776 kB' 'PageTables: 5668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121056 kB' 'Slab: 415188 kB' 'SReclaimable: 121056 kB' 'SUnreclaim: 294132 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 
'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:31.090 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.090 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[setup/common.sh@31-32: identical "IFS=': ' read -r var val _" / "continue" iterations repeat for every node0 meminfo field (MemTotal through HugePages_Free); none matches HugePages_Surp]
00:04:31.092 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.092 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:31.092 03:59:39 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:31.092 03:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:31.092 03:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:31.092 03:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:31.092 03:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:31.092 03:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:31.092 node0=1024 expecting 1024
00:04:31.092 03:59:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 ==
\1\0\2\4 ]] 00:04:31.092 00:04:31.092 real 0m6.297s 00:04:31.092 user 0m1.668s 00:04:31.092 sys 0m2.924s 00:04:31.092 03:59:39 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:31.092 03:59:39 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:31.092 ************************************ 00:04:31.092 END TEST default_setup 00:04:31.092 ************************************ 00:04:31.092 03:59:39 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:31.092 03:59:39 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:31.092 03:59:39 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:31.092 03:59:39 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.092 03:59:39 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:31.092 ************************************ 00:04:31.092 START TEST per_node_1G_alloc 00:04:31.092 ************************************ 00:04:31.352 03:59:39 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:04:31.352 03:59:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:31.352 03:59:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:31.352 03:59:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:31.352 03:59:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:31.352 03:59:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:31.352 03:59:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:31.352 03:59:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:31.352 03:59:39 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:31.352 03:59:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:31.353 03:59:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:31.353 03:59:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:31.353 03:59:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:31.353 03:59:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:31.353 03:59:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:31.353 03:59:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:31.353 03:59:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:31.353 03:59:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:31.353 03:59:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:31.353 03:59:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:31.353 03:59:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:31.353 03:59:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:31.353 03:59:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:31.353 03:59:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:31.353 03:59:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:31.353 03:59:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:31.353 03:59:39 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@9 -- # [[ output == output ]] 00:04:31.353 03:59:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:04:35.557 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:35.557 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:35.557 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:35.557 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:35.557 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:35.557 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:35.557 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:35.557 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:35.557 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:35.557 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:35.557 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:35.557 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:35.557 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:35.557 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:35.557 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:35.557 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:35.557 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:35.557 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:35.557 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:35.557 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:35.557 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:35.557 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:35.557 03:59:44 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:35.557 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:35.557 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:35.557 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:35.557 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:35.557 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:35.557 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:35.557 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:35.557 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.557 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.557 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.557 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.557 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.557 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.557 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.557 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.557 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43776960 kB' 'MemAvailable: 47267436 kB' 'Buffers: 12524 kB' 'Cached: 10513824 kB' 'SwapCached: 0 kB' 'Active: 7536536 kB' 'Inactive: 3457800 kB' 'Active(anon): 7143176 
kB' 'Inactive(anon): 0 kB' 'Active(file): 393360 kB' 'Inactive(file): 3457800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 471232 kB' 'Mapped: 219224 kB' 'Shmem: 6675188 kB' 'KReclaimable: 272908 kB' 'Slab: 886764 kB' 'SReclaimable: 272908 kB' 'SUnreclaim: 613856 kB' 'KernelStack: 21824 kB' 'PageTables: 8496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8568864 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216156 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3126644 kB' 'DirectMap2M: 18579456 kB' 'DirectMap1G: 47185920 kB' 00:04:35.557 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.557 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.557 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.557 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.557 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.557 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.557 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.557 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.557 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- 
# [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.557 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[setup/common.sh@31-32: identical "IFS=': ' read -r var val _" / "continue" iterations repeat for the remaining /proc/meminfo fields (MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap …); transcript truncates mid-iteration at "[[ Zswap =="]
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.558 03:59:44 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.558 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.559 
03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
60295220 kB' 'MemFree: 43778384 kB' 'MemAvailable: 47268860 kB' 'Buffers: 12524 kB' 'Cached: 10513828 kB' 'SwapCached: 0 kB' 'Active: 7536244 kB' 'Inactive: 3457800 kB' 'Active(anon): 7142884 kB' 'Inactive(anon): 0 kB' 'Active(file): 393360 kB' 'Inactive(file): 3457800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 470920 kB' 'Mapped: 219152 kB' 'Shmem: 6675192 kB' 'KReclaimable: 272908 kB' 'Slab: 886808 kB' 'SReclaimable: 272908 kB' 'SUnreclaim: 613900 kB' 'KernelStack: 21808 kB' 'PageTables: 8444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8568884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216124 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3126644 kB' 'DirectMap2M: 18579456 kB' 'DirectMap1G: 47185920 kB' 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:35.559 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.559
[... the same setup/common.sh@31 IFS=': ' / read -r var val _ / @32 [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue trace repeats for every /proc/meminfo key (MemAvailable through Unaccepted), timestamps 00:04:35.559-00:04:35.561 ...] 00:04:35.561
03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.561 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.561 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.561 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.561 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.561 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.561 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.561 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.561 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.561 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.561 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.561 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.561 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.561 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:35.561 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:35.561 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:35.561 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:35.561 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:35.561 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@18 -- # local node= 00:04:35.561 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:35.561 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.561 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.561 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.561 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.561 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.561 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.561 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.561 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43778948 kB' 'MemAvailable: 47269424 kB' 'Buffers: 12524 kB' 'Cached: 10513844 kB' 'SwapCached: 0 kB' 'Active: 7536264 kB' 'Inactive: 3457800 kB' 'Active(anon): 7142904 kB' 'Inactive(anon): 0 kB' 'Active(file): 393360 kB' 'Inactive(file): 3457800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 470932 kB' 'Mapped: 219152 kB' 'Shmem: 6675208 kB' 'KReclaimable: 272908 kB' 'Slab: 886808 kB' 'SReclaimable: 272908 kB' 'SUnreclaim: 613900 kB' 'KernelStack: 21808 kB' 'PageTables: 8444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8568904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216124 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3126644 kB' 'DirectMap2M: 18579456 kB' 'DirectMap1G: 47185920 kB' 00:04:35.561 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.561 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.561 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.561 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.561 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.561 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.561 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.561 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.561 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.561 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.561 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.561 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.561 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.561 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.561 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.561 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.561 03:59:44 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.561 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.562 03:59:44 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.562 03:59:44 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.562 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.563 
03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.563 03:59:44 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:35.563 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:35.563 nr_hugepages=1024 00:04:35.564 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:35.564 resv_hugepages=0 00:04:35.564 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:35.564 surplus_hugepages=0 00:04:35.564 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:35.564 anon_hugepages=0 00:04:35.564 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:35.564 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:35.564 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:35.564 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:35.564 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:35.564 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:35.564 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.564 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.564 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.564 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.564 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.564 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.564 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.564 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.564 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43778444 kB' 'MemAvailable: 47268920 kB' 'Buffers: 12524 kB' 'Cached: 10513868 kB' 'SwapCached: 0 kB' 'Active: 7536144 kB' 'Inactive: 3457800 kB' 'Active(anon): 7142784 kB' 'Inactive(anon): 0 kB' 'Active(file): 393360 kB' 'Inactive(file): 3457800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 470760 kB' 'Mapped: 219152 kB' 'Shmem: 6675232 kB' 'KReclaimable: 272908 kB' 'Slab: 886808 kB' 'SReclaimable: 272908 kB' 'SUnreclaim: 613900 kB' 'KernelStack: 21792 kB' 'PageTables: 8392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8568928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216124 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3126644 kB' 'DirectMap2M: 18579456 kB' 'DirectMap1G: 47185920 kB' 00:04:35.564 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.564 03:59:44 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.564
[... identical IFS=': ' / read -r var val _ / continue trace repeated for every non-matching /proc/meminfo field (MemFree through Unaccepted) ...]
03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.566 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:35.566 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:35.566 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:35.566 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:35.566 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:35.566 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:35.566 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:35.566 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:35.566 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:35.566 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:35.566
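[editor's note] The trace above is setup/common.sh's get_meminfo helper scanning a meminfo file field by field with `IFS=': ' read -r var val _` until the requested key matches. A minimal standalone sketch of the same technique (a hypothetical re-creation, not the SPDK helper itself; the optional third file argument is added here only so it can be exercised without root):

```shell
get_meminfo() {
    # Hypothetical sketch: $1 = field name, $2 = optional NUMA node,
    # $3 = optional file override (assumption, for testability).
    local get=$1 node=$2
    local mem_f=${3:-/proc/meminfo}
    # Per-node counters live in sysfs; each line carries a "Node N " prefix.
    if [[ -n $node && -z $3 ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    while IFS=': ' read -r var val _; do
        # "HugePages_Total:    1024" splits into var=HugePages_Total val=1024
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
    return 1
}
```

On the machine in this trace, `get_meminfo HugePages_Total` prints 1024, matching the `echo 1024` seen above.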
03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:35.566 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:35.566 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:35.566 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:35.566 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:35.566 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:35.566 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:35.566 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.566 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.566 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:35.566 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:35.566 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.566 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.566 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.566 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.566 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 27105788 kB' 'MemUsed: 5533352 kB' 'SwapCached: 0 kB' 'Active: 2412772 kB' 'Inactive: 134628 kB' 'Active(anon): 2280876 kB' 'Inactive(anon): 0 kB' 'Active(file): 131896 kB' 'Inactive(file): 134628 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2294976 kB' 'Mapped: 119968 kB' 'AnonPages: 255628 kB' 'Shmem: 2028452 kB' 'KernelStack: 12696 kB' 'PageTables: 5500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121056 kB' 'Slab: 414912 kB' 'SReclaimable: 121056 kB' 'SUnreclaim: 293856 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:35.566 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.566 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... identical IFS=': ' / read -r var val _ / continue trace repeated for every non-matching node0 meminfo field (MemFree through ShmemPmdMapped) ...]
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.567 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.567 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.567 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.567 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.567 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.567 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.567 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.567 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.567 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.567 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.567 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.567 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.567 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.567 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.567 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.567 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.567 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.567 03:59:44 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:35.567 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.567 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.568 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:35.568 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:35.568 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:35.568 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:35.568 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:35.568 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:35.568 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:35.568 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:35.568 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:35.568 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.568 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.568 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:35.568 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:35.568 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.568 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.568 03:59:44 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.568 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.568 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 16672464 kB' 'MemUsed: 10983616 kB' 'SwapCached: 0 kB' 'Active: 5123476 kB' 'Inactive: 3323172 kB' 'Active(anon): 4862012 kB' 'Inactive(anon): 0 kB' 'Active(file): 261464 kB' 'Inactive(file): 3323172 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8231456 kB' 'Mapped: 99184 kB' 'AnonPages: 215228 kB' 'Shmem: 4646820 kB' 'KernelStack: 9112 kB' 'PageTables: 2948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 151852 kB' 'Slab: 471896 kB' 'SReclaimable: 151852 kB' 'SUnreclaim: 320044 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:35.568 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.568 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.568 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.568 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.568 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.568 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.568 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.568 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.568 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- 
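The xtrace above is setup/common.sh's get_meminfo helper walking the node1 meminfo dump field by field until HugePages_Surp matches, then echoing its value. A minimal re-creation of that scan (the function name `meminfo_get` and the explicit file argument are my own; the real helper selects /proc/meminfo or /sys/devices/system/node/nodeN/meminfo itself):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo loop traced above: split each "Field: value"
# line on IFS=': ', strip the "Node N " prefix that per-node meminfo
# files carry, and print the value of the first matching field.
meminfo_get() {
  local get=$1 file=$2 line var val _
  while IFS= read -r line; do
    # per-node files prefix every line with "Node N "
    [[ $line =~ ^Node\ [0-9]+\ (.*)$ ]] && line=${BASH_REMATCH[1]}
    IFS=': ' read -r var val _ <<<"$line"
    [[ $var == "$get" ]] && { echo "$val"; return 0; }
  done < "$file"
  return 1
}
```

For the dump above, `meminfo_get HugePages_Surp /sys/devices/system/node/node1/meminfo` would print `0`.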
00:04:35.568 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [repetitive xtrace elided: same per-field scan over the node1 meminfo dump (MemUsed through HugePages_Total), no field matching HugePages_Surp] 00:04:35.569 03:59:44 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:35.569 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.569 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.569 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.569 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.569 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.569 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.569 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:35.569 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:35.569 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:35.569 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:35.569 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:35.569 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:35.569 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:35.569 node0=512 expecting 512 00:04:35.569 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:35.569 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:35.569 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:35.569 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 
expecting 512' 00:04:35.569 node1=512 expecting 512 00:04:35.569 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:35.569 00:04:35.569 real 0m4.401s 00:04:35.569 user 0m1.654s 00:04:35.569 sys 0m2.827s 00:04:35.569 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:35.569 03:59:44 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:35.569 ************************************ 00:04:35.569 END TEST per_node_1G_alloc 00:04:35.569 ************************************ 00:04:35.569 03:59:44 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:35.569 03:59:44 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:35.569 03:59:44 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:35.569 03:59:44 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.569 03:59:44 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:35.829 ************************************ 00:04:35.829 START TEST even_2G_alloc 00:04:35.829 ************************************ 00:04:35.829 03:59:44 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:35.829 03:59:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:35.829 03:59:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:35.829 03:59:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:35.829 03:59:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:35.829 03:59:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:35.829 03:59:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:35.829 
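The closing check above (`node0=512 expecting 512`, `node1=512 expecting 512`, then `[[ 512 == \5\1\2 ]]`) works by collapsing the per-node counts into associative arrays keyed by value (`sorted_t[nodes_test[node]]=1`), so the test passes only when a single distinct count remains. A hedged sketch of that idea (the helper name is my own):

```shell
#!/usr/bin/env bash
# Exit 0 only when every node reports the same hugepage count,
# mirroring the sorted_t[nodes_test[node]]=1 trick in hugepages.sh.
verify_even_split() {
  local -A seen=()
  local count
  for count in "$@"; do
    seen[$count]=1          # set semantics: duplicate counts collapse
  done
  (( ${#seen[@]} == 1 ))    # one distinct value => evenly allocated
}
```

`verify_even_split 512 512` succeeds, while `verify_even_split 512 256` fails.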
03:59:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:35.829 03:59:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:35.829 03:59:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:35.829 03:59:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:35.829 03:59:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:35.829 03:59:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:35.829 03:59:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:35.829 03:59:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:35.829 03:59:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:35.829 03:59:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:35.829 03:59:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:35.829 03:59:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:35.829 03:59:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:35.829 03:59:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:35.829 03:59:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:35.829 03:59:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:35.829 03:59:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:35.829 03:59:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:35.829 03:59:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:35.829 03:59:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup 
output 00:04:35.829 03:59:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.829 03:59:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:04:39.118 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:39.118 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:39.118 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:39.118 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:39.118 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:39.118 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:39.118 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:39.118 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:39.118 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:39.118 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:39.118 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:39.118 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:39.118 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:39.118 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:39.118 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:39.118 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:39.118 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:39.118 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:39.118 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:39.118 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:39.118 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:39.118 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 
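The even_2G_alloc prologue above calls get_test_nr_hugepages with 2097152 and HUGE_EVEN_ALLOC=yes: judging by the hugepages.sh@57 arithmetic (2097152 -> nr_hugepages=1024), the size is in kB and the divisor is the 2048 kB default hugepage size, after which the `(( _no_nodes > 0 ))` loop assigns 512 pages to each of the two nodes. A rough re-derivation (function name is my own, units assumed as described):

```shell
#!/usr/bin/env bash
# size_kb / 2048 kB-per-hugepage = nr_hugepages, divided evenly
# across the NUMA nodes, as the even_2G_alloc trace does.
split_hugepages_evenly() {
  local size_kb=$1 nodes=$2
  local nr=$((size_kb / 2048))
  local per_node=$((nr / nodes)) i
  for ((i = 0; i < nodes; i++)); do
    echo "node$i=$per_node"
  done
}
```

`split_hugepages_evenly 2097152 2` prints `node0=512` and `node1=512`, matching the counts the test expects.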
00:04:39.118 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:39.118 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:39.118 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:39.118 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:39.118 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:39.118 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:39.118 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:39.118 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.118 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.118 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.118 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.118 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.118 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.118 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.118 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.119 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43766316 kB' 'MemAvailable: 47256792 kB' 'Buffers: 12524 kB' 'Cached: 10513988 kB' 'SwapCached: 0 kB' 'Active: 7537332 kB' 'Inactive: 3457800 kB' 'Active(anon): 7143972 kB' 'Inactive(anon): 0 kB' 'Active(file): 393360 kB' 'Inactive(file): 3457800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 
kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 471752 kB' 'Mapped: 219280 kB' 'Shmem: 6675352 kB' 'KReclaimable: 272908 kB' 'Slab: 887104 kB' 'SReclaimable: 272908 kB' 'SUnreclaim: 614196 kB' 'KernelStack: 21712 kB' 'PageTables: 7796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8569300 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216220 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3126644 kB' 'DirectMap2M: 18579456 kB' 'DirectMap1G: 47185920 kB' 00:04:39.119 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.119 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.119 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.119 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.119 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.119 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.383 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.383 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.383 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.383 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.383 03:59:47 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.383 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.383 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.383 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.383 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.383 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.383 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.383 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.383 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.383 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.383 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.383 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.383 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.383 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.383 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.383 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.383 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.383 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.383 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.383 03:59:47 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:39.383 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.383 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.383 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.383 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.383 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.383 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.383 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.383 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.383 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.383 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.383 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.383 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.383 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.383 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.383 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.383 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.383 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.383 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.383 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.383 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.383 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.383 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.383 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.383 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.383 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.383 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.383 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.383 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.384 
03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.384 03:59:47 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 
-- # local get=HugePages_Surp 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.384 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43767320 kB' 'MemAvailable: 47257796 kB' 'Buffers: 12524 kB' 'Cached: 10513992 kB' 'SwapCached: 0 kB' 'Active: 7537032 kB' 'Inactive: 3457800 kB' 'Active(anon): 7143672 kB' 'Inactive(anon): 0 kB' 'Active(file): 393360 kB' 'Inactive(file): 3457800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 471644 kB' 'Mapped: 219172 kB' 'Shmem: 6675356 kB' 'KReclaimable: 272908 kB' 'Slab: 887132 kB' 'SReclaimable: 272908 kB' 'SUnreclaim: 614224 kB' 'KernelStack: 21824 kB' 'PageTables: 8392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8572060 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216140 kB' 'VmallocChunk: 0 kB' 'Percpu: 
96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3126644 kB' 'DirectMap2M: 18579456 kB' 'DirectMap1G: 47185920 kB' 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.385 03:59:47 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.385 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.385 [... identical setup/common.sh@31-32 trace repeated for each remaining /proc/meminfo field (Slab through HugePages_Rsvd); every non-matching key hits 'continue' ...] 00:04:39.386 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.386 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:39.386 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:39.386 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:39.386 
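The trace above is setup/common.sh's get_meminfo scanning /proc/meminfo one field at a time until the requested key matches. A minimal standalone sketch of that parse (a hypothetical simplified helper; the real setup/common.sh also supports per-NUMA-node meminfo under /sys/devices/system/node, which is why the log probes node/meminfo paths):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern visible in the xtrace: split each
# /proc/meminfo line on ': ' and echo the value for one requested key.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Every non-matching key is skipped -- these are the
        # repeated 'continue' lines in the log above.
        [[ $var == "$get" ]] || continue
        echo "$val"        # value only, e.g. '0' for HugePages_Surp
        return 0
    done < /proc/meminfo
}

surp=$(get_meminfo HugePages_Surp)   # the run above logged surp=0
```

The real script instead captures the whole snapshot with `mapfile -t mem` before scanning, which is why the log shows a single large `printf '%s\n' 'MemTotal: ...'` burst followed by the field-by-field comparisons.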
03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:39.386 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:39.386 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:39.386 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:39.386 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.386 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.386 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.386 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.386 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.386 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.387 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43767348 kB' 'MemAvailable: 47257824 kB' 'Buffers: 12524 kB' 'Cached: 10513992 kB' 'SwapCached: 0 kB' 'Active: 7536720 kB' 'Inactive: 3457800 kB' 'Active(anon): 7143360 kB' 'Inactive(anon): 0 kB' 'Active(file): 393360 kB' 'Inactive(file): 3457800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 471360 kB' 'Mapped: 219172 kB' 'Shmem: 6675356 kB' 'KReclaimable: 272908 kB' 'Slab: 887132 kB' 'SReclaimable: 272908 kB' 'SUnreclaim: 614224 kB' 'KernelStack: 21760 kB' 'PageTables: 8200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8569344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216108 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3126644 kB' 'DirectMap2M: 18579456 kB' 'DirectMap1G: 47185920 kB' 00:04:39.387 [... identical setup/common.sh@31-32 trace repeated for each /proc/meminfo field (MemTotal through HugePages_Free); every non-matching key hits 'continue' ...] 00:04:39.388 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.388 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:39.388 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:39.388 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:39.388 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:39.388 nr_hugepages=1024 00:04:39.388 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:39.388 
resv_hugepages=0 00:04:39.388 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:39.388 surplus_hugepages=0 00:04:39.388 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:39.388 anon_hugepages=0 00:04:39.389 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:39.389 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:39.389 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:39.389 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:39.389 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:39.389 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:39.389 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.389 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.389 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.389 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.389 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.389 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.389 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.389 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.389 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43767348 kB' 'MemAvailable: 47257824 kB' 'Buffers: 12524 kB' 'Cached: 10514032 kB' 
'SwapCached: 0 kB' 'Active: 7536952 kB' 'Inactive: 3457800 kB' 'Active(anon): 7143592 kB' 'Inactive(anon): 0 kB' 'Active(file): 393360 kB' 'Inactive(file): 3457800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 471000 kB' 'Mapped: 219172 kB' 'Shmem: 6675396 kB' 'KReclaimable: 272908 kB' 'Slab: 887132 kB' 'SReclaimable: 272908 kB' 'SUnreclaim: 614224 kB' 'KernelStack: 21760 kB' 'PageTables: 8196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8569500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216108 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3126644 kB' 'DirectMap2M: 18579456 kB' 'DirectMap1G: 47185920 kB' 00:04:39.389 03:59:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.389 03:59:48 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.389 
03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.389 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.390 
03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.390 
03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.390 
03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:39.390 
03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.390 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 27092568 kB' 'MemUsed: 5546572 kB' 'SwapCached: 0 kB' 'Active: 2411116 kB' 'Inactive: 134628 kB' 'Active(anon): 2279220 kB' 'Inactive(anon): 0 kB' 'Active(file): 131896 kB' 'Inactive(file): 134628 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2295104 kB' 'Mapped: 119988 kB' 'AnonPages: 253780 kB' 'Shmem: 2028580 kB' 'KernelStack: 12680 kB' 'PageTables: 5404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121056 kB' 'Slab: 415348 kB' 'SReclaimable: 121056 kB' 'SUnreclaim: 294292 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.391 
03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.391 03:59:48 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.391 03:59:48 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.391 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.392 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.392 03:59:48 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # echo 0 00:04:39.392 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:39.392 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:39.392 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:39.392 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:39.392 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:39.392 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:39.392 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:39.392 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:39.392 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.392 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.392 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:39.392 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:39.392 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.392 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.392 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 16673880 kB' 'MemUsed: 10982200 kB' 'SwapCached: 0 kB' 'Active: 5125628 kB' 'Inactive: 3323172 kB' 'Active(anon): 4864164 kB' 'Inactive(anon): 0 kB' 'Active(file): 261464 kB' 'Inactive(file): 3323172 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8231504 kB' 'Mapped: 99184 kB' 
'AnonPages: 217464 kB' 'Shmem: 4646868 kB' 'KernelStack: 9128 kB' 'PageTables: 3008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 151852 kB' 'Slab: 471784 kB' 'SReclaimable: 151852 kB' 'SUnreclaim: 319932 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:39.392 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.392 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.392 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.392 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.392 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.392 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.392 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.392 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.392 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.392 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.392 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.392 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.392 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.392 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.392 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.392 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:39.392 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.393 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.393 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.393 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:39.393 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:39.393 03:59:48 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:39.393 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:39.393 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:39.393 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:39.393 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:39.393 node0=512 expecting 512 00:04:39.393 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:39.393 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:39.394 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:39.394 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:39.394 node1=512 expecting 512 00:04:39.394 03:59:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:39.394 00:04:39.394 real 0m3.758s 00:04:39.394 user 0m1.178s 00:04:39.394 sys 0m2.386s 00:04:39.394 03:59:48 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:39.394 03:59:48 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:39.394 ************************************ 00:04:39.394 END TEST even_2G_alloc 00:04:39.394 ************************************ 00:04:39.394 03:59:48 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:39.394 03:59:48 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:39.394 03:59:48 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:39.394 03:59:48 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.394 
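The even_2G_alloc pass above pulls a single field out of per-node meminfo by splitting each `Field: value` line with `IFS=': '` and skipping everything until the requested key. A minimal, self-contained sketch of that lookup (hypothetical standalone `get_meminfo`, fed from a here-doc instead of `/sys/devices/system/node/nodeN/meminfo` so it runs anywhere; the real helper lives in setup/common.sh):

```shell
#!/usr/bin/env bash
shopt -s extglob   # needed for the "Node +([0-9]) " prefix pattern

# Sketch of the per-node meminfo field lookup exercised in the log above.
get_meminfo() {
    local get=$1 var val _
    local -a mem
    mapfile -t mem                    # one "Node N Field: value" line per entry
    mem=("${mem[@]#Node +([0-9]) }")  # strip the "Node N " prefix (extglob)
    # Scan "Field: value" pairs until the requested field is found.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo HugePages_Surp <<'EOF'
Node 1 HugePages_Total: 512
Node 1 HugePages_Free: 512
Node 1 HugePages_Surp: 0
EOF
# prints 0
```
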
03:59:48 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:39.653 ************************************ 00:04:39.653 START TEST odd_alloc 00:04:39.653 ************************************ 00:04:39.653 03:59:48 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:39.653 03:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:39.653 03:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:39.653 03:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:39.653 03:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:39.653 03:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:39.654 03:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:39.654 03:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:39.654 03:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:39.654 03:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:39.654 03:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:39.654 03:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:39.654 03:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:39.654 03:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:39.654 03:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:39.654 03:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:39.654 03:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:39.654 03:59:48 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@83 -- # : 513 00:04:39.654 03:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:39.654 03:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:39.654 03:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:39.654 03:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:39.654 03:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:39.654 03:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:39.654 03:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:39.654 03:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:39.654 03:59:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:39.654 03:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:39.654 03:59:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:04:43.857 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:43.857 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:43.857 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:43.857 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:43.857 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:43.857 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:43.857 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:43.857 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:43.857 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:43.857 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:43.857 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:43.857 0000:80:04.4 (8086 2021): Already using the 
vfio-pci driver 00:04:43.857 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:43.857 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:43.857 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:43.857 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:43.857 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:43.857 03:59:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:43.857 03:59:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:43.857 03:59:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:43.857 03:59:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:43.857 03:59:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:43.857 03:59:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:43.857 03:59:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:43.857 03:59:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:43.857 03:59:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:43.857 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:43.857 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:43.857 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:43.857 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:43.857 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.857 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.857 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.857 03:59:52 
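The odd_alloc setup above splits nr_hugepages=1025 across 2 NUMA nodes, and the log shows node0 ending up with 513 and node1 with 512. A sketch of that split, assuming the remainder goes to the lower-numbered nodes (the rounding is inferred from the node0=513 / node1=512 values in this log, not taken from hugepages.sh verbatim):

```shell
#!/usr/bin/env bash
# Sketch: distribute an odd hugepage count evenly across NUMA nodes,
# giving the first (total % nodes) nodes one extra page each.
split_hugepages() {
    local total=$1 nodes=$2
    local base=$(( total / nodes ))
    local extra=$(( total % nodes ))
    local n
    for (( n = 0; n < nodes; n++ )); do
        echo "node${n}=$(( base + (n < extra ? 1 : 0) ))"
    done
}

split_hugepages 1025 2
# prints:
# node0=513
# node1=512
```
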
setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.857 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.857 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.857 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43758592 kB' 'MemAvailable: 47249052 kB' 'Buffers: 12524 kB' 'Cached: 10514164 kB' 'SwapCached: 0 kB' 'Active: 7538552 kB' 'Inactive: 3457800 kB' 'Active(anon): 7145192 kB' 'Inactive(anon): 0 kB' 'Active(file): 393360 kB' 'Inactive(file): 3457800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472516 kB' 'Mapped: 219276 kB' 'Shmem: 6675528 kB' 'KReclaimable: 272876 kB' 'Slab: 887180 kB' 'SReclaimable: 272876 kB' 'SUnreclaim: 614304 kB' 'KernelStack: 21792 kB' 'PageTables: 8436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 8571732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216284 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3126644 kB' 'DirectMap2M: 18579456 kB' 'DirectMap1G: 47185920 kB' 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.858 03:59:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.858 03:59:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.858 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.859 
03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.859 03:59:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:43.859 03:59:52 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43758848 kB' 'MemAvailable: 47249292 kB' 'Buffers: 12524 kB' 'Cached: 10514164 kB' 'SwapCached: 0 kB' 'Active: 7539444 kB' 'Inactive: 3457800 kB' 'Active(anon): 7146084 kB' 'Inactive(anon): 0 kB' 'Active(file): 393360 kB' 'Inactive(file): 3457800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473380 kB' 'Mapped: 219264 kB' 'Shmem: 6675528 kB' 'KReclaimable: 272844 kB' 'Slab: 887148 kB' 'SReclaimable: 272844 kB' 'SUnreclaim: 614304 kB' 'KernelStack: 21936 kB' 'PageTables: 8576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 
'Committed_AS: 8571752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216236 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3126644 kB' 'DirectMap2M: 18579456 kB' 'DirectMap1G: 47185920 kB' 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.859 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.860 03:59:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.860 03:59:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.860 03:59:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.860 03:59:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.860 03:59:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.860 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.860 
03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.861 03:59:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.861 03:59:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43758600 kB' 'MemAvailable: 47249044 kB' 'Buffers: 12524 kB' 'Cached: 10514168 kB' 'SwapCached: 0 kB' 'Active: 7538768 kB' 'Inactive: 3457800 kB' 'Active(anon): 7145408 kB' 'Inactive(anon): 0 kB' 'Active(file): 393360 kB' 'Inactive(file): 3457800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473160 kB' 'Mapped: 219188 kB' 'Shmem: 6675532 kB' 'KReclaimable: 272844 kB' 'Slab: 887180 kB' 'SReclaimable: 272844 kB' 'SUnreclaim: 614336 kB' 'KernelStack: 21968 kB' 'PageTables: 8760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 8573388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216252 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3126644 kB' 'DirectMap2M: 18579456 kB' 'DirectMap1G: 47185920 kB' 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# continue 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.861 
03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.861 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.862 03:59:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.862 03:59:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.862 03:59:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.862 03:59:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.862 03:59:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.862 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.863 03:59:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:43.863 03:59:52 
setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:43.863 nr_hugepages=1025 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:43.863 resv_hugepages=0 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:43.863 surplus_hugepages=0 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:43.863 anon_hugepages=0 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43758492 kB' 'MemAvailable: 47248936 kB' 'Buffers: 12524 kB' 'Cached: 10514204 kB' 'SwapCached: 0 kB' 'Active: 7538476 kB' 'Inactive: 3457800 kB' 'Active(anon): 7145116 kB' 'Inactive(anon): 0 kB' 'Active(file): 393360 kB' 'Inactive(file): 3457800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472832 kB' 'Mapped: 219196 kB' 'Shmem: 6675568 kB' 'KReclaimable: 272844 kB' 'Slab: 887180 kB' 'SReclaimable: 272844 kB' 'SUnreclaim: 614336 kB' 'KernelStack: 21968 kB' 'PageTables: 8432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 8573408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216252 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3126644 kB' 'DirectMap2M: 18579456 kB' 'DirectMap1G: 47185920 kB' 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.863 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.864 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( 
no_nodes > 0 )) 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 27082816 kB' 'MemUsed: 5556324 kB' 'SwapCached: 0 kB' 'Active: 2412228 kB' 'Inactive: 134628 kB' 'Active(anon): 2280332 kB' 'Inactive(anon): 0 kB' 'Active(file): 131896 kB' 'Inactive(file): 134628 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2295216 kB' 'Mapped: 120004 kB' 'AnonPages: 254784 kB' 'Shmem: 2028692 kB' 'KernelStack: 12696 kB' 'PageTables: 5452 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 120992 kB' 'Slab: 415396 kB' 'SReclaimable: 120992 kB' 'SUnreclaim: 294404 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.865 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
[00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32: IFS=': ' / read -r var val _ / continue iterations repeated for ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total and HugePages_Free]
00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 16674080 kB' 'MemUsed: 10982000 kB' 'SwapCached: 0 kB' 'Active: 5126308 kB' 'Inactive: 3323172 kB' 'Active(anon): 4864844 kB' 'Inactive(anon): 0 kB' 'Active(file): 261464 kB' 'Inactive(file): 3323172 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8231532 kB' 'Mapped: 99184 kB' 'AnonPages: 218056 kB' 'Shmem: 4646896 kB' 'KernelStack: 9256 kB' 'PageTables: 3192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 151852 kB' 'Slab: 471784 kB' 'SReclaimable: 151852 kB' 'SUnreclaim: 319932 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:43.866 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
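The xtrace records around this point come from setup/common.sh's `get_meminfo` helper: it snapshots `/proc/meminfo` (or the per-node sysfs copy), strips the `Node <n>` prefix, then re-reads each `key: value` pair until the requested key matches, echoing `0` when the scan falls through. A minimal standalone sketch of that pattern follows; the function shape is reconstructed from the trace and the `MEM_F` override is a test hook of ours, not part of the original script:

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern visible in the xtrace; the real
# setup/common.sh may differ in details.
shopt -s extglob   # needed for the +([0-9]) pattern below

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=${MEM_F:-/proc/meminfo}   # MEM_F override: our assumption, for testing
    local -a mem
    # Per-node stats come from sysfs, as the log's node1 lookup shows.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop "Node <n> " prefix on sysfs lines
    local var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    echo 0   # key absent: report 0, mirroring common.sh@33
}

get_meminfo HugePages_Total   # system-wide hugepage count (0 if unset)
```

Called as `get_meminfo HugePages_Surp 1` it reproduces the node-1 lookup shown in the trace above.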
[00:04:43.866-00:04:43.868 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32: IFS=': ' / read -r var val _ / continue iterations repeated for every remaining node1 meminfo field, MemUsed through Unaccepted, HugePages_Total and HugePages_Free]
00:04:43.868 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:43.868 03:59:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:43.868 03:59:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:43.868 03:59:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:43.868 03:59:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:43.868 03:59:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:43.868 03:59:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:04:43.868 node0=512 expecting 513
00:04:43.868 03:59:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:43.868 03:59:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:43.868 03:59:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:43.868 03:59:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:04:43.868 node1=513 expecting 512
00:04:43.868 03:59:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:04:43.868
00:04:43.868 real	0m4.070s
00:04:43.868 user	0m1.575s
00:04:43.868 sys	0m2.570s
00:04:43.868 03:59:52 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:43.868 03:59:52 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:43.868 ************************************
00:04:43.868 END TEST odd_alloc
00:04:43.868 ************************************
00:04:43.868 03:59:52 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:43.868 03:59:52 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:43.868 03:59:52 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:43.868 03:59:52 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:43.868 ************************************
00:04:43.868 START TEST custom_alloc
00:04:43.868 ************************************
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
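The `hugepages.sh@81-@84` records above are the default even-split branch of `get_test_nr_hugepages_per_node`: with no user node list and no preset `nodes_hp[]`, 512 pages over 2 nodes become 256 per node (and in the earlier odd_alloc run, 1025 pages split 513/512). A hedged sketch of that distribution logic follows; the names come from the trace, but the remainder-handling rule is an assumption inferred from the odd_alloc result:

```shell
#!/usr/bin/env bash
# Sketch of the even-split branch at hugepages.sh@81-@84: walk nodes from the
# highest index down, giving each an even share of _nr_hugepages; any
# remainder goes to the lowest-indexed nodes (assumption: inferred from the
# odd_alloc split of 1025 pages into node0=513 / node1=512).
declare -a nodes_test

get_test_nr_hugepages_per_node() {
    local _nr_hugepages=$1 _no_nodes=$2 total_nodes=$2
    nodes_test=()
    while (( _no_nodes > 0 )); do
        # Even share for this node...
        nodes_test[_no_nodes - 1]=$(( _nr_hugepages / total_nodes ))
        # ...plus one page of the remainder for the lowest-indexed nodes.
        if (( _nr_hugepages % total_nodes > _no_nodes - 1 )); then
            (( nodes_test[_no_nodes - 1]++ ))
        fi
        (( _no_nodes-- ))
    done
}

get_test_nr_hugepages_per_node 512 2
echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # -> node0=256 node1=256
```

With 1025 pages and 2 nodes this yields node0=513 and node1=512, matching the odd_alloc expectations echoed earlier in the log.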
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:43.868 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:43.869 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:43.869 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:43.869 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:43.869 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:43.869 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:43.869 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:04:43.869 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:43.869 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:43.869 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:43.869 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:04:43.869 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:04:43.869 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:04:43.869 03:59:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:04:43.869 03:59:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output
]]
00:04:43.869 03:59:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh
00:04:47.170 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:47.170 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:47.170 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:47.170 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:47.170 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:47.170 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:47.170 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:47.170 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:47.170 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:47.170 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:47.170 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:47.170 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:47.170 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:47.170 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:47.170 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:47.170 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:47.170 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:47.170 03:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:04:47.170 03:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:04:47.170 03:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:04:47.170 03:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:47.170 03:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:47.170 03:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
03:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:47.170 03:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:47.170 03:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:47.170 03:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:47.170 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:47.170 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:47.170 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:47.170 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:47.170 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:47.170 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:47.170 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:47.170 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:47.170 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:47.170 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:47.170 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:47.170 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42690280 kB' 'MemAvailable: 46180724 kB' 'Buffers: 12524 kB' 'Cached: 10514320 kB' 'SwapCached: 0 kB' 'Active: 7539036 kB' 'Inactive: 3457800 kB' 'Active(anon): 7145676 kB' 'Inactive(anon): 0 kB' 'Active(file): 393360 kB' 'Inactive(file): 3457800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473252 kB' 'Mapped: 219228 kB' 'Shmem: 6675684 kB' 'KReclaimable: 272844 kB' 'Slab: 887340 kB' 'SReclaimable: 272844 kB' 'SUnreclaim: 614496 kB' 'KernelStack: 21936 kB' 'PageTables: 8608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 8574024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216428 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3126644 kB' 'DirectMap2M: 18579456 kB' 'DirectMap1G: 47185920 kB'
00:04:47.170 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:47.170 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:04:47.170 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:47.170 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:47.170 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:47.170 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:04:47.170 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:47.170 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:47.170 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:47.170 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
setup/common.sh@31 -- # IFS=': ' 00:04:47.170 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.170 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.170 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.170 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.171 03:59:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.171 03:59:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.171 03:59:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.171 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.172 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.172 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.172 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.172 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.172 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.172 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.172 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.172 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.172 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.172 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.172 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:47.172 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.172 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.172 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.172 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.172 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.172 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.172 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.172 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.172 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.172 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.172 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.172 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.172 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.172 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.172 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:47.172 03:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:47.172 03:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:47.172 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:47.172 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:47.172 03:59:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:47.172 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:47.172 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:47.172 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:47.172 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:47.172 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:47.172 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:47.172 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:47.172 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:47.172 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42689456 kB' 'MemAvailable: 46179900 kB' 'Buffers: 12524 kB' 'Cached: 10514320 kB' 'SwapCached: 0 kB' 'Active: 7539652 kB' 'Inactive: 3457800 kB' 'Active(anon): 7146292 kB' 'Inactive(anon): 0 kB' 'Active(file): 393360 kB' 'Inactive(file): 3457800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473888 kB' 'Mapped: 219208 kB' 'Shmem: 6675684 kB' 'KReclaimable: 272844 kB' 'Slab: 887332 kB' 'SReclaimable: 272844 kB' 'SUnreclaim: 614488 kB' 'KernelStack: 22048 kB' 'PageTables: 8628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 8574044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216332 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3126644 kB' 'DirectMap2M: 18579456 kB' 'DirectMap1G: 47185920 kB'
00:04:47.172 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:47.172 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:04:47.172 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:47.172 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:47.173 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:47.173 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:04:47.173 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:47.173 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.173 03:59:55
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.174 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.174 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.174 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.174 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.174 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.174 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.174 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.174 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.174 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.174 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.174 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.174 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.174 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.174 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:47.174 03:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:47.174 03:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:47.174 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:47.174 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:47.174 03:59:55 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@19 -- # local var val 00:04:47.174 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.174 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.174 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.174 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.174 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.174 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.174 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.174 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.174 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42690428 kB' 'MemAvailable: 46180872 kB' 'Buffers: 12524 kB' 'Cached: 10514356 kB' 'SwapCached: 0 kB' 'Active: 7538708 kB' 'Inactive: 3457800 kB' 'Active(anon): 7145348 kB' 'Inactive(anon): 0 kB' 'Active(file): 393360 kB' 'Inactive(file): 3457800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472932 kB' 'Mapped: 219208 kB' 'Shmem: 6675720 kB' 'KReclaimable: 272844 kB' 'Slab: 887348 kB' 'SReclaimable: 272844 kB' 'SUnreclaim: 614504 kB' 'KernelStack: 21776 kB' 'PageTables: 8268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 8571200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216252 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3126644 kB' 'DirectMap2M: 18579456 kB' 'DirectMap1G: 47185920 kB' 00:04:47.174 03:59:55 [trace condensed: common.sh@31-32 compares each /proc/meminfo field from MemTotal through HugePages_Free against HugePages_Rsvd, skipping every non-matching field with `continue`] 00:04:47.176 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.176 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.176 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:47.176 03:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:47.176 03:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:47.176 nr_hugepages=1536 00:04:47.176 03:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:47.176 resv_hugepages=0 00:04:47.176 03:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:47.176 surplus_hugepages=0 00:04:47.176 03:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:47.176 anon_hugepages=0 00:04:47.176 03:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:47.176 03:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages 
)) 00:04:47.176 03:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:47.176 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:47.176 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:47.176 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:47.176 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.176 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.176 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.176 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.176 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.176 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.176 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.176 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.176 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42690932 kB' 'MemAvailable: 46181376 kB' 'Buffers: 12524 kB' 'Cached: 10514356 kB' 'SwapCached: 0 kB' 'Active: 7538652 kB' 'Inactive: 3457800 kB' 'Active(anon): 7145292 kB' 'Inactive(anon): 0 kB' 'Active(file): 393360 kB' 'Inactive(file): 3457800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472844 kB' 'Mapped: 219208 kB' 'Shmem: 6675720 kB' 'KReclaimable: 272844 kB' 'Slab: 887508 kB' 'SReclaimable: 272844 kB' 'SUnreclaim: 614664 kB' 'KernelStack: 21840 kB' 'PageTables: 8440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 
kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 8571220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216252 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3126644 kB' 'DirectMap2M: 18579456 kB' 'DirectMap1G: 47185920 kB' 00:04:47.176 03:59:55 [trace condensed: common.sh@31-32 compares each /proc/meminfo field against HugePages_Total, skipping non-matching fields with `continue`; this chunk of the trace ends mid-scan, at Unevictable] 00:04:47.176 03:59:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.176 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.176 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.176 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.176 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.176 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.176 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.176 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.176 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.176 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.176 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.177 03:59:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.177 
03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.177 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.178 
03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:47.178 03:59:55 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 27067500 kB' 'MemUsed: 5571640 kB' 'SwapCached: 0 kB' 'Active: 2411784 kB' 'Inactive: 134628 kB' 'Active(anon): 2279888 kB' 'Inactive(anon): 0 kB' 'Active(file): 131896 kB' 'Inactive(file): 134628 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2295340 kB' 'Mapped: 120020 kB' 'AnonPages: 254260 kB' 'Shmem: 2028816 kB' 'KernelStack: 12728 kB' 'PageTables: 5504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 120992 kB' 'Slab: 415360 kB' 'SReclaimable: 120992 kB' 'SUnreclaim: 294368 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.178 03:59:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.178 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.179 03:59:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.179 03:59:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:47.179 03:59:55 
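The long run of `[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue / IFS=': ' / read -r var val _` lines above is one call to the harness's meminfo scanner: it walks every line of the (per-node) meminfo file until the requested field matches, then echoes its value. A minimal standalone sketch of that pattern, assuming the function name and a `MEMINFO_FILE` override added here for testability (the real helper lives in `setup/common.sh`):

```shell
#!/usr/bin/env bash
# Minimal sketch of the get_meminfo pattern traced above: walk meminfo
# line by line with IFS=': ' and print the value of one requested field.
# MEMINFO_FILE is a hypothetical override so the sketch is self-testable.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=${MEMINFO_FILE:-/proc/meminfo}
    # Per-node counters live in sysfs when a node index is given.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    shopt -s extglob
    local line var val _
    while IFS= read -r line; do
        line=${line#Node +([0-9]) }        # strip the sysfs "Node <n> " prefix
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                     # field found: print its value
            return 0
        fi
    done < "$mem_f"
    echo 0                                  # field absent: report 0, as the trace does
}
```

Called as `get_meminfo HugePages_Surp 1`, this reads `/sys/devices/system/node/node1/meminfo` and prints the node's surplus hugepage count, which is exactly the value the trace feeds into `nodes_test[node]`.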
setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 15624372 kB' 'MemUsed: 12031708 kB' 'SwapCached: 0 kB' 'Active: 5126856 kB' 'Inactive: 3323172 kB' 'Active(anon): 4865392 kB' 'Inactive(anon): 0 kB' 'Active(file): 261464 kB' 'Inactive(file): 3323172 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8231568 kB' 'Mapped: 99188 kB' 'AnonPages: 218560 kB' 'Shmem: 4646932 kB' 'KernelStack: 9128 kB' 'PageTables: 2988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 151852 kB' 'Slab: 472148 kB' 'SReclaimable: 151852 kB' 'SUnreclaim: 320296 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.179 03:59:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.179 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.180 03:59:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.180 03:59:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.180 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.181 03:59:55 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.181 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.181 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.181 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.181 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.181 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.181 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.181 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.181 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.181 03:59:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:47.181 03:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:47.181 03:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:47.181 03:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:47.181 03:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:47.181 03:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:47.181 node0=512 expecting 512 00:04:47.181 03:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:47.181 03:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:47.181 03:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:47.181 03:59:55 setup.sh.hugepages.custom_alloc -- 
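The `node0=512 expecting 512` echo above builds toward the test's single pass/fail decision: the measured per-node counts are joined with commas and compared against the expected split. A hedged sketch of that final check, using the values logged in this run:

```shell
#!/usr/bin/env bash
# Sketch of the final custom_alloc verification seen in the trace:
# per-node hugepage counts, joined with commas, must equal the expected
# "512,1024" split across node0 and node1.
declare -a nodes_test=([0]=512 [1]=1024)   # counts gathered per NUMA node

actual=$(IFS=,; echo "${nodes_test[*]}")   # comma-join in a subshell, so IFS is not clobbered
expected="512,1024"

if [[ $actual == "$expected" ]]; then
    echo "custom_alloc OK: $actual"
else
    echo "custom_alloc FAILED: got $actual, want $expected" >&2
    exit 1
fi
```

The trace's `[[ 512,1024 == \5\1\2\,\1\0\2\4 ]]` line is this comparison after xtrace has escaped the right-hand side character by character.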
setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:47.181 node1=1024 expecting 1024 00:04:47.181 03:59:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:47.181 00:04:47.181 real 0m3.560s 00:04:47.181 user 0m1.154s 00:04:47.181 sys 0m2.254s 00:04:47.181 03:59:55 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.181 03:59:55 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:47.181 ************************************ 00:04:47.181 END TEST custom_alloc 00:04:47.181 ************************************ 00:04:47.181 03:59:55 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:47.181 03:59:55 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:47.181 03:59:55 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:47.181 03:59:55 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.181 03:59:55 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:47.472 ************************************ 00:04:47.472 START TEST no_shrink_alloc 00:04:47.472 ************************************ 00:04:47.472 03:59:55 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:47.472 03:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:47.472 03:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:47.472 03:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:47.472 03:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:47.472 03:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:47.472 03:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local 
node_ids 00:04:47.472 03:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:47.472 03:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:47.472 03:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:47.472 03:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:47.472 03:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:47.472 03:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:47.472 03:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:47.472 03:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:47.472 03:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:47.472 03:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:47.472 03:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:47.472 03:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:47.472 03:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:47.472 03:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:47.472 03:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:47.472 03:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:04:51.672 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:51.672 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:51.672 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 
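The `get_test_nr_hugepages 2097152 0` call above converts a requested size into a page count by dividing by the kernel's default hugepage size: 2097152 kB / 2048 kB per page = the `nr_hugepages=1024` seen in the trace. A minimal sketch of that arithmetic; the 2048 kB fallback when `/proc/meminfo` is unreadable is an assumption added here for portability:

```shell
#!/usr/bin/env bash
# Sketch of the sizing step in get_test_nr_hugepages: the requested size
# in kB divided by the kernel's Hugepagesize gives the nr_hugepages target.
size_kb=2097152                                           # 2 GiB, as in the trace
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo 2>/dev/null)
hugepagesize_kb=${hugepagesize_kb:-2048}                  # assume 2 MB pages if unknown
nr_hugepages=$(( size_kb / hugepagesize_kb ))
echo "requesting $nr_hugepages pages of ${hugepagesize_kb} kB each"
# Applying the count needs root, e.g.:
#   echo "$nr_hugepages" | sudo tee /proc/sys/vm/nr_hugepages >/dev/null
```

With 2 MB pages this yields 1024, matching the `nr_hugepages=1024` assignment in the trace; on a system with a different default hugepage size the count would scale accordingly.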
00:04:51.672 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:51.672 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:51.672 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:51.672 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:51.672 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:51.672 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:51.672 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:51.672 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:51.672 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:51.672 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:51.672 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:51.672 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:51.672 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:51.672 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:51.672 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:51.672 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:51.672 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:51.672 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:51.672 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:51.672 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:51.672 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:51.672 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:51.672 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:51.672 03:59:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:51.672 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:51.672 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:51.672 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:51.672 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.672 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.672 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.672 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.672 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.672 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.672 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.672 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43711624 kB' 'MemAvailable: 47202068 kB' 'Buffers: 12524 kB' 'Cached: 10514480 kB' 'SwapCached: 0 kB' 'Active: 7539828 kB' 'Inactive: 3457800 kB' 'Active(anon): 7146468 kB' 'Inactive(anon): 0 kB' 'Active(file): 393360 kB' 'Inactive(file): 3457800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474456 kB' 'Mapped: 219224 kB' 'Shmem: 6675844 kB' 'KReclaimable: 272844 kB' 'Slab: 887152 kB' 'SReclaimable: 272844 kB' 'SUnreclaim: 614308 kB' 'KernelStack: 21792 kB' 'PageTables: 8412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8574332 kB' 
'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216220 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3126644 kB' 'DirectMap2M: 18579456 kB' 'DirectMap1G: 47185920 kB' 00:04:51.672
03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.672
03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.672
03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.672
03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.672
[... identical compare/continue trace elided for the remaining /proc/meminfo fields up to AnonHugePages ...]
03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.673
03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:51.673
03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:51.673
03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:51.674
03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:51.674
03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:51.674
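The compare/continue trace above is a `get_meminfo`-style helper walking `/proc/meminfo` one field at a time with `IFS=': ' read -r var val _`, echoing the value once the requested key matches. A minimal standalone sketch of that pattern, reconstructed from the trace (an assumption, not the actual `setup/common.sh` source; the `sample` values are taken from the snapshot above):

```shell
#!/usr/bin/env bash
# Sketch of the meminfo-scanning pattern seen in the trace: split each line
# on ': ' into key, value, and trailing unit, and print the value for the
# requested key. Hypothetical reconstruction, not the SPDK helper itself.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

# Meminfo-style input (sample values from the log above).
sample='MemTotal: 60295220 kB
AnonHugePages: 0 kB
HugePages_Surp: 0'

get_meminfo AnonHugePages <<<"$sample"   # prints 0
```

The real helper additionally reads node-local `/sys/devices/system/node/node<N>/meminfo` when a node is given and strips the `Node <N>` prefix first, as the `mapfile`/`mem=(...)` events later in the trace suggest.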
03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:51.674
03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:51.674
03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:51.674
03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.674
03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.674
03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.674
03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.674
03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.674
03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.674
03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.674
03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43713096 kB' 'MemAvailable: 47203540 kB' 'Buffers: 12524 kB' 'Cached: 10514480 kB' 'SwapCached: 0 kB' 'Active: 7540628 kB' 'Inactive: 3457800 kB' 'Active(anon): 7147268 kB' 'Inactive(anon): 0 kB' 'Active(file): 393360 kB' 'Inactive(file): 3457800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474712 kB' 'Mapped: 219296 kB' 'Shmem: 6675844 kB' 'KReclaimable: 272844 kB' 'Slab: 887200 kB' 'SReclaimable: 272844 kB' 'SUnreclaim: 614356 kB' 'KernelStack: 21936 kB' 'PageTables: 8592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8574476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216268 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3126644 kB' 'DirectMap2M: 18579456 kB' 'DirectMap1G: 47185920 kB' 00:04:51.674
03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.674
03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.674
03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.674
03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.674
[... identical compare/continue trace elided for the remaining /proc/meminfo fields up to HugePages_Rsvd ...]
03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.676
03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.676
03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.676
03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.676 03:59:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43713480 kB' 'MemAvailable: 47203924 kB' 'Buffers: 12524 kB' 'Cached: 10514480 kB' 'SwapCached: 0 kB' 'Active: 7540000 kB' 'Inactive: 3457800 kB' 'Active(anon): 7146640 kB' 'Inactive(anon): 0 kB' 'Active(file): 
393360 kB' 'Inactive(file): 3457800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474048 kB' 'Mapped: 219220 kB' 'Shmem: 6675844 kB' 'KReclaimable: 272844 kB' 'Slab: 887140 kB' 'SReclaimable: 272844 kB' 'SUnreclaim: 614296 kB' 'KernelStack: 22064 kB' 'PageTables: 9172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8574504 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216332 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3126644 kB' 'DirectMap2M: 18579456 kB' 'DirectMap1G: 47185920 kB' 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.676 03:59:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.676 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.677 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.678 03:59:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:51.678 nr_hugepages=1024 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:51.678 resv_hugepages=0 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:51.678 surplus_hugepages=0 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:51.678 anon_hugepages=0 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 
-- # mem_f=/proc/meminfo 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43712772 kB' 'MemAvailable: 47203216 kB' 'Buffers: 12524 kB' 'Cached: 10514488 kB' 'SwapCached: 0 kB' 'Active: 7540036 kB' 'Inactive: 3457800 kB' 'Active(anon): 7146676 kB' 'Inactive(anon): 0 kB' 'Active(file): 393360 kB' 'Inactive(file): 3457800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474080 kB' 'Mapped: 219220 kB' 'Shmem: 6675852 kB' 'KReclaimable: 272844 kB' 'Slab: 887140 kB' 'SReclaimable: 272844 kB' 'SUnreclaim: 614296 kB' 'KernelStack: 21920 kB' 'PageTables: 8788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8574524 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216300 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3126644 kB' 'DirectMap2M: 18579456 
kB' 'DirectMap1G: 47185920 kB' 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.678 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.679 03:59:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.679 03:59:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.679 03:59:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.679 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.680 03:59:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 26032856 kB' 'MemUsed: 6606284 kB' 'SwapCached: 0 kB' 'Active: 2414244 kB' 'Inactive: 134628 kB' 'Active(anon): 2282348 kB' 'Inactive(anon): 0 kB' 'Active(file): 131896 kB' 'Inactive(file): 134628 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2295420 kB' 'Mapped: 120032 kB' 'AnonPages: 256696 kB' 'Shmem: 2028896 kB' 'KernelStack: 12840 kB' 'PageTables: 6124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 120992 kB' 
'Slab: 415008 kB' 'SReclaimable: 120992 kB' 'SUnreclaim: 294016 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.680 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.681 03:59:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.681 
03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.681 03:59:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.681 
03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.681 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.682 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.682 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.682 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.682 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.682 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.682 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.682 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.682 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.682 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.682 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.682 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.682 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.682 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.682 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.682 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.682 03:59:59 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:51.682 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.682 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.682 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.682 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.682 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.682 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.682 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.682 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.682 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.682 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.682 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.682 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:51.682 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.682 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.682 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.682 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:51.682 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:51.682 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:51.682 03:59:59 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:51.682 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:51.682 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:51.682 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:51.682 node0=1024 expecting 1024 00:04:51.682 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:51.682 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:51.682 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:51.682 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:51.682 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:51.682 03:59:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:04:55.875 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:55.875 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:55.875 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:55.875 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:55.875 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:55.875 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:55.875 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:55.875 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:55.875 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:55.875 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:55.875 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:55.875 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 
00:04:55.875 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:55.875 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:55.875 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:55.875 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:55.875 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:55.875 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43679256 kB' 'MemAvailable: 47169700 kB' 'Buffers: 12524 kB' 'Cached: 10514636 kB' 'SwapCached: 0 kB' 'Active: 7545480 kB' 'Inactive: 3457800 kB' 'Active(anon): 7152120 kB' 'Inactive(anon): 0 kB' 'Active(file): 393360 kB' 'Inactive(file): 3457800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 479020 kB' 'Mapped: 219324 kB' 'Shmem: 6676000 kB' 'KReclaimable: 272844 kB' 'Slab: 887352 kB' 'SReclaimable: 272844 kB' 'SUnreclaim: 614508 kB' 'KernelStack: 22016 kB' 'PageTables: 8824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8576092 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216268 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3126644 kB' 'DirectMap2M: 18579456 kB' 'DirectMap1G: 47185920 kB' 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.875 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.876 04:00:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.876 04:00:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.876 
04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.876 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.877 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.877 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.877 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:55.877 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:55.877 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:55.877 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:55.877 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:55.877 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:55.877 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:55.877 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:55.877 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.877 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.877 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.877 04:00:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.877 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.877 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.877 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.877 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43679508 kB' 'MemAvailable: 47169952 kB' 'Buffers: 12524 kB' 'Cached: 10514640 kB' 'SwapCached: 0 kB' 'Active: 7545464 kB' 'Inactive: 3457800 kB' 'Active(anon): 7152104 kB' 'Inactive(anon): 0 kB' 'Active(file): 393360 kB' 'Inactive(file): 3457800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 479032 kB' 'Mapped: 219308 kB' 'Shmem: 6676004 kB' 'KReclaimable: 272844 kB' 'Slab: 887348 kB' 'SReclaimable: 272844 kB' 'SUnreclaim: 614504 kB' 'KernelStack: 22016 kB' 'PageTables: 8964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8577492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216284 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3126644 kB' 'DirectMap2M: 18579456 kB' 'DirectMap1G: 47185920 kB' 00:04:55.877 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.877 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.877 
04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.877 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.877 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.877 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[xtrace condensed: the same `IFS=': '` / `read -r var val _` / `[[ <field> == HugePages_Surp ]]` / `continue` cycle repeats for every remaining /proc/meminfo field (MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, ..., CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd) until HugePages_Surp matches]
00:04:55.878 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.878 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:55.879 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:55.879 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:55.879 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:55.879 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:55.879 04:00:04 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@18 -- # local node= 00:04:55.879 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:55.879 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:55.879 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.879 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.879 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.879 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.879 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.879 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.879 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.879 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43680012 kB' 'MemAvailable: 47170456 kB' 'Buffers: 12524 kB' 'Cached: 10514660 kB' 'SwapCached: 0 kB' 'Active: 7544356 kB' 'Inactive: 3457800 kB' 'Active(anon): 7150996 kB' 'Inactive(anon): 0 kB' 'Active(file): 393360 kB' 'Inactive(file): 3457800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478300 kB' 'Mapped: 219308 kB' 'Shmem: 6676024 kB' 'KReclaimable: 272844 kB' 'Slab: 887336 kB' 'SReclaimable: 272844 kB' 'SUnreclaim: 614492 kB' 'KernelStack: 21952 kB' 'PageTables: 8876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8577672 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216284 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3126644 kB' 'DirectMap2M: 18579456 kB' 'DirectMap1G: 47185920 kB' 00:04:55.879 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.879 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[xtrace condensed: the same `read -r var val _` / `[[ <field> == HugePages_Rsvd ]]` / `continue` cycle repeats for MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, ..., CommitLimit, Committed_AS, VmallocTotal, VmallocUsed; trace continues]
00:04:55.880 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.880 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r
var val _ 00:04:55.880 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.880 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.880 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.880 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.880 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.880 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.880 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.880 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.880 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.880 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.880 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.880 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.880 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.880 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.880 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.880 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.880 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.880 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.880 04:00:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:55.880 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.880 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.880 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.880 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.880 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.880 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.880 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.880 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.880 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.880 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.880 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.880 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.880 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.880 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.880 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.880 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.880 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.880 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.880 04:00:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.880 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.880 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.880 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.880 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.880 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.880 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.881 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.881 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.881 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.881 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.881 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.881 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.881 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.881 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.881 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.881 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:55.881 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:55.881 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:55.881 04:00:04 
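The @31–@33 lines above trace setup/common.sh's get_meminfo loop, which splits each meminfo line on `': '` and returns the value of the requested field. A minimal standalone sketch of that parsing pattern (the function name and the optional file argument are illustrative, not the script's exact code):

```shell
#!/usr/bin/env bash
# Sketch of the loop traced above: split each "Key: value kB" line on
# ':' plus spaces and print the value for one key, skipping the rest.
get_meminfo_sketch() {
    local get=$1 mem_f=${2:-/proc/meminfo}   # field name, optional file
    local var val _
    while IFS=': ' read -r var val _; do
        # Non-matching keys fall through, mirroring the "continue" trace lines
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$mem_f"
    return 1
}

# usage against a sample file (works on any system):
printf 'MemTotal: 100 kB\nHugePages_Rsvd: 0\n' > /tmp/meminfo.sample
get_meminfo_sketch HugePages_Rsvd /tmp/meminfo.sample   # prints: 0
```

The real helper additionally strips a `Node <N> ` prefix when the file is a per-node meminfo under /sys/devices/system/node/.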
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:55.881 nr_hugepages=1024 00:04:55.881 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:55.881 resv_hugepages=0 00:04:55.881 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:55.881 surplus_hugepages=0 00:04:55.881 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:55.881 anon_hugepages=0 00:04:55.881 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:55.881 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:55.881 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:55.881 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:55.881 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:55.881 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:55.881 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:55.881 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.881 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.881 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.881 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.881 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.881 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.881 04:00:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.881 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43680016 kB' 'MemAvailable: 47170460 kB' 'Buffers: 12524 kB' 'Cached: 10514680 kB' 'SwapCached: 0 kB' 'Active: 7544908 kB' 'Inactive: 3457800 kB' 'Active(anon): 7151548 kB' 'Inactive(anon): 0 kB' 'Active(file): 393360 kB' 'Inactive(file): 3457800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478464 kB' 'Mapped: 219308 kB' 'Shmem: 6676044 kB' 'KReclaimable: 272844 kB' 'Slab: 887336 kB' 'SReclaimable: 272844 kB' 'SUnreclaim: 614492 kB' 'KernelStack: 22016 kB' 'PageTables: 8688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8577536 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216300 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3126644 kB' 'DirectMap2M: 18579456 kB' 'DirectMap1G: 47185920 kB' 00:04:55.881 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.881 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.881 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.881 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.881 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:55.881 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.881 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # [repeated scan trace: the same @31/@32 "IFS=': ' / read / [[ <field> == HugePages_Total ]] / continue" lines recur for every non-matching field from MemAvailable through Unaccepted] 00:04:55.883 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.883 04:00:04
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:55.883 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:55.883 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:55.883 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:55.883 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:55.883 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:55.883 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:55.883 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:55.883 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:55.883 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:55.883 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:55.883 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:55.883 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:55.883 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:55.883 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:55.883 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:55.883 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:55.883 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:55.883 04:00:04 
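hugepages.sh@110 above re-checks that the HugePages_Total value just read (1024) equals the requested page count plus surplus and reserved pages, and @29–@32 enumerate NUMA nodes from sysfs. A hedged sketch of both steps (the helper name is invented for illustration, and the plain glob approximates the script's extglob pattern `node+([0-9])`):

```shell
#!/usr/bin/env bash
# Accounting identity checked at hugepages.sh@110:
#   HugePages_Total == nr_hugepages + surplus + reserved
hugepage_accounting_ok() {
    local total=$1 nr=$2 surp=$3 resv=$4
    (( total == nr + surp + resv ))
}

# Node enumeration as at hugepages.sh@29: node directories are named nodeN
shopt -s nullglob
for node in /sys/devices/system/node/node[0-9]*; do
    echo "node index: ${node##*node}"    # e.g. 0, 1
done

hugepage_accounting_ok 1024 1024 0 0 && echo "accounting consistent"
```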
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.883 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:55.883 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:55.883 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.883 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.883 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.883 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.883 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 26008020 kB' 'MemUsed: 6631120 kB' 'SwapCached: 0 kB' 'Active: 2416140 kB' 'Inactive: 134628 kB' 'Active(anon): 2284244 kB' 'Inactive(anon): 0 kB' 'Active(file): 131896 kB' 'Inactive(file): 134628 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2295620 kB' 'Mapped: 120044 kB' 'AnonPages: 258276 kB' 'Shmem: 2029096 kB' 'KernelStack: 12776 kB' 'PageTables: 5588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 120992 kB' 'Slab: 415172 kB' 'SReclaimable: 120992 kB' 'SUnreclaim: 294180 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:55.883 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.883 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.883 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.883 
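The printf above is the raw node0 meminfo snapshot; the surrounding `IFS=': '` / `read -r var val _` / `continue` xtrace is `setup/common.sh` scanning that snapshot field by field until the requested key matches. A minimal standalone sketch of that pattern (the function name mirrors `get_meminfo` in `setup/common.sh`; the here-doc sample reuses values from this log rather than a live system):

```shell
#!/usr/bin/env bash
shopt -s extglob                         # the +([0-9]) pattern below needs extglob

# Snapshot a meminfo-style listing, strip any "Node N " prefix (per-node
# sysfs meminfo files prefix every line), then split each line on ': '
# and emit the requested field's value -- the loop traced in the log.
get_meminfo() {
    local get=$1 line var val _
    local -a mem
    mapfile -t mem                       # read the whole snapshot at once
    mem=("${mem[@]#Node +([0-9]) }")     # drop the "Node 0 " prefix, if any
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue # non-matching fields are skipped, as traced
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo HugePages_Total <<'EOF'
Node 0 MemTotal: 32639140 kB
Node 0 HugePages_Total: 1024
Node 0 HugePages_Free: 1024
Node 0 HugePages_Surp: 0
EOF
```

Run as-is this prints `1024`; the real helper feeds it `/proc/meminfo` or `/sys/devices/system/node/nodeN/meminfo`, which is why the trace repeats once per field of the snapshot.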
04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ [... identical xtrace iterations for the remaining non-matching meminfo fields (MemFree through Unaccepted) elided ...] 00:04:55.884 04:00:04 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@31 -- # read -r var val _ 00:04:55.884 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.884 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.884 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.884 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.884 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.884 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.884 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.884 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.884 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.884 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:55.884 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:55.884 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:55.884 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:55.884 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:55.884 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:55.884 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:55.884 node0=1024 expecting 1024 00:04:55.884 04:00:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:55.884 00:04:55.884 real 0m8.371s 
00:04:55.884 user 0m3.040s 00:04:55.884 sys 0m5.449s 00:04:55.884 04:00:04 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.884 04:00:04 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:55.884 ************************************ 00:04:55.884 END TEST no_shrink_alloc 00:04:55.884 ************************************ 00:04:55.884 04:00:04 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:55.884 04:00:04 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:55.884 04:00:04 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:55.884 04:00:04 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:55.884 04:00:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:55.884 04:00:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:55.884 04:00:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:55.884 04:00:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:55.884 04:00:04 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:55.884 04:00:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:55.884 04:00:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:55.884 04:00:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:55.884 04:00:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:55.884 04:00:04 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:55.884 04:00:04 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:55.884 00:04:55.884 real 0m31.097s 00:04:55.884 user 0m10.502s 00:04:55.884 sys 
0m18.868s 00:04:55.884 04:00:04 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.884 04:00:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:55.884 ************************************ 00:04:55.884 END TEST hugepages 00:04:55.884 ************************************ 00:04:55.884 04:00:04 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:55.884 04:00:04 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/driver.sh 00:04:55.884 04:00:04 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:55.884 04:00:04 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.884 04:00:04 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:55.884 ************************************ 00:04:55.884 START TEST driver 00:04:55.884 ************************************ 00:04:55.884 04:00:04 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/driver.sh 00:04:55.884 * Looking for test storage... 
00:04:55.884 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:04:55.884 04:00:04 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:55.884 04:00:04 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:55.885 04:00:04 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:05:02.452 04:00:09 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:02.452 04:00:09 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:02.452 04:00:09 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.452 04:00:09 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:02.452 ************************************ 00:05:02.452 START TEST guess_driver 00:05:02.452 ************************************ 00:05:02.452 04:00:10 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:05:02.452 04:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:02.452 04:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:02.452 04:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:02.452 04:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:02.452 04:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:02.452 04:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:02.452 04:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:02.452 04:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:02.452 04:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:02.452 04:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- 
# (( 256 > 0 )) 00:05:02.452 04:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:05:02.452 04:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:05:02.452 04:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:05:02.452 04:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:05:02.452 04:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:05:02.452 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:02.452 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:02.452 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:02.452 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:02.452 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:05:02.452 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:05:02.452 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:05:02.452 04:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:05:02.452 04:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:05:02.452 04:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:05:02.452 04:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:02.452 04:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:05:02.452 Looking for driver=vfio-pci 00:05:02.452 04:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.452 04:00:10 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output 
config 00:05:02.452 04:00:10 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:02.452 04:00:10 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:05:05.742 04:00:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:05.742 04:00:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:05.742 04:00:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver [... identical marker/driver check iterations elided ...] 00:05:07.655 04:00:16 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:07.655 04:00:16 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:07.655 04:00:16 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:07.655 04:00:16 setup.sh.driver.guess_driver -- setup/common.sh@12 -- #
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:05:12.928 00:05:12.928 real 0m11.589s 00:05:12.928 user 0m2.972s 00:05:12.928 sys 0m5.969s 00:05:12.928 04:00:21 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.928 04:00:21 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:12.928 ************************************ 00:05:12.928 END TEST guess_driver 00:05:12.928 ************************************ 00:05:12.928 04:00:21 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:05:12.928 00:05:12.928 real 0m17.196s 00:05:12.928 user 0m4.537s 00:05:12.928 sys 0m9.157s 00:05:12.928 04:00:21 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.928 04:00:21 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:12.928 ************************************ 00:05:12.928 END TEST driver 00:05:12.928 ************************************ 00:05:12.928 04:00:21 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:12.928 04:00:21 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/devices.sh 00:05:12.928 04:00:21 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:12.928 04:00:21 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.928 04:00:21 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:13.187 ************************************ 00:05:13.187 START TEST devices 00:05:13.187 ************************************ 00:05:13.187 04:00:21 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/devices.sh 00:05:13.187 * Looking for test storage... 
00:05:13.187 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:05:13.187 04:00:21 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:13.187 04:00:21 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:13.187 04:00:21 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:13.187 04:00:21 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:05:18.465 04:00:26 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:18.465 04:00:26 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:18.465 04:00:26 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:18.465 04:00:26 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:18.465 04:00:26 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:18.465 04:00:26 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:18.465 04:00:26 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:18.465 04:00:26 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:18.465 04:00:26 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:18.465 04:00:26 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:18.465 04:00:26 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:18.465 04:00:26 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:18.465 04:00:26 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:18.465 04:00:26 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:18.465 04:00:26 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:18.465 04:00:26 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 
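The device scan that starts here boils down to a few checks per NVMe namespace: skip zoned devices, skip anything already carrying a partition table, and require at least `min_disk_size` (3221225472 bytes, i.e. 3 GiB) of capacity. A minimal sketch of that qualification logic — the helper name and the three-argument shape are re-derived for illustration, not copied from `setup/devices.sh`:

```shell
# Sketch of the qualification logic traced above: a namespace is usable when
# it is not zoned, blkid reports no partition-table type, and it is at least
# min_disk_size bytes. Inputs are passed in directly so no real device is read.
min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472, as in the trace

qualifies() {
  local zoned=$1 pttype=$2 bytes=$3
  [[ $zoned == none ]] || return 1          # /sys/block/*/queue/zoned must be "none"
  [[ -z $pttype ]] || return 1              # blkid -s PTTYPE found a partition table
  (( bytes >= min_disk_size ))              # the traced 2000398934016 passes easily
}

qualifies none "" 2000398934016 && echo "nvme0n1 qualifies"
```

In the actual run the three inputs come from `/sys/block/nvme0n1/queue/zoned`, `blkid -s PTTYPE -o value`, and the size echoed by `sec_size_to_bytes`.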
00:05:18.465 04:00:26 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:18.465 04:00:26 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:05:18.465 04:00:26 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:05:18.465 04:00:26 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:18.465 04:00:26 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:18.465 04:00:26 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:18.465 No valid GPT data, bailing 00:05:18.465 04:00:26 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:18.465 04:00:26 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:18.465 04:00:26 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:18.465 04:00:26 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:18.465 04:00:26 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:18.465 04:00:26 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:18.465 04:00:26 setup.sh.devices -- setup/common.sh@80 -- # echo 2000398934016 00:05:18.465 04:00:26 setup.sh.devices -- setup/devices.sh@204 -- # (( 2000398934016 >= min_disk_size )) 00:05:18.465 04:00:26 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:18.465 04:00:26 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:05:18.465 04:00:26 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:18.465 04:00:26 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:18.465 04:00:26 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:18.465 04:00:26 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:18.465 04:00:26 setup.sh.devices -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.465 04:00:26 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:18.465 ************************************ 00:05:18.465 START TEST nvme_mount 00:05:18.465 ************************************ 00:05:18.465 04:00:26 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:05:18.465 04:00:26 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:18.465 04:00:26 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:18.465 04:00:26 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:18.465 04:00:26 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:18.465 04:00:26 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:18.465 04:00:26 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:18.465 04:00:26 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:18.465 04:00:26 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:18.465 04:00:26 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:18.465 04:00:26 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:18.465 04:00:26 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:18.465 04:00:26 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:18.465 04:00:26 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:18.465 04:00:26 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:18.465 04:00:26 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:18.465 04:00:26 
setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:18.465 04:00:26 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:18.465 04:00:26 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:18.465 04:00:26 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:18.725 Creating new GPT entries in memory. 00:05:18.725 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:18.725 other utilities. 00:05:18.725 04:00:27 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:18.725 04:00:27 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:18.725 04:00:27 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:18.725 04:00:27 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:18.725 04:00:27 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:20.106 Creating new GPT entries in memory. 00:05:20.106 The operation has completed successfully. 
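The `sgdisk /dev/nvme0n1 --new=1:2048:2099199` call above follows directly from the sector arithmetic in the traced `part_start`/`part_end` lines: the first partition starts at sector 2048, each later one starts right after the previous one ends, and `size` is 1 GiB expressed in 512-byte sectors. A sketch of that computation (the two-partition case also covers the dm_mount run later in the log):

```shell
# Sector arithmetic behind the sgdisk --new calls: 1 GiB per partition,
# 512-byte sectors, first partition aligned at sector 2048.
size=$((1073741824 / 512))        # 2097152 sectors
part_start=0 part_end=0
for part in 1 2; do
  (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
  (( part_end = part_start + size - 1 ))
  echo "sgdisk /dev/nvme0n1 --new=${part}:${part_start}:${part_end}"
done
# → sgdisk /dev/nvme0n1 --new=1:2048:2099199
# → sgdisk /dev/nvme0n1 --new=2:2099200:4196351
```

Both computed ranges match the `--new=1:2048:2099199` and `--new=2:2099200:4196351` invocations recorded in the trace.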
00:05:20.106 04:00:28 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:20.106 04:00:28 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:20.106 04:00:28 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2509320 00:05:20.106 04:00:28 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:20.106 04:00:28 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:20.106 04:00:28 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:20.106 04:00:28 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:20.106 04:00:28 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:20.106 04:00:28 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:20.106 04:00:28 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:20.106 04:00:28 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:20.106 04:00:28 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:20.106 04:00:28 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:20.106 04:00:28 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:20.106 
04:00:28 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:20.106 04:00:28 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:20.106 04:00:28 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:20.106 04:00:28 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:20.106 04:00:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.106 04:00:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:20.106 04:00:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:20.106 04:00:28 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:20.106 04:00:28 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:05:24.345 04:00:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:24.345 04:00:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.345 04:00:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:24.345 04:00:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.345 04:00:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:24.345 04:00:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.345 04:00:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:24.345 04:00:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.345 04:00:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 
00:05:24.345 04:00:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.345 04:00:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:24.345 04:00:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.345 04:00:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:24.345 04:00:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.345 04:00:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:24.345 04:00:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.345 04:00:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:24.345 04:00:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.345 04:00:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:24.345 04:00:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.345 04:00:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:24.345 04:00:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.345 04:00:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:24.345 04:00:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.345 04:00:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:24.345 04:00:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.345 04:00:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 
0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:24.345 04:00:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.345 04:00:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:24.345 04:00:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.345 04:00:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:24.345 04:00:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.345 04:00:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:24.345 04:00:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:24.345 04:00:32 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:24.345 04:00:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.345 04:00:32 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:24.345 04:00:32 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:24.345 04:00:32 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:24.345 04:00:32 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:24.345 04:00:32 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:24.345 04:00:32 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:24.345 04:00:32 
setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:24.345 04:00:32 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:24.345 04:00:32 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:24.345 04:00:32 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:24.345 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:24.345 04:00:32 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:24.345 04:00:32 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:24.345 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:24.345 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:05:24.345 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:24.345 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:24.345 04:00:32 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:24.345 04:00:32 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:24.345 04:00:32 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:24.345 04:00:32 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:24.345 04:00:32 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:24.345 04:00:32 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:24.345 04:00:33 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:24.345 04:00:33 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:24.345 04:00:33 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:24.345 04:00:33 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:24.346 04:00:33 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:24.346 04:00:33 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:24.346 04:00:33 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:24.346 04:00:33 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:24.346 04:00:33 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:24.346 04:00:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.346 04:00:33 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:24.346 04:00:33 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:24.346 04:00:33 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:24.346 04:00:33 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:05:28.538 04:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 
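Each `read -r pci _ _ status` / `[[ … ]]` pair above and below is one iteration of the same verify loop: it walks the `setup.sh config` output line by line, ignores every device except the one in `PCI_ALLOWED`, and sets `found=1` when that device's status reports the expected active mount. A sketch of its shape — the two sample input lines are fabricated to illustrate the assumed field layout, only the "Active devices:" suffix is taken from the trace:

```shell
# Sketch of the verify scan: four whitespace-separated fields per line, the
# first being the PCI address and the rest of the line (after two skipped
# fields) being the status text that may contain "Active devices: ...".
allowed=0000:d8:00.0
found=0
while read -r pci _ _ status; do
  [[ $pci == "$allowed" ]] || continue
  [[ $status == *"Active devices: "*"nvme0n1:nvme0n1"* ]] && found=1
done <<'EOF'
0000:00:04.7 => ioatdma idle
0000:d8:00.0 => nvme Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev
EOF
echo "found=$found"
```

The long runs of repeated checks in the log are simply this loop skipping the ioatdma channels (`0000:00:04.*`, `0000:80:04.*`) before reaching `0000:d8:00.0`.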
00:05:28.538 04:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.538 04:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:28.538 04:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.538 04:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:28.538 04:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.538 04:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:28.538 04:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.538 04:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:28.538 04:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.538 04:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:28.538 04:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.538 04:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:28.538 04:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.538 04:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:28.538 04:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.538 04:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:28.538 04:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.538 04:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 
0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:28.538 04:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.538 04:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:28.538 04:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.538 04:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:28.538 04:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.538 04:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:28.538 04:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.538 04:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:28.538 04:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.538 04:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:28.538 04:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.538 04:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:28.538 04:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.538 04:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:28.538 04:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:28.538 04:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:28.538 04:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 
-- # read -r pci _ _ status 00:05:28.538 04:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:28.539 04:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:28.539 04:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:28.539 04:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:28.539 04:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:28.539 04:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:28.539 04:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:05:28.539 04:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:28.539 04:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:28.539 04:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:28.539 04:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:28.539 04:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:28.539 04:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:28.539 04:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:28.539 04:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.539 04:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:28.539 04:00:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # 
setup output config 00:05:28.539 04:00:36 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:28.539 04:00:36 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:05:32.735 04:00:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:32.735 04:00:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.735 04:00:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:32.735 04:00:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.735 04:00:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:32.735 04:00:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.735 04:00:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:32.735 04:00:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.735 04:00:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:32.735 04:00:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.735 04:00:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:32.735 04:00:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.735 04:00:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:32.735 04:00:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.735 04:00:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:32.735 04:00:40 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.735 04:00:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:32.735 04:00:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.735 04:00:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:32.735 04:00:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.735 04:00:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:32.735 04:00:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.735 04:00:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:32.735 04:00:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.735 04:00:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:32.735 04:00:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.735 04:00:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:32.735 04:00:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.735 04:00:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:32.735 04:00:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.735 04:00:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:32.735 04:00:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.735 04:00:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:32.735 04:00:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:32.735 04:00:40 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:32.735 04:00:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.735 04:00:41 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:32.735 04:00:41 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:32.735 04:00:41 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:32.735 04:00:41 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:32.735 04:00:41 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:32.735 04:00:41 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:32.735 04:00:41 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:32.735 04:00:41 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:32.735 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:32.735 00:05:32.735 real 0m14.636s 00:05:32.735 user 0m4.340s 00:05:32.735 sys 0m8.141s 00:05:32.735 04:00:41 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.735 04:00:41 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:32.735 ************************************ 00:05:32.735 END TEST nvme_mount 00:05:32.735 ************************************ 00:05:32.735 04:00:41 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:32.735 04:00:41 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:32.735 04:00:41 setup.sh.devices -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:32.735 04:00:41 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.735 04:00:41 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:32.735 ************************************ 00:05:32.735 START TEST dm_mount 00:05:32.735 ************************************ 00:05:32.735 04:00:41 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:05:32.735 04:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:32.735 04:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:32.736 04:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:32.736 04:00:41 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:32.736 04:00:41 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:32.736 04:00:41 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:32.736 04:00:41 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:32.736 04:00:41 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:32.736 04:00:41 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:32.736 04:00:41 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:32.736 04:00:41 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:32.736 04:00:41 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:32.736 04:00:41 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:32.736 04:00:41 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:32.736 04:00:41 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:32.736 04:00:41 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:32.736 04:00:41 
setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:32.736 04:00:41 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:32.736 04:00:41 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:32.736 04:00:41 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:32.736 04:00:41 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:33.673 Creating new GPT entries in memory. 00:05:33.673 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:33.673 other utilities. 00:05:33.673 04:00:42 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:33.673 04:00:42 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:33.673 04:00:42 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:33.673 04:00:42 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:33.673 04:00:42 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:34.611 Creating new GPT entries in memory. 00:05:34.611 The operation has completed successfully. 00:05:34.611 04:00:43 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:34.611 04:00:43 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:34.611 04:00:43 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:34.611 04:00:43 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:34.611 04:00:43 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:35.548 The operation has completed successfully. 00:05:35.548 04:00:44 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:35.548 04:00:44 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:35.548 04:00:44 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2514723 00:05:35.548 04:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:35.548 04:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:35.548 04:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:35.548 04:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:35.548 04:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:35.548 04:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:35.548 04:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:35.548 04:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:35.548 04:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:35.548 04:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:05:35.548 04:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:05:35.548 04:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:05:35.548 04:00:44 setup.sh.devices.dm_mount -- 
setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:05:35.548 04:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:35.548 04:00:44 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount size= 00:05:35.548 04:00:44 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:35.548 04:00:44 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:35.548 04:00:44 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:35.807 04:00:44 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:35.807 04:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:35.807 04:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:35.807 04:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:35.807 04:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:35.807 04:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:35.807 04:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:35.807 04:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:35.807 04:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:35.807 04:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:35.807 04:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.807 04:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:35.807 04:00:44 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:35.807 04:00:44 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:35.807 04:00:44 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- 
setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:40.002 04:00:48 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:05:43.290 04:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:43.290 04:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.290 04:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:43.290 04:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.290 04:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:43.290 04:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.290 04:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:43.290 04:00:52 
setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.290 04:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:43.290 04:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.290 04:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:43.290 04:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.290 04:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:43.290 04:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.290 04:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:43.290 04:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.290 04:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:43.290 04:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.290 04:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:43.290 04:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.290 04:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:43.290 04:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.290 04:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:43.290 04:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.290 04:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:43.290 04:00:52 
setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.290 04:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:43.290 04:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.290 04:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:43.290 04:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.290 04:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:43.290 04:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.550 04:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:43.550 04:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:05:43.550 04:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:43.550 04:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.550 04:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:43.550 04:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:43.550 04:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:43.550 04:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:43.550 04:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:43.550 04:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:43.550 04:00:52 
setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:43.809 04:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:43.809 04:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:43.809 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:43.809 04:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:43.809 04:00:52 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:43.809 00:05:43.809 real 0m11.227s 00:05:43.809 user 0m2.815s 00:05:43.809 sys 0m5.465s 00:05:43.809 04:00:52 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.809 04:00:52 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:43.809 ************************************ 00:05:43.809 END TEST dm_mount 00:05:43.809 ************************************ 00:05:43.809 04:00:52 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:43.809 04:00:52 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:43.809 04:00:52 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:43.809 04:00:52 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:43.809 04:00:52 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:43.809 04:00:52 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:43.809 04:00:52 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:43.809 04:00:52 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:44.069 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:44.069 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:05:44.069 /dev/nvme0n1: 2 bytes 
were erased at offset 0x000001fe (PMBR): 55 aa 00:05:44.069 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:44.069 04:00:52 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:44.069 04:00:52 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:44.069 04:00:52 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:44.069 04:00:52 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:44.069 04:00:52 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:44.069 04:00:52 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:44.069 04:00:52 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:44.069 00:05:44.069 real 0m31.001s 00:05:44.069 user 0m8.938s 00:05:44.069 sys 0m16.882s 00:05:44.069 04:00:52 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.069 04:00:52 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:44.069 ************************************ 00:05:44.069 END TEST devices 00:05:44.069 ************************************ 00:05:44.069 04:00:52 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:44.069 00:05:44.069 real 1m48.734s 00:05:44.069 user 0m33.094s 00:05:44.069 sys 1m2.925s 00:05:44.069 04:00:52 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.069 04:00:52 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:44.069 ************************************ 00:05:44.069 END TEST setup.sh 00:05:44.069 ************************************ 00:05:44.069 04:00:52 -- common/autotest_common.sh@1142 -- # return 0 00:05:44.069 04:00:52 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status 00:05:48.261 Hugepages 00:05:48.261 node hugesize free / total 00:05:48.261 node0 1048576kB 0 / 0 00:05:48.261 node0 2048kB 
1024 / 1024 00:05:48.261 node1 1048576kB 0 / 0 00:05:48.261 node1 2048kB 1024 / 1024 00:05:48.261 00:05:48.261 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:48.261 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:05:48.261 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:05:48.261 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:05:48.261 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:05:48.261 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:05:48.261 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:05:48.261 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:05:48.261 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:05:48.261 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:05:48.261 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:05:48.261 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:05:48.261 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:05:48.261 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:05:48.261 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:05:48.261 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:05:48.261 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:05:48.261 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:48.261 04:00:56 -- spdk/autotest.sh@130 -- # uname -s 00:05:48.261 04:00:56 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:48.261 04:00:56 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:48.261 04:00:56 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:05:51.577 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:51.577 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:51.577 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:51.577 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:51.577 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:51.577 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:51.577 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:51.836 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:51.836 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:51.836 
0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:51.836 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:51.836 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:51.836 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:51.836 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:51.836 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:51.836 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:53.743 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:54.001 04:01:02 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:54.940 04:01:03 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:54.940 04:01:03 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:54.940 04:01:03 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:54.940 04:01:03 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:54.940 04:01:03 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:54.940 04:01:03 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:54.940 04:01:03 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:54.940 04:01:03 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:54.940 04:01:03 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:54.940 04:01:03 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:54.940 04:01:03 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:05:54.940 04:01:03 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:05:59.137 Waiting for block devices as requested 00:05:59.137 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:59.137 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:59.396 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:59.396 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:59.396 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:59.656 0000:00:04.2 (8086 
2021): vfio-pci -> ioatdma 00:05:59.656 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:59.656 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:59.916 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:59.916 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:59.916 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:06:00.175 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:06:00.175 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:06:00.175 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:06:00.434 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:06:00.434 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:06:00.434 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:06:00.693 04:01:09 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:06:00.693 04:01:09 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:06:00.694 04:01:09 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:06:00.694 04:01:09 -- common/autotest_common.sh@1502 -- # grep 0000:d8:00.0/nvme/nvme 00:06:00.694 04:01:09 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:06:00.694 04:01:09 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:06:00.694 04:01:09 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:06:00.694 04:01:09 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:06:00.694 04:01:09 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:06:00.694 04:01:09 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:06:00.694 04:01:09 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:06:00.694 04:01:09 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:00.694 04:01:09 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:00.694 04:01:09 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:06:00.694 
04:01:09 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:06:00.694 04:01:09 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:06:00.694 04:01:09 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:06:00.694 04:01:09 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:06:00.694 04:01:09 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:06:00.694 04:01:09 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:06:00.694 04:01:09 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:06:00.694 04:01:09 -- common/autotest_common.sh@1557 -- # continue 00:06:00.694 04:01:09 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:06:00.694 04:01:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:00.694 04:01:09 -- common/autotest_common.sh@10 -- # set +x 00:06:00.694 04:01:09 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:06:00.694 04:01:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:00.694 04:01:09 -- common/autotest_common.sh@10 -- # set +x 00:06:00.694 04:01:09 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:06:04.905 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:04.905 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:04.905 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:04.905 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:04.905 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:04.905 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:04.905 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:04.905 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:04.905 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:04.905 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:04.905 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:04.905 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:04.905 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:04.905 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:04.905 0000:80:04.1 (8086 
2021): ioatdma -> vfio-pci 00:06:04.905 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:06.812 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:06:07.070 04:01:15 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:06:07.070 04:01:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:07.070 04:01:15 -- common/autotest_common.sh@10 -- # set +x 00:06:07.070 04:01:15 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:06:07.070 04:01:15 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:06:07.070 04:01:15 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:06:07.070 04:01:15 -- common/autotest_common.sh@1577 -- # bdfs=() 00:06:07.070 04:01:15 -- common/autotest_common.sh@1577 -- # local bdfs 00:06:07.071 04:01:15 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:06:07.071 04:01:15 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:07.071 04:01:15 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:07.071 04:01:15 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:07.071 04:01:15 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:07.071 04:01:15 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:07.071 04:01:15 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:06:07.071 04:01:15 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:06:07.071 04:01:15 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:07.071 04:01:15 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:06:07.071 04:01:15 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:06:07.071 04:01:15 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:06:07.071 04:01:15 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:06:07.071 04:01:15 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:d8:00.0 
00:06:07.071 04:01:15 -- common/autotest_common.sh@1592 -- # [[ -z 0000:d8:00.0 ]] 00:06:07.071 04:01:15 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=2526032 00:06:07.071 04:01:15 -- common/autotest_common.sh@1598 -- # waitforlisten 2526032 00:06:07.071 04:01:15 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:06:07.071 04:01:15 -- common/autotest_common.sh@829 -- # '[' -z 2526032 ']' 00:06:07.071 04:01:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.071 04:01:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:07.071 04:01:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.071 04:01:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:07.071 04:01:15 -- common/autotest_common.sh@10 -- # set +x 00:06:07.329 [2024-07-23 04:01:15.946519] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:06:07.329 [2024-07-23 04:01:15.946640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2526032 ]
00:06:07.588 [2024-07-23 04:01:16.149581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:07.847 [2024-07-23 04:01:16.430061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:09.224 04:01:17 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:09.224 04:01:17 -- common/autotest_common.sh@862 -- # return 0
00:06:09.224 04:01:17 -- common/autotest_common.sh@1600 -- # bdf_id=0
00:06:09.224 04:01:17 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}"
00:06:09.224 04:01:17 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0
00:06:12.514 nvme0n1
00:06:12.514 04:01:20 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:06:12.514 [2024-07-23 04:01:20.987217] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal
00:06:12.514 request:
00:06:12.514 {
00:06:12.514   "nvme_ctrlr_name": "nvme0",
00:06:12.514   "password": "test",
00:06:12.514   "method": "bdev_nvme_opal_revert",
00:06:12.514   "req_id": 1
00:06:12.514 }
00:06:12.514 Got JSON-RPC error response
00:06:12.514 response:
00:06:12.514 {
00:06:12.514   "code": -32602,
00:06:12.514   "message": "Invalid parameters"
00:06:12.514 }
00:06:12.514 04:01:21 -- common/autotest_common.sh@1604 -- # true
00:06:12.514 04:01:21 -- common/autotest_common.sh@1605 -- # (( ++bdf_id ))
00:06:12.514 04:01:21 -- common/autotest_common.sh@1608 -- # killprocess 2526032
00:06:12.514 04:01:21 -- common/autotest_common.sh@948 -- # '[' -z 2526032 ']'
00:06:12.514 04:01:21 -- common/autotest_common.sh@952 -- # kill -0 2526032
00:06:12.514 04:01:21 -- common/autotest_common.sh@953 -- # uname
00:06:12.514 04:01:21 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:12.514 04:01:21 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2526032
00:06:12.514 04:01:21 -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:12.514 04:01:21 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:06:12.514 04:01:21 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2526032'
00:06:12.514 killing process with pid 2526032
00:06:12.514 04:01:21 -- common/autotest_common.sh@967 -- # kill 2526032
00:06:12.514 04:01:21 -- common/autotest_common.sh@972 -- # wait 2526032
00:06:17.830 04:01:26 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']'
00:06:17.830 04:01:26 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']'
00:06:17.830 04:01:26 -- spdk/autotest.sh@155 -- # [[ 1 -eq 1 ]]
00:06:17.830 04:01:26 -- spdk/autotest.sh@156 -- # [[ 0 -eq 1 ]]
00:06:17.830 04:01:26 -- spdk/autotest.sh@159 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/qat_setup.sh
00:06:18.399 Restarting all devices.
00:06:24.971 lstat() error: No such file or directory
00:06:24.971 QAT Error: No GENERAL section found
00:06:24.971 Failed to configure qat_dev0
00:06:24.971 lstat() error: No such file or directory
00:06:24.971 QAT Error: No GENERAL section found
00:06:24.971 Failed to configure qat_dev1
00:06:24.971 lstat() error: No such file or directory
00:06:24.971 QAT Error: No GENERAL section found
00:06:24.971 Failed to configure qat_dev2
00:06:24.971 lstat() error: No such file or directory
00:06:24.971 QAT Error: No GENERAL section found
00:06:24.971 Failed to configure qat_dev3
00:06:24.971 lstat() error: No such file or directory
00:06:24.971 QAT Error: No GENERAL section found
00:06:24.971 Failed to configure qat_dev4
00:06:24.971 enable sriov
00:06:24.971 Checking status of all devices.
00:06:24.971 There is 5 QAT acceleration device(s) in the system:
00:06:24.971 qat_dev0 - type: c6xx, inst_id: 0, node_id: 0, bsf: 0000:1a:00.0, #accel: 5 #engines: 10 state: down
00:06:24.971 qat_dev1 - type: c6xx, inst_id: 1, node_id: 0, bsf: 0000:1c:00.0, #accel: 5 #engines: 10 state: down
00:06:24.971 qat_dev2 - type: c6xx, inst_id: 2, node_id: 0, bsf: 0000:1e:00.0, #accel: 5 #engines: 10 state: down
00:06:24.971 qat_dev3 - type: c6xx, inst_id: 3, node_id: 0, bsf: 0000:3d:00.0, #accel: 5 #engines: 10 state: down
00:06:24.971 qat_dev4 - type: c6xx, inst_id: 4, node_id: 0, bsf: 0000:3f:00.0, #accel: 5 #engines: 10 state: down
00:06:24.971 0000:1a:00.0 set to 16 VFs
00:06:25.908 0000:1c:00.0 set to 16 VFs
00:06:26.477 0000:1e:00.0 set to 16 VFs
00:06:27.415 0000:3d:00.0 set to 16 VFs
00:06:27.983 0000:3f:00.0 set to 16 VFs
00:06:30.519 Properly configured the qat device with driver uio_pci_generic.
00:06:30.519 04:01:39 -- spdk/autotest.sh@162 -- # timing_enter lib
00:06:30.519 04:01:39 -- common/autotest_common.sh@722 -- # xtrace_disable
00:06:30.519 04:01:39 -- common/autotest_common.sh@10 -- # set +x
00:06:30.519 04:01:39 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]]
00:06:30.519 04:01:39 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env.sh
00:06:30.519 04:01:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:30.519 04:01:39 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:30.519 04:01:39 -- common/autotest_common.sh@10 -- # set +x
00:06:30.519 ************************************
00:06:30.519 START TEST env
00:06:30.519 ************************************
00:06:30.519 04:01:39 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env.sh
00:06:30.519 * Looking for test storage...
00:06:30.519 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env
00:06:30.520 04:01:39 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/memory/memory_ut
00:06:30.520 04:01:39 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:30.520 04:01:39 env -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:30.520 04:01:39 env -- common/autotest_common.sh@10 -- # set +x
00:06:30.520 ************************************
00:06:30.520 START TEST env_memory
00:06:30.520 ************************************
00:06:30.520 04:01:39 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/memory/memory_ut
00:06:30.520
00:06:30.520
00:06:30.520 CUnit - A unit testing framework for C - Version 2.1-3
00:06:30.520 http://cunit.sourceforge.net/
00:06:30.520
00:06:30.520
00:06:30.520 Suite: memory
00:06:30.779 Test: alloc and free memory map ...[2024-07-23 04:01:39.370633] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:06:30.779 passed
00:06:30.779 Test: mem map translation ...[2024-07-23 04:01:39.504700] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:06:30.779 [2024-07-23 04:01:39.504801] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:06:30.779 [2024-07-23 04:01:39.505013] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:06:30.779 [2024-07-23 04:01:39.505081] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:06:31.039 passed
00:06:31.039 Test: mem map registration ...[2024-07-23 04:01:39.714273] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234
00:06:31.039 [2024-07-23 04:01:39.714354] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152
00:06:31.039 passed
00:06:31.298 Test: mem map adjacent registrations ...passed
00:06:31.298
00:06:31.298 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:31.298               suites      1      1    n/a      0        0
00:06:31.298                tests      4      4      4      0        0
00:06:31.298              asserts    152    152    152      0      n/a
00:06:31.298
00:06:31.298 Elapsed time = 0.736 seconds
00:06:31.298
00:06:31.298 real 0m0.786s
00:06:31.298 user 0m0.736s
00:06:31.298 sys 0m0.046s
00:06:31.298 04:01:40 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:31.298 04:01:40 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:06:31.298 ************************************
00:06:31.298 END TEST env_memory
00:06:31.298 ************************************
00:06:31.298 04:01:40 env -- common/autotest_common.sh@1142 -- # return 0
00:06:31.298 04:01:40 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/vtophys/vtophys
00:06:31.298 04:01:40 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:31.298 04:01:40 env -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:31.298 04:01:40 env -- common/autotest_common.sh@10 -- # set +x
00:06:31.560 ************************************
00:06:31.560 START TEST env_vtophys
00:06:31.560 ************************************
00:06:31.560 04:01:40 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/vtophys/vtophys
EAL: lib.eal log level changed from notice to debug 00:06:31.560 EAL: Detected lcore 0 as core 0 on socket 0 00:06:31.560 EAL: Detected lcore 1 as core 1 on socket 0 00:06:31.560 EAL: Detected lcore 2 as core 2 on socket 0 00:06:31.560 EAL: Detected lcore 3 as core 3 on socket 0 00:06:31.560 EAL: Detected lcore 4 as core 4 on socket 0 00:06:31.560 EAL: Detected lcore 5 as core 5 on socket 0 00:06:31.560 EAL: Detected lcore 6 as core 6 on socket 0 00:06:31.560 EAL: Detected lcore 7 as core 8 on socket 0 00:06:31.560 EAL: Detected lcore 8 as core 9 on socket 0 00:06:31.560 EAL: Detected lcore 9 as core 10 on socket 0 00:06:31.560 EAL: Detected lcore 10 as core 11 on socket 0 00:06:31.560 EAL: Detected lcore 11 as core 12 on socket 0 00:06:31.560 EAL: Detected lcore 12 as core 13 on socket 0 00:06:31.560 EAL: Detected lcore 13 as core 14 on socket 0 00:06:31.560 EAL: Detected lcore 14 as core 16 on socket 0 00:06:31.560 EAL: Detected lcore 15 as core 17 on socket 0 00:06:31.560 EAL: Detected lcore 16 as core 18 on socket 0 00:06:31.560 EAL: Detected lcore 17 as core 19 on socket 0 00:06:31.560 EAL: Detected lcore 18 as core 20 on socket 0 00:06:31.560 EAL: Detected lcore 19 as core 21 on socket 0 00:06:31.560 EAL: Detected lcore 20 as core 22 on socket 0 00:06:31.560 EAL: Detected lcore 21 as core 24 on socket 0 00:06:31.560 EAL: Detected lcore 22 as core 25 on socket 0 00:06:31.560 EAL: Detected lcore 23 as core 26 on socket 0 00:06:31.560 EAL: Detected lcore 24 as core 27 on socket 0 00:06:31.560 EAL: Detected lcore 25 as core 28 on socket 0 00:06:31.560 EAL: Detected lcore 26 as core 29 on socket 0 00:06:31.560 EAL: Detected lcore 27 as core 30 on socket 0 00:06:31.560 EAL: Detected lcore 28 as core 0 on socket 1 00:06:31.560 EAL: Detected lcore 29 as core 1 on socket 1 00:06:31.560 EAL: Detected lcore 30 as core 2 on socket 1 00:06:31.560 EAL: Detected lcore 31 as core 3 on socket 1 00:06:31.560 EAL: Detected lcore 32 as core 4 on socket 1 00:06:31.560 EAL: 
Detected lcore 33 as core 5 on socket 1 00:06:31.560 EAL: Detected lcore 34 as core 6 on socket 1 00:06:31.560 EAL: Detected lcore 35 as core 8 on socket 1 00:06:31.560 EAL: Detected lcore 36 as core 9 on socket 1 00:06:31.560 EAL: Detected lcore 37 as core 10 on socket 1 00:06:31.560 EAL: Detected lcore 38 as core 11 on socket 1 00:06:31.560 EAL: Detected lcore 39 as core 12 on socket 1 00:06:31.560 EAL: Detected lcore 40 as core 13 on socket 1 00:06:31.560 EAL: Detected lcore 41 as core 14 on socket 1 00:06:31.560 EAL: Detected lcore 42 as core 16 on socket 1 00:06:31.560 EAL: Detected lcore 43 as core 17 on socket 1 00:06:31.560 EAL: Detected lcore 44 as core 18 on socket 1 00:06:31.560 EAL: Detected lcore 45 as core 19 on socket 1 00:06:31.560 EAL: Detected lcore 46 as core 20 on socket 1 00:06:31.560 EAL: Detected lcore 47 as core 21 on socket 1 00:06:31.560 EAL: Detected lcore 48 as core 22 on socket 1 00:06:31.560 EAL: Detected lcore 49 as core 24 on socket 1 00:06:31.560 EAL: Detected lcore 50 as core 25 on socket 1 00:06:31.560 EAL: Detected lcore 51 as core 26 on socket 1 00:06:31.560 EAL: Detected lcore 52 as core 27 on socket 1 00:06:31.560 EAL: Detected lcore 53 as core 28 on socket 1 00:06:31.560 EAL: Detected lcore 54 as core 29 on socket 1 00:06:31.560 EAL: Detected lcore 55 as core 30 on socket 1 00:06:31.560 EAL: Detected lcore 56 as core 0 on socket 0 00:06:31.560 EAL: Detected lcore 57 as core 1 on socket 0 00:06:31.560 EAL: Detected lcore 58 as core 2 on socket 0 00:06:31.560 EAL: Detected lcore 59 as core 3 on socket 0 00:06:31.560 EAL: Detected lcore 60 as core 4 on socket 0 00:06:31.560 EAL: Detected lcore 61 as core 5 on socket 0 00:06:31.560 EAL: Detected lcore 62 as core 6 on socket 0 00:06:31.560 EAL: Detected lcore 63 as core 8 on socket 0 00:06:31.560 EAL: Detected lcore 64 as core 9 on socket 0 00:06:31.560 EAL: Detected lcore 65 as core 10 on socket 0 00:06:31.560 EAL: Detected lcore 66 as core 11 on socket 0 00:06:31.560 EAL: 
Detected lcore 67 as core 12 on socket 0 00:06:31.560 EAL: Detected lcore 68 as core 13 on socket 0 00:06:31.560 EAL: Detected lcore 69 as core 14 on socket 0 00:06:31.560 EAL: Detected lcore 70 as core 16 on socket 0 00:06:31.560 EAL: Detected lcore 71 as core 17 on socket 0 00:06:31.560 EAL: Detected lcore 72 as core 18 on socket 0 00:06:31.560 EAL: Detected lcore 73 as core 19 on socket 0 00:06:31.560 EAL: Detected lcore 74 as core 20 on socket 0 00:06:31.560 EAL: Detected lcore 75 as core 21 on socket 0 00:06:31.560 EAL: Detected lcore 76 as core 22 on socket 0 00:06:31.560 EAL: Detected lcore 77 as core 24 on socket 0 00:06:31.560 EAL: Detected lcore 78 as core 25 on socket 0 00:06:31.560 EAL: Detected lcore 79 as core 26 on socket 0 00:06:31.560 EAL: Detected lcore 80 as core 27 on socket 0 00:06:31.560 EAL: Detected lcore 81 as core 28 on socket 0 00:06:31.560 EAL: Detected lcore 82 as core 29 on socket 0 00:06:31.560 EAL: Detected lcore 83 as core 30 on socket 0 00:06:31.560 EAL: Detected lcore 84 as core 0 on socket 1 00:06:31.560 EAL: Detected lcore 85 as core 1 on socket 1 00:06:31.560 EAL: Detected lcore 86 as core 2 on socket 1 00:06:31.560 EAL: Detected lcore 87 as core 3 on socket 1 00:06:31.560 EAL: Detected lcore 88 as core 4 on socket 1 00:06:31.560 EAL: Detected lcore 89 as core 5 on socket 1 00:06:31.560 EAL: Detected lcore 90 as core 6 on socket 1 00:06:31.560 EAL: Detected lcore 91 as core 8 on socket 1 00:06:31.560 EAL: Detected lcore 92 as core 9 on socket 1 00:06:31.560 EAL: Detected lcore 93 as core 10 on socket 1 00:06:31.560 EAL: Detected lcore 94 as core 11 on socket 1 00:06:31.560 EAL: Detected lcore 95 as core 12 on socket 1 00:06:31.560 EAL: Detected lcore 96 as core 13 on socket 1 00:06:31.560 EAL: Detected lcore 97 as core 14 on socket 1 00:06:31.560 EAL: Detected lcore 98 as core 16 on socket 1 00:06:31.560 EAL: Detected lcore 99 as core 17 on socket 1 00:06:31.560 EAL: Detected lcore 100 as core 18 on socket 1 00:06:31.560 EAL: 
Detected lcore 101 as core 19 on socket 1 00:06:31.560 EAL: Detected lcore 102 as core 20 on socket 1 00:06:31.560 EAL: Detected lcore 103 as core 21 on socket 1 00:06:31.560 EAL: Detected lcore 104 as core 22 on socket 1 00:06:31.560 EAL: Detected lcore 105 as core 24 on socket 1 00:06:31.560 EAL: Detected lcore 106 as core 25 on socket 1 00:06:31.560 EAL: Detected lcore 107 as core 26 on socket 1 00:06:31.560 EAL: Detected lcore 108 as core 27 on socket 1 00:06:31.560 EAL: Detected lcore 109 as core 28 on socket 1 00:06:31.561 EAL: Detected lcore 110 as core 29 on socket 1 00:06:31.561 EAL: Detected lcore 111 as core 30 on socket 1 00:06:31.561 EAL: Maximum logical cores by configuration: 128 00:06:31.561 EAL: Detected CPU lcores: 112 00:06:31.561 EAL: Detected NUMA nodes: 2 00:06:31.561 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:31.561 EAL: Detected shared linkage of DPDK 00:06:31.561 EAL: No shared files mode enabled, IPC will be disabled 00:06:31.561 EAL: No shared files mode enabled, IPC is disabled 00:06:31.561 EAL: PCI driver qat for device 0000:1a:01.0 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:1a:01.1 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:1a:01.2 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:1a:01.3 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:1a:01.4 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:1a:01.5 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:1a:01.6 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:1a:01.7 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:1a:02.0 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:1a:02.1 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:1a:02.2 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:1a:02.3 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 
0000:1a:02.4 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:1a:02.5 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:1a:02.6 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:1a:02.7 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:1c:01.0 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:1c:01.1 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:1c:01.2 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:1c:01.3 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:1c:01.4 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:1c:01.5 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:1c:01.6 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:1c:01.7 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:1c:02.0 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:1c:02.1 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:1c:02.2 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:1c:02.3 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:1c:02.4 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:1c:02.5 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:1c:02.6 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:1c:02.7 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:1e:01.0 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:1e:01.1 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:1e:01.2 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:1e:01.3 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:1e:01.4 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:1e:01.5 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:1e:01.6 wants IOVA 
as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:1e:01.7 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:1e:02.0 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:1e:02.1 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:1e:02.2 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:1e:02.3 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:1e:02.4 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:1e:02.5 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:1e:02.6 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:1e:02.7 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:3d:01.0 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:3d:01.1 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:3d:01.2 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:3d:01.3 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:3d:01.4 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:3d:01.5 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:3d:01.6 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:3d:01.7 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:3d:02.0 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:3d:02.1 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:3d:02.2 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:3d:02.3 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:3d:02.4 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:3d:02.5 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:3d:02.6 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:3d:02.7 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:3f:01.0 wants IOVA as 'PA' 00:06:31.561 
EAL: PCI driver qat for device 0000:3f:01.1 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:3f:01.2 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:3f:01.3 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:3f:01.4 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:3f:01.5 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:3f:01.6 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:3f:01.7 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:3f:02.0 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:3f:02.1 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:3f:02.2 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:3f:02.3 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:3f:02.4 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:3f:02.5 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:3f:02.6 wants IOVA as 'PA' 00:06:31.561 EAL: PCI driver qat for device 0000:3f:02.7 wants IOVA as 'PA' 00:06:31.561 EAL: Bus pci wants IOVA as 'PA' 00:06:31.561 EAL: Bus auxiliary wants IOVA as 'DC' 00:06:31.561 EAL: Bus vdev wants IOVA as 'DC' 00:06:31.561 EAL: Selected IOVA mode 'PA' 00:06:31.561 EAL: Probing VFIO support... 00:06:31.561 EAL: IOMMU type 1 (Type 1) is supported 00:06:31.561 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:31.561 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:31.561 EAL: VFIO support initialized 00:06:31.561 EAL: Ask a virtual area of 0x2e000 bytes 00:06:31.561 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:31.561 EAL: Setting up physically contiguous memory... 
00:06:31.561 EAL: Setting maximum number of open files to 524288 00:06:31.561 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:31.561 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:31.561 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:31.561 EAL: Ask a virtual area of 0x61000 bytes 00:06:31.561 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:31.561 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:31.561 EAL: Ask a virtual area of 0x400000000 bytes 00:06:31.561 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:31.561 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:31.561 EAL: Ask a virtual area of 0x61000 bytes 00:06:31.561 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:31.561 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:31.561 EAL: Ask a virtual area of 0x400000000 bytes 00:06:31.561 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:31.561 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:31.561 EAL: Ask a virtual area of 0x61000 bytes 00:06:31.561 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:31.561 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:31.561 EAL: Ask a virtual area of 0x400000000 bytes 00:06:31.561 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:31.561 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:31.561 EAL: Ask a virtual area of 0x61000 bytes 00:06:31.561 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:31.561 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:31.561 EAL: Ask a virtual area of 0x400000000 bytes 00:06:31.561 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:31.561 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:31.561 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:06:31.561 EAL: Ask a virtual area of 0x61000 bytes 00:06:31.561 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:31.561 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:31.561 EAL: Ask a virtual area of 0x400000000 bytes 00:06:31.561 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:31.561 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:31.561 EAL: Ask a virtual area of 0x61000 bytes 00:06:31.561 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:31.561 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:31.561 EAL: Ask a virtual area of 0x400000000 bytes 00:06:31.561 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:31.561 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:31.561 EAL: Ask a virtual area of 0x61000 bytes 00:06:31.561 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:31.561 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:31.561 EAL: Ask a virtual area of 0x400000000 bytes 00:06:31.561 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:31.561 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:31.561 EAL: Ask a virtual area of 0x61000 bytes 00:06:31.561 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:06:31.561 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:31.561 EAL: Ask a virtual area of 0x400000000 bytes 00:06:31.561 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:06:31.561 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:31.561 EAL: Hugepages will be freed exactly as allocated. 
00:06:31.561 EAL: No shared files mode enabled, IPC is disabled 00:06:31.561 EAL: No shared files mode enabled, IPC is disabled 00:06:31.561 EAL: TSC frequency is ~2500000 KHz 00:06:31.561 EAL: Main lcore 0 is ready (tid=7fcbaa170b40;cpuset=[0]) 00:06:31.561 EAL: Trying to obtain current memory policy. 00:06:31.561 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:31.561 EAL: Restoring previous memory policy: 0 00:06:31.561 EAL: request: mp_malloc_sync 00:06:31.562 EAL: No shared files mode enabled, IPC is disabled 00:06:31.562 EAL: Heap on socket 0 was expanded by 2MB 00:06:31.562 EAL: PCI device 0000:1a:01.0 on NUMA socket 0 00:06:31.562 EAL: probe driver: 8086:37c9 qat 00:06:31.562 EAL: PCI memory mapped at 0x202001000000 00:06:31.562 EAL: PCI memory mapped at 0x202001001000 00:06:31.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.0 (socket 0) 00:06:31.562 EAL: PCI device 0000:1a:01.1 on NUMA socket 0 00:06:31.562 EAL: probe driver: 8086:37c9 qat 00:06:31.562 EAL: PCI memory mapped at 0x202001002000 00:06:31.562 EAL: PCI memory mapped at 0x202001003000 00:06:31.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.1 (socket 0) 00:06:31.562 EAL: PCI device 0000:1a:01.2 on NUMA socket 0 00:06:31.562 EAL: probe driver: 8086:37c9 qat 00:06:31.562 EAL: PCI memory mapped at 0x202001004000 00:06:31.562 EAL: PCI memory mapped at 0x202001005000 00:06:31.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.2 (socket 0) 00:06:31.562 EAL: PCI device 0000:1a:01.3 on NUMA socket 0 00:06:31.562 EAL: probe driver: 8086:37c9 qat 00:06:31.562 EAL: PCI memory mapped at 0x202001006000 00:06:31.562 EAL: PCI memory mapped at 0x202001007000 00:06:31.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.3 (socket 0) 00:06:31.562 EAL: PCI device 0000:1a:01.4 on NUMA socket 0 00:06:31.562 EAL: probe driver: 8086:37c9 qat 00:06:31.562 EAL: PCI memory mapped at 0x202001008000 00:06:31.562 EAL: PCI memory mapped at 0x202001009000 00:06:31.562 EAL: 
Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.4 (socket 0) 00:06:31.562 EAL: PCI device 0000:1a:01.5 on NUMA socket 0 00:06:31.562 EAL: probe driver: 8086:37c9 qat 00:06:31.562 EAL: PCI memory mapped at 0x20200100a000 00:06:31.562 EAL: PCI memory mapped at 0x20200100b000 00:06:31.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.5 (socket 0) 00:06:31.562 EAL: PCI device 0000:1a:01.6 on NUMA socket 0 00:06:31.562 EAL: probe driver: 8086:37c9 qat 00:06:31.562 EAL: PCI memory mapped at 0x20200100c000 00:06:31.562 EAL: PCI memory mapped at 0x20200100d000 00:06:31.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.6 (socket 0) 00:06:31.562 EAL: PCI device 0000:1a:01.7 on NUMA socket 0 00:06:31.562 EAL: probe driver: 8086:37c9 qat 00:06:31.562 EAL: PCI memory mapped at 0x20200100e000 00:06:31.562 EAL: PCI memory mapped at 0x20200100f000 00:06:31.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.7 (socket 0) 00:06:31.562 EAL: PCI device 0000:1a:02.0 on NUMA socket 0 00:06:31.562 EAL: probe driver: 8086:37c9 qat 00:06:31.562 EAL: PCI memory mapped at 0x202001010000 00:06:31.562 EAL: PCI memory mapped at 0x202001011000 00:06:31.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.0 (socket 0) 00:06:31.562 EAL: PCI device 0000:1a:02.1 on NUMA socket 0 00:06:31.562 EAL: probe driver: 8086:37c9 qat 00:06:31.562 EAL: PCI memory mapped at 0x202001012000 00:06:31.562 EAL: PCI memory mapped at 0x202001013000 00:06:31.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.1 (socket 0) 00:06:31.562 EAL: PCI device 0000:1a:02.2 on NUMA socket 0 00:06:31.562 EAL: probe driver: 8086:37c9 qat 00:06:31.562 EAL: PCI memory mapped at 0x202001014000 00:06:31.562 EAL: PCI memory mapped at 0x202001015000 00:06:31.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.2 (socket 0) 00:06:31.562 EAL: PCI device 0000:1a:02.3 on NUMA socket 0 00:06:31.562 EAL: probe driver: 8086:37c9 qat 00:06:31.562 EAL: PCI memory mapped at 
0x202001016000 00:06:31.562 EAL: PCI memory mapped at 0x202001017000 00:06:31.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.3 (socket 0) 00:06:31.562 EAL: PCI device 0000:1a:02.4 on NUMA socket 0 00:06:31.562 EAL: probe driver: 8086:37c9 qat 00:06:31.562 EAL: PCI memory mapped at 0x202001018000 00:06:31.562 EAL: PCI memory mapped at 0x202001019000 00:06:31.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.4 (socket 0) 00:06:31.562 EAL: PCI device 0000:1a:02.5 on NUMA socket 0 00:06:31.562 EAL: probe driver: 8086:37c9 qat 00:06:31.562 EAL: PCI memory mapped at 0x20200101a000 00:06:31.562 EAL: PCI memory mapped at 0x20200101b000 00:06:31.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.5 (socket 0) 00:06:31.562 EAL: PCI device 0000:1a:02.6 on NUMA socket 0 00:06:31.562 EAL: probe driver: 8086:37c9 qat 00:06:31.562 EAL: PCI memory mapped at 0x20200101c000 00:06:31.562 EAL: PCI memory mapped at 0x20200101d000 00:06:31.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.6 (socket 0) 00:06:31.562 EAL: PCI device 0000:1a:02.7 on NUMA socket 0 00:06:31.562 EAL: probe driver: 8086:37c9 qat 00:06:31.562 EAL: PCI memory mapped at 0x20200101e000 00:06:31.562 EAL: PCI memory mapped at 0x20200101f000 00:06:31.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.7 (socket 0) 00:06:31.562 EAL: PCI device 0000:1c:01.0 on NUMA socket 0 00:06:31.562 EAL: probe driver: 8086:37c9 qat 00:06:31.562 EAL: PCI memory mapped at 0x202001020000 00:06:31.562 EAL: PCI memory mapped at 0x202001021000 00:06:31.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.0 (socket 0) 00:06:31.562 EAL: PCI device 0000:1c:01.1 on NUMA socket 0 00:06:31.562 EAL: probe driver: 8086:37c9 qat 00:06:31.562 EAL: PCI memory mapped at 0x202001022000 00:06:31.562 EAL: PCI memory mapped at 0x202001023000 00:06:31.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.1 (socket 0) 00:06:31.562 EAL: PCI device 0000:1c:01.2 on NUMA socket 0 
00:06:31.562 EAL: probe driver: 8086:37c9 qat 00:06:31.562 EAL: PCI memory mapped at 0x202001024000 00:06:31.562 EAL: PCI memory mapped at 0x202001025000 00:06:31.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.2 (socket 0) 00:06:31.562 EAL: PCI device 0000:1c:01.3 on NUMA socket 0 00:06:31.562 EAL: probe driver: 8086:37c9 qat 00:06:31.562 EAL: PCI memory mapped at 0x202001026000 00:06:31.562 EAL: PCI memory mapped at 0x202001027000 00:06:31.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.3 (socket 0) 00:06:31.562 EAL: PCI device 0000:1c:01.4 on NUMA socket 0 00:06:31.562 EAL: probe driver: 8086:37c9 qat 00:06:31.562 EAL: PCI memory mapped at 0x202001028000 00:06:31.562 EAL: PCI memory mapped at 0x202001029000 00:06:31.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.4 (socket 0) 00:06:31.562 EAL: PCI device 0000:1c:01.5 on NUMA socket 0 00:06:31.562 EAL: probe driver: 8086:37c9 qat 00:06:31.562 EAL: PCI memory mapped at 0x20200102a000 00:06:31.562 EAL: PCI memory mapped at 0x20200102b000 00:06:31.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.5 (socket 0) 00:06:31.562 EAL: PCI device 0000:1c:01.6 on NUMA socket 0 00:06:31.562 EAL: probe driver: 8086:37c9 qat 00:06:31.562 EAL: PCI memory mapped at 0x20200102c000 00:06:31.562 EAL: PCI memory mapped at 0x20200102d000 00:06:31.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.6 (socket 0) 00:06:31.562 EAL: PCI device 0000:1c:01.7 on NUMA socket 0 00:06:31.562 EAL: probe driver: 8086:37c9 qat 00:06:31.562 EAL: PCI memory mapped at 0x20200102e000 00:06:31.562 EAL: PCI memory mapped at 0x20200102f000 00:06:31.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.7 (socket 0) 00:06:31.562 EAL: PCI device 0000:1c:02.0 on NUMA socket 0 00:06:31.562 EAL: probe driver: 8086:37c9 qat 00:06:31.562 EAL: PCI memory mapped at 0x202001030000 00:06:31.562 EAL: PCI memory mapped at 0x202001031000 00:06:31.562 EAL: Probe PCI driver: qat (8086:37c9) device: 
0000:1c:02.0 (socket 0) 00:06:31.562 EAL: PCI device 0000:1c:02.1 on NUMA socket 0 00:06:31.562 EAL: probe driver: 8086:37c9 qat 00:06:31.562 EAL: PCI memory mapped at 0x202001032000 00:06:31.562 EAL: PCI memory mapped at 0x202001033000 00:06:31.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.1 (socket 0) 00:06:31.562 EAL: PCI device 0000:1c:02.2 on NUMA socket 0 00:06:31.562 EAL: probe driver: 8086:37c9 qat 00:06:31.562 EAL: PCI memory mapped at 0x202001034000 00:06:31.562 EAL: PCI memory mapped at 0x202001035000 00:06:31.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.2 (socket 0) 00:06:31.562 EAL: PCI device 0000:1c:02.3 on NUMA socket 0 00:06:31.562 EAL: probe driver: 8086:37c9 qat 00:06:31.562 EAL: PCI memory mapped at 0x202001036000 00:06:31.562 EAL: PCI memory mapped at 0x202001037000 00:06:31.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.3 (socket 0) 00:06:31.562 EAL: PCI device 0000:1c:02.4 on NUMA socket 0 00:06:31.562 EAL: probe driver: 8086:37c9 qat 00:06:31.562 EAL: PCI memory mapped at 0x202001038000 00:06:31.562 EAL: PCI memory mapped at 0x202001039000 00:06:31.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.4 (socket 0) 00:06:31.562 EAL: PCI device 0000:1c:02.5 on NUMA socket 0 00:06:31.562 EAL: probe driver: 8086:37c9 qat 00:06:31.562 EAL: PCI memory mapped at 0x20200103a000 00:06:31.562 EAL: PCI memory mapped at 0x20200103b000 00:06:31.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.5 (socket 0) 00:06:31.562 EAL: PCI device 0000:1c:02.6 on NUMA socket 0 00:06:31.562 EAL: probe driver: 8086:37c9 qat 00:06:31.562 EAL: PCI memory mapped at 0x20200103c000 00:06:31.562 EAL: PCI memory mapped at 0x20200103d000 00:06:31.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.6 (socket 0) 00:06:31.562 EAL: PCI device 0000:1c:02.7 on NUMA socket 0 00:06:31.562 EAL: probe driver: 8086:37c9 qat 00:06:31.562 EAL: PCI memory mapped at 0x20200103e000 00:06:31.562 EAL: PCI memory 
mapped at 0x20200103f000 00:06:31.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.7 (socket 0) 00:06:31.562 EAL: PCI device 0000:1e:01.0 on NUMA socket 0 00:06:31.562 EAL: probe driver: 8086:37c9 qat 00:06:31.562 EAL: PCI memory mapped at 0x202001040000 00:06:31.562 EAL: PCI memory mapped at 0x202001041000 00:06:31.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.0 (socket 0) 00:06:31.562 EAL: PCI device 0000:1e:01.1 on NUMA socket 0 00:06:31.562 EAL: probe driver: 8086:37c9 qat 00:06:31.562 EAL: PCI memory mapped at 0x202001042000 00:06:31.562 EAL: PCI memory mapped at 0x202001043000 00:06:31.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.1 (socket 0) 00:06:31.562 EAL: PCI device 0000:1e:01.2 on NUMA socket 0 00:06:31.562 EAL: probe driver: 8086:37c9 qat 00:06:31.562 EAL: PCI memory mapped at 0x202001044000 00:06:31.562 EAL: PCI memory mapped at 0x202001045000 00:06:31.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.2 (socket 0) 00:06:31.562 EAL: PCI device 0000:1e:01.3 on NUMA socket 0 00:06:31.562 EAL: probe driver: 8086:37c9 qat 00:06:31.562 EAL: PCI memory mapped at 0x202001046000 00:06:31.562 EAL: PCI memory mapped at 0x202001047000 00:06:31.562 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.3 (socket 0) 00:06:31.562 EAL: PCI device 0000:1e:01.4 on NUMA socket 0 00:06:31.562 EAL: probe driver: 8086:37c9 qat 00:06:31.563 EAL: PCI memory mapped at 0x202001048000 00:06:31.563 EAL: PCI memory mapped at 0x202001049000 00:06:31.563 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.4 (socket 0) 00:06:31.563 EAL: PCI device 0000:1e:01.5 on NUMA socket 0 00:06:31.563 EAL: probe driver: 8086:37c9 qat 00:06:31.563 EAL: PCI memory mapped at 0x20200104a000 00:06:31.563 EAL: PCI memory mapped at 0x20200104b000 00:06:31.563 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.5 (socket 0) 00:06:31.563 EAL: PCI device 0000:1e:01.6 on NUMA socket 0 00:06:31.563 EAL: probe driver: 8086:37c9 qat 
00:06:31.563 EAL: PCI memory mapped at 0x20200104c000 00:06:31.563 EAL: PCI memory mapped at 0x20200104d000 00:06:31.563 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.6 (socket 0) 00:06:31.563 EAL: PCI device 0000:1e:01.7 on NUMA socket 0 00:06:31.563 EAL: probe driver: 8086:37c9 qat 00:06:31.563 EAL: PCI memory mapped at 0x20200104e000 00:06:31.563 EAL: PCI memory mapped at 0x20200104f000 00:06:31.563 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.7 (socket 0) 00:06:31.563 EAL: PCI device 0000:1e:02.0 on NUMA socket 0 00:06:31.563 EAL: probe driver: 8086:37c9 qat 00:06:31.563 EAL: PCI memory mapped at 0x202001050000 00:06:31.563 EAL: PCI memory mapped at 0x202001051000 00:06:31.563 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.0 (socket 0) 00:06:31.563 EAL: PCI device 0000:1e:02.1 on NUMA socket 0 00:06:31.563 EAL: probe driver: 8086:37c9 qat 00:06:31.563 EAL: PCI memory mapped at 0x202001052000 00:06:31.563 EAL: PCI memory mapped at 0x202001053000 00:06:31.563 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.1 (socket 0) 00:06:31.563 EAL: PCI device 0000:1e:02.2 on NUMA socket 0 00:06:31.563 EAL: probe driver: 8086:37c9 qat 00:06:31.563 EAL: PCI memory mapped at 0x202001054000 00:06:31.563 EAL: PCI memory mapped at 0x202001055000 00:06:31.563 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.2 (socket 0) 00:06:31.563 EAL: PCI device 0000:1e:02.3 on NUMA socket 0 00:06:31.563 EAL: probe driver: 8086:37c9 qat 00:06:31.563 EAL: PCI memory mapped at 0x202001056000 00:06:31.563 EAL: PCI memory mapped at 0x202001057000 00:06:31.563 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.3 (socket 0) 00:06:31.563 EAL: PCI device 0000:1e:02.4 on NUMA socket 0 00:06:31.563 EAL: probe driver: 8086:37c9 qat 00:06:31.563 EAL: PCI memory mapped at 0x202001058000 00:06:31.563 EAL: PCI memory mapped at 0x202001059000 00:06:31.563 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.4 (socket 0) 00:06:31.563 EAL: PCI 
device 0000:1e:02.5 on NUMA socket 0 00:06:31.563 EAL: probe driver: 8086:37c9 qat 00:06:31.563 EAL: PCI memory mapped at 0x20200105a000 00:06:31.563 EAL: PCI memory mapped at 0x20200105b000 00:06:31.563 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.5 (socket 0) 00:06:31.563 EAL: PCI device 0000:1e:02.6 on NUMA socket 0 00:06:31.563 EAL: probe driver: 8086:37c9 qat 00:06:31.563 EAL: PCI memory mapped at 0x20200105c000 00:06:31.563 EAL: PCI memory mapped at 0x20200105d000 00:06:31.563 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.6 (socket 0) 00:06:31.563 EAL: PCI device 0000:1e:02.7 on NUMA socket 0 00:06:31.563 EAL: probe driver: 8086:37c9 qat 00:06:31.563 EAL: PCI memory mapped at 0x20200105e000 00:06:31.563 EAL: PCI memory mapped at 0x20200105f000 00:06:31.563 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.7 (socket 0) 00:06:31.563 EAL: PCI device 0000:3d:01.0 on NUMA socket 0 00:06:31.563 EAL: probe driver: 8086:37c9 qat 00:06:31.563 EAL: PCI memory mapped at 0x202001060000 00:06:31.563 EAL: PCI memory mapped at 0x202001061000 00:06:31.563 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0) 00:06:31.563 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.563 EAL: PCI memory unmapped at 0x202001060000 00:06:31.563 EAL: PCI memory unmapped at 0x202001061000 00:06:31.563 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:31.563 EAL: PCI device 0000:3d:01.1 on NUMA socket 0 00:06:31.563 EAL: probe driver: 8086:37c9 qat 00:06:31.563 EAL: PCI memory mapped at 0x202001062000 00:06:31.563 EAL: PCI memory mapped at 0x202001063000 00:06:31.563 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.1 (socket 0) 00:06:31.563 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.563 EAL: PCI memory unmapped at 0x202001062000 00:06:31.563 EAL: PCI memory unmapped at 0x202001063000 00:06:31.563 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:31.563 EAL: PCI device 
0000:3d:01.2 on NUMA socket 0 00:06:31.563 EAL: probe driver: 8086:37c9 qat 00:06:31.563 EAL: PCI memory mapped at 0x202001064000 00:06:31.563 EAL: PCI memory mapped at 0x202001065000 00:06:31.563 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0) 00:06:31.563 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.563 EAL: PCI memory unmapped at 0x202001064000 00:06:31.563 EAL: PCI memory unmapped at 0x202001065000 00:06:31.563 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:31.563 EAL: PCI device 0000:3d:01.3 on NUMA socket 0 00:06:31.563 EAL: probe driver: 8086:37c9 qat 00:06:31.563 EAL: PCI memory mapped at 0x202001066000 00:06:31.563 EAL: PCI memory mapped at 0x202001067000 00:06:31.563 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0) 00:06:31.563 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.563 EAL: PCI memory unmapped at 0x202001066000 00:06:31.563 EAL: PCI memory unmapped at 0x202001067000 00:06:31.563 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:31.563 EAL: PCI device 0000:3d:01.4 on NUMA socket 0 00:06:31.563 EAL: probe driver: 8086:37c9 qat 00:06:31.563 EAL: PCI memory mapped at 0x202001068000 00:06:31.563 EAL: PCI memory mapped at 0x202001069000 00:06:31.563 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0) 00:06:31.563 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.563 EAL: PCI memory unmapped at 0x202001068000 00:06:31.563 EAL: PCI memory unmapped at 0x202001069000 00:06:31.563 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:31.563 EAL: PCI device 0000:3d:01.5 on NUMA socket 0 00:06:31.563 EAL: probe driver: 8086:37c9 qat 00:06:31.563 EAL: PCI memory mapped at 0x20200106a000 00:06:31.563 EAL: PCI memory mapped at 0x20200106b000 00:06:31.563 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0) 00:06:31.563 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:06:31.563 EAL: PCI memory unmapped at 0x20200106a000 00:06:31.563 EAL: PCI memory unmapped at 0x20200106b000 00:06:31.563 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:31.563 EAL: PCI device 0000:3d:01.6 on NUMA socket 0 00:06:31.563 EAL: probe driver: 8086:37c9 qat 00:06:31.563 EAL: PCI memory mapped at 0x20200106c000 00:06:31.563 EAL: PCI memory mapped at 0x20200106d000 00:06:31.563 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0) 00:06:31.563 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.563 EAL: PCI memory unmapped at 0x20200106c000 00:06:31.563 EAL: PCI memory unmapped at 0x20200106d000 00:06:31.563 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:31.563 EAL: PCI device 0000:3d:01.7 on NUMA socket 0 00:06:31.563 EAL: probe driver: 8086:37c9 qat 00:06:31.563 EAL: PCI memory mapped at 0x20200106e000 00:06:31.563 EAL: PCI memory mapped at 0x20200106f000 00:06:31.563 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0) 00:06:31.563 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.563 EAL: PCI memory unmapped at 0x20200106e000 00:06:31.563 EAL: PCI memory unmapped at 0x20200106f000 00:06:31.563 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:31.563 EAL: PCI device 0000:3d:02.0 on NUMA socket 0 00:06:31.563 EAL: probe driver: 8086:37c9 qat 00:06:31.563 EAL: PCI memory mapped at 0x202001070000 00:06:31.563 EAL: PCI memory mapped at 0x202001071000 00:06:31.563 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0) 00:06:31.563 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.563 EAL: PCI memory unmapped at 0x202001070000 00:06:31.563 EAL: PCI memory unmapped at 0x202001071000 00:06:31.563 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:31.563 EAL: PCI device 0000:3d:02.1 on NUMA socket 0 00:06:31.563 EAL: probe driver: 8086:37c9 qat 00:06:31.563 EAL: PCI memory mapped at 0x202001072000 00:06:31.563 
EAL: PCI memory mapped at 0x202001073000 00:06:31.563 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0) 00:06:31.563 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.563 EAL: PCI memory unmapped at 0x202001072000 00:06:31.563 EAL: PCI memory unmapped at 0x202001073000 00:06:31.563 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:31.563 EAL: PCI device 0000:3d:02.2 on NUMA socket 0 00:06:31.563 EAL: probe driver: 8086:37c9 qat 00:06:31.563 EAL: PCI memory mapped at 0x202001074000 00:06:31.563 EAL: PCI memory mapped at 0x202001075000 00:06:31.563 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0) 00:06:31.563 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.563 EAL: PCI memory unmapped at 0x202001074000 00:06:31.563 EAL: PCI memory unmapped at 0x202001075000 00:06:31.563 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:31.563 EAL: PCI device 0000:3d:02.3 on NUMA socket 0 00:06:31.563 EAL: probe driver: 8086:37c9 qat 00:06:31.563 EAL: PCI memory mapped at 0x202001076000 00:06:31.563 EAL: PCI memory mapped at 0x202001077000 00:06:31.563 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.3 (socket 0) 00:06:31.563 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.563 EAL: PCI memory unmapped at 0x202001076000 00:06:31.563 EAL: PCI memory unmapped at 0x202001077000 00:06:31.563 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:31.563 EAL: PCI device 0000:3d:02.4 on NUMA socket 0 00:06:31.563 EAL: probe driver: 8086:37c9 qat 00:06:31.563 EAL: PCI memory mapped at 0x202001078000 00:06:31.563 EAL: PCI memory mapped at 0x202001079000 00:06:31.563 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.4 (socket 0) 00:06:31.563 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.563 EAL: PCI memory unmapped at 0x202001078000 00:06:31.563 EAL: PCI memory unmapped at 0x202001079000 00:06:31.563 EAL: Requested device 
0000:3d:02.4 cannot be used 00:06:31.563 EAL: PCI device 0000:3d:02.5 on NUMA socket 0 00:06:31.563 EAL: probe driver: 8086:37c9 qat 00:06:31.563 EAL: PCI memory mapped at 0x20200107a000 00:06:31.563 EAL: PCI memory mapped at 0x20200107b000 00:06:31.564 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.5 (socket 0) 00:06:31.564 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.564 EAL: PCI memory unmapped at 0x20200107a000 00:06:31.564 EAL: PCI memory unmapped at 0x20200107b000 00:06:31.564 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:31.564 EAL: PCI device 0000:3d:02.6 on NUMA socket 0 00:06:31.564 EAL: probe driver: 8086:37c9 qat 00:06:31.564 EAL: PCI memory mapped at 0x20200107c000 00:06:31.564 EAL: PCI memory mapped at 0x20200107d000 00:06:31.564 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.6 (socket 0) 00:06:31.564 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.564 EAL: PCI memory unmapped at 0x20200107c000 00:06:31.564 EAL: PCI memory unmapped at 0x20200107d000 00:06:31.564 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:31.564 EAL: PCI device 0000:3d:02.7 on NUMA socket 0 00:06:31.564 EAL: probe driver: 8086:37c9 qat 00:06:31.564 EAL: PCI memory mapped at 0x20200107e000 00:06:31.564 EAL: PCI memory mapped at 0x20200107f000 00:06:31.564 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.7 (socket 0) 00:06:31.564 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.564 EAL: PCI memory unmapped at 0x20200107e000 00:06:31.564 EAL: PCI memory unmapped at 0x20200107f000 00:06:31.564 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:31.564 EAL: PCI device 0000:3f:01.0 on NUMA socket 0 00:06:31.564 EAL: probe driver: 8086:37c9 qat 00:06:31.564 EAL: PCI memory mapped at 0x202001080000 00:06:31.564 EAL: PCI memory mapped at 0x202001081000 00:06:31.564 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.0 (socket 0) 00:06:31.564 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.564 EAL: PCI memory unmapped at 0x202001080000 00:06:31.564 EAL: PCI memory unmapped at 0x202001081000 00:06:31.564 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:31.564 EAL: PCI device 0000:3f:01.1 on NUMA socket 0 00:06:31.564 EAL: probe driver: 8086:37c9 qat 00:06:31.564 EAL: PCI memory mapped at 0x202001082000 00:06:31.564 EAL: PCI memory mapped at 0x202001083000 00:06:31.564 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.1 (socket 0) 00:06:31.564 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.564 EAL: PCI memory unmapped at 0x202001082000 00:06:31.564 EAL: PCI memory unmapped at 0x202001083000 00:06:31.564 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:31.564 EAL: PCI device 0000:3f:01.2 on NUMA socket 0 00:06:31.564 EAL: probe driver: 8086:37c9 qat 00:06:31.564 EAL: PCI memory mapped at 0x202001084000 00:06:31.564 EAL: PCI memory mapped at 0x202001085000 00:06:31.564 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.2 (socket 0) 00:06:31.564 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.564 EAL: PCI memory unmapped at 0x202001084000 00:06:31.564 EAL: PCI memory unmapped at 0x202001085000 00:06:31.564 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:31.564 EAL: PCI device 0000:3f:01.3 on NUMA socket 0 00:06:31.564 EAL: probe driver: 8086:37c9 qat 00:06:31.564 EAL: PCI memory mapped at 0x202001086000 00:06:31.564 EAL: PCI memory mapped at 0x202001087000 00:06:31.564 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.3 (socket 0) 00:06:31.564 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.564 EAL: PCI memory unmapped at 0x202001086000 00:06:31.564 EAL: PCI memory unmapped at 0x202001087000 00:06:31.564 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:31.564 EAL: PCI device 0000:3f:01.4 on NUMA socket 0 00:06:31.564 EAL: probe driver: 8086:37c9 qat 
00:06:31.564 EAL: PCI memory mapped at 0x202001088000 00:06:31.564 EAL: PCI memory mapped at 0x202001089000 00:06:31.564 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.4 (socket 0) 00:06:31.564 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.564 EAL: PCI memory unmapped at 0x202001088000 00:06:31.564 EAL: PCI memory unmapped at 0x202001089000 00:06:31.564 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:31.564 EAL: PCI device 0000:3f:01.5 on NUMA socket 0 00:06:31.564 EAL: probe driver: 8086:37c9 qat 00:06:31.564 EAL: PCI memory mapped at 0x20200108a000 00:06:31.564 EAL: PCI memory mapped at 0x20200108b000 00:06:31.564 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.5 (socket 0) 00:06:31.564 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.564 EAL: PCI memory unmapped at 0x20200108a000 00:06:31.564 EAL: PCI memory unmapped at 0x20200108b000 00:06:31.564 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:31.564 EAL: PCI device 0000:3f:01.6 on NUMA socket 0 00:06:31.564 EAL: probe driver: 8086:37c9 qat 00:06:31.564 EAL: PCI memory mapped at 0x20200108c000 00:06:31.564 EAL: PCI memory mapped at 0x20200108d000 00:06:31.564 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.6 (socket 0) 00:06:31.564 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.564 EAL: PCI memory unmapped at 0x20200108c000 00:06:31.564 EAL: PCI memory unmapped at 0x20200108d000 00:06:31.564 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:31.564 EAL: PCI device 0000:3f:01.7 on NUMA socket 0 00:06:31.564 EAL: probe driver: 8086:37c9 qat 00:06:31.564 EAL: PCI memory mapped at 0x20200108e000 00:06:31.564 EAL: PCI memory mapped at 0x20200108f000 00:06:31.564 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.7 (socket 0) 00:06:31.564 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.564 EAL: PCI memory unmapped at 0x20200108e000 00:06:31.564 EAL: PCI 
memory unmapped at 0x20200108f000 00:06:31.564 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:31.564 EAL: PCI device 0000:3f:02.0 on NUMA socket 0 00:06:31.564 EAL: probe driver: 8086:37c9 qat 00:06:31.564 EAL: PCI memory mapped at 0x202001090000 00:06:31.564 EAL: PCI memory mapped at 0x202001091000 00:06:31.564 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.0 (socket 0) 00:06:31.564 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.564 EAL: PCI memory unmapped at 0x202001090000 00:06:31.564 EAL: PCI memory unmapped at 0x202001091000 00:06:31.564 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:31.564 EAL: PCI device 0000:3f:02.1 on NUMA socket 0 00:06:31.564 EAL: probe driver: 8086:37c9 qat 00:06:31.564 EAL: PCI memory mapped at 0x202001092000 00:06:31.564 EAL: PCI memory mapped at 0x202001093000 00:06:31.564 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.1 (socket 0) 00:06:31.564 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.564 EAL: PCI memory unmapped at 0x202001092000 00:06:31.564 EAL: PCI memory unmapped at 0x202001093000 00:06:31.564 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:31.564 EAL: PCI device 0000:3f:02.2 on NUMA socket 0 00:06:31.564 EAL: probe driver: 8086:37c9 qat 00:06:31.564 EAL: PCI memory mapped at 0x202001094000 00:06:31.564 EAL: PCI memory mapped at 0x202001095000 00:06:31.564 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.2 (socket 0) 00:06:31.564 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.564 EAL: PCI memory unmapped at 0x202001094000 00:06:31.564 EAL: PCI memory unmapped at 0x202001095000 00:06:31.564 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:31.564 EAL: PCI device 0000:3f:02.3 on NUMA socket 0 00:06:31.564 EAL: probe driver: 8086:37c9 qat 00:06:31.564 EAL: PCI memory mapped at 0x202001096000 00:06:31.564 EAL: PCI memory mapped at 0x202001097000 00:06:31.564 EAL: Probe PCI driver: qat 
(8086:37c9) device: 0000:3f:02.3 (socket 0) 00:06:31.564 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.564 EAL: PCI memory unmapped at 0x202001096000 00:06:31.564 EAL: PCI memory unmapped at 0x202001097000 00:06:31.564 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:31.564 EAL: PCI device 0000:3f:02.4 on NUMA socket 0 00:06:31.564 EAL: probe driver: 8086:37c9 qat 00:06:31.564 EAL: PCI memory mapped at 0x202001098000 00:06:31.564 EAL: PCI memory mapped at 0x202001099000 00:06:31.564 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.4 (socket 0) 00:06:31.564 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.564 EAL: PCI memory unmapped at 0x202001098000 00:06:31.564 EAL: PCI memory unmapped at 0x202001099000 00:06:31.564 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:31.564 EAL: PCI device 0000:3f:02.5 on NUMA socket 0 00:06:31.564 EAL: probe driver: 8086:37c9 qat 00:06:31.564 EAL: PCI memory mapped at 0x20200109a000 00:06:31.564 EAL: PCI memory mapped at 0x20200109b000 00:06:31.564 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.5 (socket 0) 00:06:31.564 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.564 EAL: PCI memory unmapped at 0x20200109a000 00:06:31.564 EAL: PCI memory unmapped at 0x20200109b000 00:06:31.564 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:31.564 EAL: PCI device 0000:3f:02.6 on NUMA socket 0 00:06:31.564 EAL: probe driver: 8086:37c9 qat 00:06:31.564 EAL: PCI memory mapped at 0x20200109c000 00:06:31.564 EAL: PCI memory mapped at 0x20200109d000 00:06:31.564 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.6 (socket 0) 00:06:31.564 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.564 EAL: PCI memory unmapped at 0x20200109c000 00:06:31.564 EAL: PCI memory unmapped at 0x20200109d000 00:06:31.564 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:31.564 EAL: PCI device 0000:3f:02.7 on NUMA 
socket 0 00:06:31.564 EAL: probe driver: 8086:37c9 qat 00:06:31.564 EAL: PCI memory mapped at 0x20200109e000 00:06:31.564 EAL: PCI memory mapped at 0x20200109f000 00:06:31.564 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.7 (socket 0) 00:06:31.564 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.564 EAL: PCI memory unmapped at 0x20200109e000 00:06:31.564 EAL: PCI memory unmapped at 0x20200109f000 00:06:31.564 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:31.564 EAL: No shared files mode enabled, IPC is disabled 00:06:31.824 EAL: No shared files mode enabled, IPC is disabled 00:06:31.824 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:31.824 EAL: Mem event callback 'spdk:(nil)' registered 00:06:31.824 00:06:31.824 00:06:31.824 CUnit - A unit testing framework for C - Version 2.1-3 00:06:31.824 http://cunit.sourceforge.net/ 00:06:31.824 00:06:31.824 00:06:31.824 Suite: components_suite 00:06:32.084 Test: vtophys_malloc_test ...passed 00:06:32.084 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:32.084 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:32.084 EAL: Restoring previous memory policy: 4 00:06:32.084 EAL: Calling mem event callback 'spdk:(nil)' 00:06:32.084 EAL: request: mp_malloc_sync 00:06:32.084 EAL: No shared files mode enabled, IPC is disabled 00:06:32.084 EAL: Heap on socket 0 was expanded by 4MB 00:06:32.084 EAL: Calling mem event callback 'spdk:(nil)' 00:06:32.084 EAL: request: mp_malloc_sync 00:06:32.084 EAL: No shared files mode enabled, IPC is disabled 00:06:32.084 EAL: Heap on socket 0 was shrunk by 4MB 00:06:32.084 EAL: Trying to obtain current memory policy. 
00:06:32.084 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:32.084 EAL: Restoring previous memory policy: 4 00:06:32.084 EAL: Calling mem event callback 'spdk:(nil)' 00:06:32.084 EAL: request: mp_malloc_sync 00:06:32.084 EAL: No shared files mode enabled, IPC is disabled 00:06:32.084 EAL: Heap on socket 0 was expanded by 6MB 00:06:32.084 EAL: Calling mem event callback 'spdk:(nil)' 00:06:32.084 EAL: request: mp_malloc_sync 00:06:32.084 EAL: No shared files mode enabled, IPC is disabled 00:06:32.084 EAL: Heap on socket 0 was shrunk by 6MB 00:06:32.343 EAL: Trying to obtain current memory policy. 00:06:32.343 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:32.343 EAL: Restoring previous memory policy: 4 00:06:32.343 EAL: Calling mem event callback 'spdk:(nil)' 00:06:32.343 EAL: request: mp_malloc_sync 00:06:32.343 EAL: No shared files mode enabled, IPC is disabled 00:06:32.343 EAL: Heap on socket 0 was expanded by 10MB 00:06:32.343 EAL: Calling mem event callback 'spdk:(nil)' 00:06:32.343 EAL: request: mp_malloc_sync 00:06:32.343 EAL: No shared files mode enabled, IPC is disabled 00:06:32.343 EAL: Heap on socket 0 was shrunk by 10MB 00:06:32.343 EAL: Trying to obtain current memory policy. 00:06:32.343 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:32.343 EAL: Restoring previous memory policy: 4 00:06:32.343 EAL: Calling mem event callback 'spdk:(nil)' 00:06:32.343 EAL: request: mp_malloc_sync 00:06:32.343 EAL: No shared files mode enabled, IPC is disabled 00:06:32.343 EAL: Heap on socket 0 was expanded by 18MB 00:06:32.343 EAL: Calling mem event callback 'spdk:(nil)' 00:06:32.343 EAL: request: mp_malloc_sync 00:06:32.343 EAL: No shared files mode enabled, IPC is disabled 00:06:32.343 EAL: Heap on socket 0 was shrunk by 18MB 00:06:32.343 EAL: Trying to obtain current memory policy. 
00:06:32.343 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:32.343 EAL: Restoring previous memory policy: 4 00:06:32.343 EAL: Calling mem event callback 'spdk:(nil)' 00:06:32.343 EAL: request: mp_malloc_sync 00:06:32.343 EAL: No shared files mode enabled, IPC is disabled 00:06:32.343 EAL: Heap on socket 0 was expanded by 34MB 00:06:32.343 EAL: Calling mem event callback 'spdk:(nil)' 00:06:32.343 EAL: request: mp_malloc_sync 00:06:32.343 EAL: No shared files mode enabled, IPC is disabled 00:06:32.343 EAL: Heap on socket 0 was shrunk by 34MB 00:06:32.603 EAL: Trying to obtain current memory policy. 00:06:32.603 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:32.603 EAL: Restoring previous memory policy: 4 00:06:32.603 EAL: Calling mem event callback 'spdk:(nil)' 00:06:32.603 EAL: request: mp_malloc_sync 00:06:32.603 EAL: No shared files mode enabled, IPC is disabled 00:06:32.603 EAL: Heap on socket 0 was expanded by 66MB 00:06:32.603 EAL: Calling mem event callback 'spdk:(nil)' 00:06:32.603 EAL: request: mp_malloc_sync 00:06:32.603 EAL: No shared files mode enabled, IPC is disabled 00:06:32.603 EAL: Heap on socket 0 was shrunk by 66MB 00:06:32.862 EAL: Trying to obtain current memory policy. 00:06:32.862 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:32.862 EAL: Restoring previous memory policy: 4 00:06:32.862 EAL: Calling mem event callback 'spdk:(nil)' 00:06:32.862 EAL: request: mp_malloc_sync 00:06:32.862 EAL: No shared files mode enabled, IPC is disabled 00:06:32.862 EAL: Heap on socket 0 was expanded by 130MB 00:06:33.121 EAL: Calling mem event callback 'spdk:(nil)' 00:06:33.380 EAL: request: mp_malloc_sync 00:06:33.380 EAL: No shared files mode enabled, IPC is disabled 00:06:33.380 EAL: Heap on socket 0 was shrunk by 130MB 00:06:33.640 EAL: Trying to obtain current memory policy. 
00:06:33.640 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:33.640 EAL: Restoring previous memory policy: 4 00:06:33.640 EAL: Calling mem event callback 'spdk:(nil)' 00:06:33.640 EAL: request: mp_malloc_sync 00:06:33.640 EAL: No shared files mode enabled, IPC is disabled 00:06:33.640 EAL: Heap on socket 0 was expanded by 258MB 00:06:34.209 EAL: Calling mem event callback 'spdk:(nil)' 00:06:34.468 EAL: request: mp_malloc_sync 00:06:34.468 EAL: No shared files mode enabled, IPC is disabled 00:06:34.468 EAL: Heap on socket 0 was shrunk by 258MB 00:06:35.033 EAL: Trying to obtain current memory policy. 00:06:35.033 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:35.033 EAL: Restoring previous memory policy: 4 00:06:35.033 EAL: Calling mem event callback 'spdk:(nil)' 00:06:35.033 EAL: request: mp_malloc_sync 00:06:35.033 EAL: No shared files mode enabled, IPC is disabled 00:06:35.033 EAL: Heap on socket 0 was expanded by 514MB 00:06:36.412 EAL: Calling mem event callback 'spdk:(nil)' 00:06:36.671 EAL: request: mp_malloc_sync 00:06:36.671 EAL: No shared files mode enabled, IPC is disabled 00:06:36.671 EAL: Heap on socket 0 was shrunk by 514MB 00:06:37.608 EAL: Trying to obtain current memory policy. 
00:06:37.608 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:37.867 EAL: Restoring previous memory policy: 4
00:06:37.867 EAL: Calling mem event callback 'spdk:(nil)'
00:06:37.867 EAL: request: mp_malloc_sync
00:06:37.867 EAL: No shared files mode enabled, IPC is disabled
00:06:37.867 EAL: Heap on socket 0 was expanded by 1026MB
00:06:41.167 EAL: Calling mem event callback 'spdk:(nil)'
00:06:41.167 EAL: request: mp_malloc_sync
00:06:41.167 EAL: No shared files mode enabled, IPC is disabled
00:06:41.167 EAL: Heap on socket 0 was shrunk by 1026MB
00:06:43.124 passed
00:06:43.124
00:06:43.124 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:43.124               suites      1      1    n/a      0        0
00:06:43.124                tests      2      2      2      0        0
00:06:43.124              asserts   6454   6454   6454      0      n/a
00:06:43.124
00:06:43.124 Elapsed time =   11.313 seconds
00:06:43.124 EAL: No shared files mode enabled, IPC is disabled
00:06:43.124 EAL: No shared files mode enabled, IPC is disabled
00:06:43.124 EAL: No shared files mode enabled, IPC is disabled
00:06:43.124
00:06:43.124 real	0m11.705s
00:06:43.124 user	0m10.621s
00:06:43.124 sys	0m1.004s
00:06:43.124 04:01:51 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:43.124 04:01:51 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:06:43.124 ************************************
00:06:43.124 END TEST env_vtophys
00:06:43.124 ************************************
00:06:43.124 04:01:51 env -- common/autotest_common.sh@1142 -- # return 0
00:06:43.124 04:01:51 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/pci/pci_ut
00:06:43.124 04:01:51 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:43.124 04:01:51 env -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:43.124 04:01:51 env -- common/autotest_common.sh@10 -- # set +x
00:06:43.124 ************************************
00:06:43.124 START TEST env_pci
00:06:43.124 ************************************
00:06:43.124 04:01:51
env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/pci/pci_ut
00:06:43.383
00:06:43.383
00:06:43.383 CUnit - A unit testing framework for C - Version 2.1-3
00:06:43.383 http://cunit.sourceforge.net/
00:06:43.383
00:06:43.383
00:06:43.383 Suite: pci
00:06:43.383 Test: pci_hook ...[2024-07-23 04:01:51.920023] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2532453 has claimed it
00:06:43.383 EAL: Cannot find device (10000:00:01.0)
00:06:43.383 EAL: Failed to attach device on primary process
00:06:43.383 passed
00:06:43.383
00:06:43.383 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:43.383               suites      1      1    n/a      0        0
00:06:43.383                tests      1      1      1      0        0
00:06:43.383              asserts     25     25     25      0      n/a
00:06:43.383
00:06:43.383 Elapsed time =    0.075 seconds
00:06:43.383
00:06:43.383 real	0m0.164s
00:06:43.383 user	0m0.056s
00:06:43.383 sys	0m0.107s
00:06:43.383 04:01:52 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:43.383 04:01:52 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:06:43.383 ************************************
00:06:43.383 END TEST env_pci
00:06:43.383 ************************************
00:06:43.383 04:01:52 env -- common/autotest_common.sh@1142 -- # return 0
00:06:43.383 04:01:52 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:06:43.383 04:01:52 env -- env/env.sh@15 -- # uname
00:06:43.383 04:01:52 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:06:43.383 04:01:52 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:06:43.383 04:01:52 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:06:43.383 04:01:52 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']'
00:06:43.383 04:01:52 env --
common/autotest_common.sh@1105 -- # xtrace_disable
00:06:43.383 04:01:52 env -- common/autotest_common.sh@10 -- # set +x
00:06:43.383 ************************************
00:06:43.383 START TEST env_dpdk_post_init
00:06:43.383 ************************************
00:06:43.383 04:01:52 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:06:43.644 EAL: Detected CPU lcores: 112
00:06:43.644 EAL: Detected NUMA nodes: 2
00:06:43.644 EAL: Detected shared linkage of DPDK
00:06:43.644 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:06:43.644 EAL: Selected IOVA mode 'PA'
00:06:43.644 EAL: VFIO support initialized
00:06:43.644 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.0 (socket 0)
00:06:43.644 CRYPTODEV: Creating cryptodev 0000:1a:01.0_qat_asym
00:06:43.644 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.0_qat_asym,socket id: 0, max queue pairs: 0
00:06:43.644 CRYPTODEV: Creating cryptodev 0000:1a:01.0_qat_sym
00:06:43.644 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.0_qat_sym,socket id: 0, max queue pairs: 0
00:06:43.644 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.1 (socket 0)
00:06:43.644 CRYPTODEV: Creating cryptodev 0000:1a:01.1_qat_asym
00:06:43.644 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.1_qat_asym,socket id: 0, max queue pairs: 0
00:06:43.644 CRYPTODEV: Creating cryptodev 0000:1a:01.1_qat_sym
00:06:43.644 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.1_qat_sym,socket id: 0, max queue pairs: 0
00:06:43.644 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.2 (socket 0)
00:06:43.644 CRYPTODEV: Creating cryptodev 0000:1a:01.2_qat_asym
00:06:43.644 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.2_qat_asym,socket id: 0, max queue pairs: 0
00:06:43.644 CRYPTODEV: Creating cryptodev 0000:1a:01.2_qat_sym
00:06:43.644 CRYPTODEV:
Initialisation parameters - name: 0000:1a:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.644 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.3 (socket 0) 00:06:43.644 CRYPTODEV: Creating cryptodev 0000:1a:01.3_qat_asym 00:06:43.644 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.644 CRYPTODEV: Creating cryptodev 0000:1a:01.3_qat_sym 00:06:43.644 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.644 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.4 (socket 0) 00:06:43.644 CRYPTODEV: Creating cryptodev 0000:1a:01.4_qat_asym 00:06:43.644 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.644 CRYPTODEV: Creating cryptodev 0000:1a:01.4_qat_sym 00:06:43.644 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.644 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.5 (socket 0) 00:06:43.644 CRYPTODEV: Creating cryptodev 0000:1a:01.5_qat_asym 00:06:43.644 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.644 CRYPTODEV: Creating cryptodev 0000:1a:01.5_qat_sym 00:06:43.644 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.644 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.6 (socket 0) 00:06:43.644 CRYPTODEV: Creating cryptodev 0000:1a:01.6_qat_asym 00:06:43.644 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.644 CRYPTODEV: Creating cryptodev 0000:1a:01.6_qat_sym 00:06:43.644 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.644 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.7 (socket 0) 00:06:43.644 CRYPTODEV: Creating cryptodev 0000:1a:01.7_qat_asym 
00:06:43.644 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.644 CRYPTODEV: Creating cryptodev 0000:1a:01.7_qat_sym 00:06:43.644 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.644 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.0 (socket 0) 00:06:43.644 CRYPTODEV: Creating cryptodev 0000:1a:02.0_qat_asym 00:06:43.644 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.644 CRYPTODEV: Creating cryptodev 0000:1a:02.0_qat_sym 00:06:43.644 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.644 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.1 (socket 0) 00:06:43.644 CRYPTODEV: Creating cryptodev 0000:1a:02.1_qat_asym 00:06:43.644 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.644 CRYPTODEV: Creating cryptodev 0000:1a:02.1_qat_sym 00:06:43.644 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.644 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.2 (socket 0) 00:06:43.644 CRYPTODEV: Creating cryptodev 0000:1a:02.2_qat_asym 00:06:43.644 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.644 CRYPTODEV: Creating cryptodev 0000:1a:02.2_qat_sym 00:06:43.644 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.644 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.3 (socket 0) 00:06:43.644 CRYPTODEV: Creating cryptodev 0000:1a:02.3_qat_asym 00:06:43.644 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.644 CRYPTODEV: Creating cryptodev 0000:1a:02.3_qat_sym 00:06:43.644 CRYPTODEV: Initialisation parameters - name: 
0000:1a:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.644 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.4 (socket 0) 00:06:43.644 CRYPTODEV: Creating cryptodev 0000:1a:02.4_qat_asym 00:06:43.644 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.644 CRYPTODEV: Creating cryptodev 0000:1a:02.4_qat_sym 00:06:43.644 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.644 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.5 (socket 0) 00:06:43.644 CRYPTODEV: Creating cryptodev 0000:1a:02.5_qat_asym 00:06:43.644 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.644 CRYPTODEV: Creating cryptodev 0000:1a:02.5_qat_sym 00:06:43.644 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.644 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.6 (socket 0) 00:06:43.644 CRYPTODEV: Creating cryptodev 0000:1a:02.6_qat_asym 00:06:43.644 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.644 CRYPTODEV: Creating cryptodev 0000:1a:02.6_qat_sym 00:06:43.644 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.644 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.7 (socket 0) 00:06:43.644 CRYPTODEV: Creating cryptodev 0000:1a:02.7_qat_asym 00:06:43.644 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.644 CRYPTODEV: Creating cryptodev 0000:1a:02.7_qat_sym 00:06:43.644 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.644 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.0 (socket 0) 00:06:43.644 CRYPTODEV: Creating cryptodev 0000:1c:01.0_qat_asym 00:06:43.644 CRYPTODEV: Initialisation 
parameters - name: 0000:1c:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.644 CRYPTODEV: Creating cryptodev 0000:1c:01.0_qat_sym 00:06:43.644 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.644 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.1 (socket 0) 00:06:43.644 CRYPTODEV: Creating cryptodev 0000:1c:01.1_qat_asym 00:06:43.644 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.644 CRYPTODEV: Creating cryptodev 0000:1c:01.1_qat_sym 00:06:43.644 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.644 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.2 (socket 0) 00:06:43.644 CRYPTODEV: Creating cryptodev 0000:1c:01.2_qat_asym 00:06:43.644 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.644 CRYPTODEV: Creating cryptodev 0000:1c:01.2_qat_sym 00:06:43.644 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.644 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.3 (socket 0) 00:06:43.644 CRYPTODEV: Creating cryptodev 0000:1c:01.3_qat_asym 00:06:43.644 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.644 CRYPTODEV: Creating cryptodev 0000:1c:01.3_qat_sym 00:06:43.644 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.644 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.4 (socket 0) 00:06:43.645 CRYPTODEV: Creating cryptodev 0000:1c:01.4_qat_asym 00:06:43.645 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.645 CRYPTODEV: Creating cryptodev 0000:1c:01.4_qat_sym 00:06:43.645 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.4_qat_sym,socket id: 0, max queue pairs: 
0 00:06:43.645 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.5 (socket 0) 00:06:43.645 CRYPTODEV: Creating cryptodev 0000:1c:01.5_qat_asym 00:06:43.645 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.645 CRYPTODEV: Creating cryptodev 0000:1c:01.5_qat_sym 00:06:43.645 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.645 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.6 (socket 0) 00:06:43.645 CRYPTODEV: Creating cryptodev 0000:1c:01.6_qat_asym 00:06:43.645 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.645 CRYPTODEV: Creating cryptodev 0000:1c:01.6_qat_sym 00:06:43.645 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.645 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.7 (socket 0) 00:06:43.645 CRYPTODEV: Creating cryptodev 0000:1c:01.7_qat_asym 00:06:43.645 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.645 CRYPTODEV: Creating cryptodev 0000:1c:01.7_qat_sym 00:06:43.645 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.645 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.0 (socket 0) 00:06:43.645 CRYPTODEV: Creating cryptodev 0000:1c:02.0_qat_asym 00:06:43.645 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.645 CRYPTODEV: Creating cryptodev 0000:1c:02.0_qat_sym 00:06:43.645 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.645 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.1 (socket 0) 00:06:43.645 CRYPTODEV: Creating cryptodev 0000:1c:02.1_qat_asym 00:06:43.645 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.1_qat_asym,socket id: 0, 
max queue pairs: 0 00:06:43.645 CRYPTODEV: Creating cryptodev 0000:1c:02.1_qat_sym 00:06:43.645 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.645 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.2 (socket 0) 00:06:43.645 CRYPTODEV: Creating cryptodev 0000:1c:02.2_qat_asym 00:06:43.645 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.645 CRYPTODEV: Creating cryptodev 0000:1c:02.2_qat_sym 00:06:43.645 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.645 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.3 (socket 0) 00:06:43.645 CRYPTODEV: Creating cryptodev 0000:1c:02.3_qat_asym 00:06:43.645 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.645 CRYPTODEV: Creating cryptodev 0000:1c:02.3_qat_sym 00:06:43.645 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.645 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.4 (socket 0) 00:06:43.645 CRYPTODEV: Creating cryptodev 0000:1c:02.4_qat_asym 00:06:43.645 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.645 CRYPTODEV: Creating cryptodev 0000:1c:02.4_qat_sym 00:06:43.645 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.645 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.5 (socket 0) 00:06:43.645 CRYPTODEV: Creating cryptodev 0000:1c:02.5_qat_asym 00:06:43.645 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.645 CRYPTODEV: Creating cryptodev 0000:1c:02.5_qat_sym 00:06:43.645 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.645 EAL: Probe PCI driver: qat (8086:37c9) 
device: 0000:1c:02.6 (socket 0) 00:06:43.645 CRYPTODEV: Creating cryptodev 0000:1c:02.6_qat_asym 00:06:43.645 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.645 CRYPTODEV: Creating cryptodev 0000:1c:02.6_qat_sym 00:06:43.645 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.645 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.7 (socket 0) 00:06:43.645 CRYPTODEV: Creating cryptodev 0000:1c:02.7_qat_asym 00:06:43.645 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.645 CRYPTODEV: Creating cryptodev 0000:1c:02.7_qat_sym 00:06:43.645 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.645 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.0 (socket 0) 00:06:43.645 CRYPTODEV: Creating cryptodev 0000:1e:01.0_qat_asym 00:06:43.645 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.645 CRYPTODEV: Creating cryptodev 0000:1e:01.0_qat_sym 00:06:43.645 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.645 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.1 (socket 0) 00:06:43.645 CRYPTODEV: Creating cryptodev 0000:1e:01.1_qat_asym 00:06:43.645 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.645 CRYPTODEV: Creating cryptodev 0000:1e:01.1_qat_sym 00:06:43.645 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.645 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.2 (socket 0) 00:06:43.645 CRYPTODEV: Creating cryptodev 0000:1e:01.2_qat_asym 00:06:43.645 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.645 CRYPTODEV: Creating 
cryptodev 0000:1e:01.2_qat_sym 00:06:43.645 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.645 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.3 (socket 0) 00:06:43.645 CRYPTODEV: Creating cryptodev 0000:1e:01.3_qat_asym 00:06:43.645 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.645 CRYPTODEV: Creating cryptodev 0000:1e:01.3_qat_sym 00:06:43.645 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.645 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.4 (socket 0) 00:06:43.645 CRYPTODEV: Creating cryptodev 0000:1e:01.4_qat_asym 00:06:43.645 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.645 CRYPTODEV: Creating cryptodev 0000:1e:01.4_qat_sym 00:06:43.645 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.645 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.5 (socket 0) 00:06:43.645 CRYPTODEV: Creating cryptodev 0000:1e:01.5_qat_asym 00:06:43.645 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.645 CRYPTODEV: Creating cryptodev 0000:1e:01.5_qat_sym 00:06:43.645 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.645 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.6 (socket 0) 00:06:43.645 CRYPTODEV: Creating cryptodev 0000:1e:01.6_qat_asym 00:06:43.645 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.645 CRYPTODEV: Creating cryptodev 0000:1e:01.6_qat_sym 00:06:43.645 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.645 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.7 (socket 0) 00:06:43.645 
CRYPTODEV: Creating cryptodev 0000:1e:01.7_qat_asym 00:06:43.645 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.645 CRYPTODEV: Creating cryptodev 0000:1e:01.7_qat_sym 00:06:43.645 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.645 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.0 (socket 0) 00:06:43.645 CRYPTODEV: Creating cryptodev 0000:1e:02.0_qat_asym 00:06:43.645 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.645 CRYPTODEV: Creating cryptodev 0000:1e:02.0_qat_sym 00:06:43.645 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.645 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.1 (socket 0) 00:06:43.645 CRYPTODEV: Creating cryptodev 0000:1e:02.1_qat_asym 00:06:43.645 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.645 CRYPTODEV: Creating cryptodev 0000:1e:02.1_qat_sym 00:06:43.645 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.645 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.2 (socket 0) 00:06:43.645 CRYPTODEV: Creating cryptodev 0000:1e:02.2_qat_asym 00:06:43.645 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.645 CRYPTODEV: Creating cryptodev 0000:1e:02.2_qat_sym 00:06:43.645 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.645 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.3 (socket 0) 00:06:43.645 CRYPTODEV: Creating cryptodev 0000:1e:02.3_qat_asym 00:06:43.645 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.645 CRYPTODEV: Creating cryptodev 0000:1e:02.3_qat_sym 00:06:43.645 
CRYPTODEV: Initialisation parameters - name: 0000:1e:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.645 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.4 (socket 0) 00:06:43.645 CRYPTODEV: Creating cryptodev 0000:1e:02.4_qat_asym 00:06:43.645 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.645 CRYPTODEV: Creating cryptodev 0000:1e:02.4_qat_sym 00:06:43.645 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.645 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.5 (socket 0) 00:06:43.645 CRYPTODEV: Creating cryptodev 0000:1e:02.5_qat_asym 00:06:43.645 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.645 CRYPTODEV: Creating cryptodev 0000:1e:02.5_qat_sym 00:06:43.645 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.645 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.6 (socket 0) 00:06:43.645 CRYPTODEV: Creating cryptodev 0000:1e:02.6_qat_asym 00:06:43.645 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.645 CRYPTODEV: Creating cryptodev 0000:1e:02.6_qat_sym 00:06:43.645 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.645 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.7 (socket 0) 00:06:43.645 CRYPTODEV: Creating cryptodev 0000:1e:02.7_qat_asym 00:06:43.646 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.646 CRYPTODEV: Creating cryptodev 0000:1e:02.7_qat_sym 00:06:43.646 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.646 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0) 00:06:43.646 qat_pci_device_allocate(): Reached maximum number 
of QAT devices 00:06:43.646 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:43.646 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.1 (socket 0) 00:06:43.646 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.646 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:43.646 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0) 00:06:43.646 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.646 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:43.646 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0) 00:06:43.646 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.646 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:43.646 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0) 00:06:43.646 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.646 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:43.646 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0) 00:06:43.646 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.646 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:43.646 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0) 00:06:43.646 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.646 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:43.646 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0) 00:06:43.646 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.646 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:43.646 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0) 00:06:43.646 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.646 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:43.646 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0) 00:06:43.646 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.646 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:43.646 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0) 00:06:43.646 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.646 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:43.646 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.3 (socket 0) 00:06:43.646 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.646 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:43.646 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.4 (socket 0) 00:06:43.646 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.646 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:43.646 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.5 (socket 0) 00:06:43.646 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.646 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:43.646 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.6 (socket 0) 00:06:43.646 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.646 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:43.646 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.7 (socket 0) 00:06:43.646 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.646 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:43.646 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.0 (socket 0) 00:06:43.646 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.646 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:43.646 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.1 (socket 0) 00:06:43.646 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.646 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:43.646 EAL: Probe PCI driver: qat (8086:37c9) device: 
0000:3f:01.2 (socket 0) 00:06:43.646 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.646 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:43.646 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.3 (socket 0) 00:06:43.646 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.646 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:43.646 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.4 (socket 0) 00:06:43.646 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.646 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:43.646 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.5 (socket 0) 00:06:43.646 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.646 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:43.646 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.6 (socket 0) 00:06:43.646 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.646 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:43.646 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.7 (socket 0) 00:06:43.646 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.646 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:43.646 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.0 (socket 0) 00:06:43.646 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.646 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:43.646 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.1 (socket 0) 00:06:43.646 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.646 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:43.646 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.2 (socket 0) 00:06:43.646 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.646 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:43.646 EAL: Probe PCI 
driver: qat (8086:37c9) device: 0000:3f:02.3 (socket 0) 00:06:43.646 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.646 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:43.646 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.4 (socket 0) 00:06:43.646 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.646 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:43.646 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.5 (socket 0) 00:06:43.646 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.646 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:43.646 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.6 (socket 0) 00:06:43.646 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.646 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:43.646 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.7 (socket 0) 00:06:43.646 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.646 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:43.646 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:43.906 EAL: Using IOMMU type 1 (Type 1) 00:06:43.906 EAL: Ignore mapping IO port bar(1) 00:06:43.906 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:06:43.906 EAL: Ignore mapping IO port bar(1) 00:06:43.906 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:06:43.906 EAL: Ignore mapping IO port bar(1) 00:06:43.906 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:06:43.906 EAL: Ignore mapping IO port bar(1) 00:06:43.906 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:06:43.906 EAL: Ignore mapping IO port bar(1) 00:06:43.906 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:06:43.906 EAL: Ignore mapping IO port bar(1) 00:06:43.906 EAL: Probe PCI driver: spdk_ioat 
(8086:2021) device: 0000:00:04.5 (socket 0) 00:06:43.906 EAL: Ignore mapping IO port bar(1) 00:06:43.906 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:06:43.906 EAL: Ignore mapping IO port bar(1) 00:06:43.906 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:06:43.906 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0) 00:06:43.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.906 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:43.906 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.1 (socket 0) 00:06:43.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.906 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:43.906 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0) 00:06:43.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.906 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:43.906 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0) 00:06:43.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.906 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:43.906 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0) 00:06:43.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.906 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:43.906 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0) 00:06:43.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.906 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:43.906 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0) 00:06:43.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.906 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:43.906 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0) 00:06:43.906 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.906 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:43.906 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0) 00:06:43.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.906 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:43.906 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0) 00:06:43.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.906 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:43.906 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0) 00:06:43.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.906 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:43.906 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.3 (socket 0) 00:06:43.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.906 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:43.906 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.4 (socket 0) 00:06:43.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.906 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:43.906 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.5 (socket 0) 00:06:43.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.906 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:43.906 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.6 (socket 0) 00:06:43.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.906 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:43.906 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.7 (socket 0) 00:06:43.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.906 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:43.906 EAL: Probe PCI driver: qat (8086:37c9) device: 
0000:3f:01.0 (socket 0) 00:06:43.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.906 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:43.906 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.1 (socket 0) 00:06:43.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.906 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:43.906 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.2 (socket 0) 00:06:43.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.906 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:43.906 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.3 (socket 0) 00:06:43.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.906 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:43.906 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.4 (socket 0) 00:06:43.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.906 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:43.906 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.5 (socket 0) 00:06:43.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.906 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:43.906 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.6 (socket 0) 00:06:43.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.906 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:43.906 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.7 (socket 0) 00:06:43.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.906 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:43.906 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.0 (socket 0) 00:06:43.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.906 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:43.906 EAL: Probe PCI 
driver: qat (8086:37c9) device: 0000:3f:02.1 (socket 0) 00:06:43.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.906 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:43.907 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.2 (socket 0) 00:06:43.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.907 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:43.907 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.3 (socket 0) 00:06:43.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.907 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:43.907 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.4 (socket 0) 00:06:43.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.907 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:43.907 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.5 (socket 0) 00:06:43.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.907 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:43.907 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.6 (socket 0) 00:06:43.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.907 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:43.907 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.7 (socket 0) 00:06:43.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.907 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:43.907 EAL: Ignore mapping IO port bar(1) 00:06:43.907 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:06:43.907 EAL: Ignore mapping IO port bar(1) 00:06:43.907 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:06:44.166 EAL: Ignore mapping IO port bar(1) 00:06:44.166 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:06:44.166 EAL: Ignore mapping IO 
port bar(1) 00:06:44.166 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:06:44.166 EAL: Ignore mapping IO port bar(1) 00:06:44.166 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:06:44.166 EAL: Ignore mapping IO port bar(1) 00:06:44.166 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:06:44.166 EAL: Ignore mapping IO port bar(1) 00:06:44.166 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:06:44.166 EAL: Ignore mapping IO port bar(1) 00:06:44.166 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:06:44.733 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:06:48.926 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:06:48.926 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001120000 00:06:49.185 Starting DPDK initialization... 00:06:49.185 Starting SPDK post initialization... 00:06:49.185 SPDK NVMe probe 00:06:49.185 Attaching to 0000:d8:00.0 00:06:49.185 Attached to 0000:d8:00.0 00:06:49.185 Cleaning up... 
00:06:49.185 00:06:49.185 real 0m5.690s 00:06:49.185 user 0m4.172s 00:06:49.185 sys 0m0.568s 00:06:49.185 04:01:57 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.185 04:01:57 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:49.185 ************************************ 00:06:49.185 END TEST env_dpdk_post_init 00:06:49.185 ************************************ 00:06:49.185 04:01:57 env -- common/autotest_common.sh@1142 -- # return 0 00:06:49.185 04:01:57 env -- env/env.sh@26 -- # uname 00:06:49.185 04:01:57 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:49.185 04:01:57 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:49.185 04:01:57 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:49.185 04:01:57 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.185 04:01:57 env -- common/autotest_common.sh@10 -- # set +x 00:06:49.185 ************************************ 00:06:49.185 START TEST env_mem_callbacks 00:06:49.185 ************************************ 00:06:49.185 04:01:57 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:49.185 EAL: Detected CPU lcores: 112 00:06:49.185 EAL: Detected NUMA nodes: 2 00:06:49.185 EAL: Detected shared linkage of DPDK 00:06:49.446 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:49.446 EAL: Selected IOVA mode 'PA' 00:06:49.446 EAL: VFIO support initialized 00:06:49.446 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.0 (socket 0) 00:06:49.446 CRYPTODEV: Creating cryptodev 0000:1a:01.0_qat_asym 00:06:49.446 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.446 CRYPTODEV: Creating cryptodev 0000:1a:01.0_qat_sym 00:06:49.446 CRYPTODEV: Initialisation parameters - name: 
0000:1a:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.446 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.1 (socket 0) 00:06:49.446 CRYPTODEV: Creating cryptodev 0000:1a:01.1_qat_asym 00:06:49.446 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.446 CRYPTODEV: Creating cryptodev 0000:1a:01.1_qat_sym 00:06:49.446 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.446 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.2 (socket 0) 00:06:49.446 CRYPTODEV: Creating cryptodev 0000:1a:01.2_qat_asym 00:06:49.446 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.446 CRYPTODEV: Creating cryptodev 0000:1a:01.2_qat_sym 00:06:49.446 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.446 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.3 (socket 0) 00:06:49.446 CRYPTODEV: Creating cryptodev 0000:1a:01.3_qat_asym 00:06:49.446 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.446 CRYPTODEV: Creating cryptodev 0000:1a:01.3_qat_sym 00:06:49.446 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.446 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.4 (socket 0) 00:06:49.446 CRYPTODEV: Creating cryptodev 0000:1a:01.4_qat_asym 00:06:49.446 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.446 CRYPTODEV: Creating cryptodev 0000:1a:01.4_qat_sym 00:06:49.446 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.446 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.5 (socket 0) 00:06:49.446 CRYPTODEV: Creating cryptodev 0000:1a:01.5_qat_asym 00:06:49.446 CRYPTODEV: Initialisation 
parameters - name: 0000:1a:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.446 CRYPTODEV: Creating cryptodev 0000:1a:01.5_qat_sym 00:06:49.446 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.446 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.6 (socket 0) 00:06:49.446 CRYPTODEV: Creating cryptodev 0000:1a:01.6_qat_asym 00:06:49.446 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.446 CRYPTODEV: Creating cryptodev 0000:1a:01.6_qat_sym 00:06:49.446 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.446 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.7 (socket 0) 00:06:49.446 CRYPTODEV: Creating cryptodev 0000:1a:01.7_qat_asym 00:06:49.446 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.446 CRYPTODEV: Creating cryptodev 0000:1a:01.7_qat_sym 00:06:49.447 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.447 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.0 (socket 0) 00:06:49.447 CRYPTODEV: Creating cryptodev 0000:1a:02.0_qat_asym 00:06:49.447 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.447 CRYPTODEV: Creating cryptodev 0000:1a:02.0_qat_sym 00:06:49.447 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.447 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.1 (socket 0) 00:06:49.447 CRYPTODEV: Creating cryptodev 0000:1a:02.1_qat_asym 00:06:49.447 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.447 CRYPTODEV: Creating cryptodev 0000:1a:02.1_qat_sym 00:06:49.447 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.1_qat_sym,socket id: 0, max queue pairs: 
0 00:06:49.447 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.2 (socket 0) 00:06:49.447 CRYPTODEV: Creating cryptodev 0000:1a:02.2_qat_asym 00:06:49.447 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.447 CRYPTODEV: Creating cryptodev 0000:1a:02.2_qat_sym 00:06:49.447 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.447 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.3 (socket 0) 00:06:49.447 CRYPTODEV: Creating cryptodev 0000:1a:02.3_qat_asym 00:06:49.447 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.447 CRYPTODEV: Creating cryptodev 0000:1a:02.3_qat_sym 00:06:49.447 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.447 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.4 (socket 0) 00:06:49.447 CRYPTODEV: Creating cryptodev 0000:1a:02.4_qat_asym 00:06:49.447 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.447 CRYPTODEV: Creating cryptodev 0000:1a:02.4_qat_sym 00:06:49.447 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.447 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.5 (socket 0) 00:06:49.447 CRYPTODEV: Creating cryptodev 0000:1a:02.5_qat_asym 00:06:49.447 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.447 CRYPTODEV: Creating cryptodev 0000:1a:02.5_qat_sym 00:06:49.447 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.447 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.6 (socket 0) 00:06:49.447 CRYPTODEV: Creating cryptodev 0000:1a:02.6_qat_asym 00:06:49.447 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.6_qat_asym,socket id: 0, 
max queue pairs: 0 00:06:49.447 CRYPTODEV: Creating cryptodev 0000:1a:02.6_qat_sym 00:06:49.447 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.447 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.7 (socket 0) 00:06:49.447 CRYPTODEV: Creating cryptodev 0000:1a:02.7_qat_asym 00:06:49.447 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.447 CRYPTODEV: Creating cryptodev 0000:1a:02.7_qat_sym 00:06:49.447 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.447 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.0 (socket 0) 00:06:49.447 CRYPTODEV: Creating cryptodev 0000:1c:01.0_qat_asym 00:06:49.447 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.447 CRYPTODEV: Creating cryptodev 0000:1c:01.0_qat_sym 00:06:49.447 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.447 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.1 (socket 0) 00:06:49.447 CRYPTODEV: Creating cryptodev 0000:1c:01.1_qat_asym 00:06:49.447 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.447 CRYPTODEV: Creating cryptodev 0000:1c:01.1_qat_sym 00:06:49.447 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.447 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.2 (socket 0) 00:06:49.447 CRYPTODEV: Creating cryptodev 0000:1c:01.2_qat_asym 00:06:49.447 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.447 CRYPTODEV: Creating cryptodev 0000:1c:01.2_qat_sym 00:06:49.447 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.447 EAL: Probe PCI driver: qat (8086:37c9) 
device: 0000:1c:01.3 (socket 0) 00:06:49.447 CRYPTODEV: Creating cryptodev 0000:1c:01.3_qat_asym 00:06:49.447 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.447 CRYPTODEV: Creating cryptodev 0000:1c:01.3_qat_sym 00:06:49.447 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.447 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.4 (socket 0) 00:06:49.447 CRYPTODEV: Creating cryptodev 0000:1c:01.4_qat_asym 00:06:49.447 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.447 CRYPTODEV: Creating cryptodev 0000:1c:01.4_qat_sym 00:06:49.447 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.447 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.5 (socket 0) 00:06:49.447 CRYPTODEV: Creating cryptodev 0000:1c:01.5_qat_asym 00:06:49.447 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.447 CRYPTODEV: Creating cryptodev 0000:1c:01.5_qat_sym 00:06:49.447 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.447 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.6 (socket 0) 00:06:49.447 CRYPTODEV: Creating cryptodev 0000:1c:01.6_qat_asym 00:06:49.447 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.447 CRYPTODEV: Creating cryptodev 0000:1c:01.6_qat_sym 00:06:49.447 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.447 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.7 (socket 0) 00:06:49.447 CRYPTODEV: Creating cryptodev 0000:1c:01.7_qat_asym 00:06:49.447 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.447 CRYPTODEV: Creating 
cryptodev 0000:1c:01.7_qat_sym 00:06:49.447 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.447 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.0 (socket 0) 00:06:49.447 CRYPTODEV: Creating cryptodev 0000:1c:02.0_qat_asym 00:06:49.447 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.447 CRYPTODEV: Creating cryptodev 0000:1c:02.0_qat_sym 00:06:49.447 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.447 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.1 (socket 0) 00:06:49.447 CRYPTODEV: Creating cryptodev 0000:1c:02.1_qat_asym 00:06:49.447 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.447 CRYPTODEV: Creating cryptodev 0000:1c:02.1_qat_sym 00:06:49.447 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.447 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.2 (socket 0) 00:06:49.447 CRYPTODEV: Creating cryptodev 0000:1c:02.2_qat_asym 00:06:49.447 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.447 CRYPTODEV: Creating cryptodev 0000:1c:02.2_qat_sym 00:06:49.447 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.447 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.3 (socket 0) 00:06:49.447 CRYPTODEV: Creating cryptodev 0000:1c:02.3_qat_asym 00:06:49.447 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.447 CRYPTODEV: Creating cryptodev 0000:1c:02.3_qat_sym 00:06:49.447 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.447 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.4 (socket 0) 00:06:49.447 
CRYPTODEV: Creating cryptodev 0000:1c:02.4_qat_asym 00:06:49.447 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.447 CRYPTODEV: Creating cryptodev 0000:1c:02.4_qat_sym 00:06:49.447 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.447 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.5 (socket 0) 00:06:49.447 CRYPTODEV: Creating cryptodev 0000:1c:02.5_qat_asym 00:06:49.447 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.447 CRYPTODEV: Creating cryptodev 0000:1c:02.5_qat_sym 00:06:49.447 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.447 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.6 (socket 0) 00:06:49.447 CRYPTODEV: Creating cryptodev 0000:1c:02.6_qat_asym 00:06:49.447 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.447 CRYPTODEV: Creating cryptodev 0000:1c:02.6_qat_sym 00:06:49.447 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.447 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.7 (socket 0) 00:06:49.447 CRYPTODEV: Creating cryptodev 0000:1c:02.7_qat_asym 00:06:49.447 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.447 CRYPTODEV: Creating cryptodev 0000:1c:02.7_qat_sym 00:06:49.447 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.447 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.0 (socket 0) 00:06:49.447 CRYPTODEV: Creating cryptodev 0000:1e:01.0_qat_asym 00:06:49.447 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.447 CRYPTODEV: Creating cryptodev 0000:1e:01.0_qat_sym 00:06:49.448 
CRYPTODEV: Initialisation parameters - name: 0000:1e:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.448 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.1 (socket 0) 00:06:49.448 CRYPTODEV: Creating cryptodev 0000:1e:01.1_qat_asym 00:06:49.448 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.448 CRYPTODEV: Creating cryptodev 0000:1e:01.1_qat_sym 00:06:49.448 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.448 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.2 (socket 0) 00:06:49.448 CRYPTODEV: Creating cryptodev 0000:1e:01.2_qat_asym 00:06:49.448 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.448 CRYPTODEV: Creating cryptodev 0000:1e:01.2_qat_sym 00:06:49.448 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.448 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.3 (socket 0) 00:06:49.448 CRYPTODEV: Creating cryptodev 0000:1e:01.3_qat_asym 00:06:49.448 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.448 CRYPTODEV: Creating cryptodev 0000:1e:01.3_qat_sym 00:06:49.448 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.448 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.4 (socket 0) 00:06:49.448 CRYPTODEV: Creating cryptodev 0000:1e:01.4_qat_asym 00:06:49.448 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.448 CRYPTODEV: Creating cryptodev 0000:1e:01.4_qat_sym 00:06:49.448 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.448 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.5 (socket 0) 00:06:49.448 CRYPTODEV: Creating cryptodev 0000:1e:01.5_qat_asym 
00:06:49.448 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.448 CRYPTODEV: Creating cryptodev 0000:1e:01.5_qat_sym 00:06:49.448 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.448 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.6 (socket 0) 00:06:49.448 CRYPTODEV: Creating cryptodev 0000:1e:01.6_qat_asym 00:06:49.448 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.448 CRYPTODEV: Creating cryptodev 0000:1e:01.6_qat_sym 00:06:49.448 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.448 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.7 (socket 0) 00:06:49.448 CRYPTODEV: Creating cryptodev 0000:1e:01.7_qat_asym 00:06:49.448 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.448 CRYPTODEV: Creating cryptodev 0000:1e:01.7_qat_sym 00:06:49.448 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.448 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.0 (socket 0) 00:06:49.448 CRYPTODEV: Creating cryptodev 0000:1e:02.0_qat_asym 00:06:49.448 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.448 CRYPTODEV: Creating cryptodev 0000:1e:02.0_qat_sym 00:06:49.448 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.448 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.1 (socket 0) 00:06:49.448 CRYPTODEV: Creating cryptodev 0000:1e:02.1_qat_asym 00:06:49.448 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.448 CRYPTODEV: Creating cryptodev 0000:1e:02.1_qat_sym 00:06:49.448 CRYPTODEV: Initialisation parameters - name: 
0000:1e:02.1_qat_sym,socket id: 0, max queue pairs: 0
00:06:49.448 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.2 (socket 0)
00:06:49.448 CRYPTODEV: Creating cryptodev 0000:1e:02.2_qat_asym
00:06:49.448 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.2_qat_asym,socket id: 0, max queue pairs: 0
00:06:49.448 CRYPTODEV: Creating cryptodev 0000:1e:02.2_qat_sym
00:06:49.448 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.2_qat_sym,socket id: 0, max queue pairs: 0
00:06:49.448 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.3 (socket 0)
00:06:49.448 CRYPTODEV: Creating cryptodev 0000:1e:02.3_qat_asym
00:06:49.448 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.3_qat_asym,socket id: 0, max queue pairs: 0
00:06:49.448 CRYPTODEV: Creating cryptodev 0000:1e:02.3_qat_sym
00:06:49.448 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.3_qat_sym,socket id: 0, max queue pairs: 0
00:06:49.448 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.4 (socket 0)
00:06:49.448 CRYPTODEV: Creating cryptodev 0000:1e:02.4_qat_asym
00:06:49.448 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.4_qat_asym,socket id: 0, max queue pairs: 0
00:06:49.448 CRYPTODEV: Creating cryptodev 0000:1e:02.4_qat_sym
00:06:49.448 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.4_qat_sym,socket id: 0, max queue pairs: 0
00:06:49.448 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.5 (socket 0)
00:06:49.448 CRYPTODEV: Creating cryptodev 0000:1e:02.5_qat_asym
00:06:49.448 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.5_qat_asym,socket id: 0, max queue pairs: 0
00:06:49.448 CRYPTODEV: Creating cryptodev 0000:1e:02.5_qat_sym
00:06:49.448 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.5_qat_sym,socket id: 0, max queue pairs: 0
00:06:49.448 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.6 (socket 0)
00:06:49.448 CRYPTODEV: Creating cryptodev 0000:1e:02.6_qat_asym
00:06:49.448 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.6_qat_asym,socket id: 0, max queue pairs: 0
00:06:49.448 CRYPTODEV: Creating cryptodev 0000:1e:02.6_qat_sym
00:06:49.448 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.6_qat_sym,socket id: 0, max queue pairs: 0
00:06:49.448 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.7 (socket 0)
00:06:49.448 CRYPTODEV: Creating cryptodev 0000:1e:02.7_qat_asym
00:06:49.448 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.7_qat_asym,socket id: 0, max queue pairs: 0
00:06:49.448 CRYPTODEV: Creating cryptodev 0000:1e:02.7_qat_sym
00:06:49.448 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.7_qat_sym,socket id: 0, max queue pairs: 0
00:06:49.448 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0)
00:06:49.448 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.448 EAL: Requested device 0000:3d:01.0 cannot be used
00:06:49.448 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.1 (socket 0)
00:06:49.448 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.448 EAL: Requested device 0000:3d:01.1 cannot be used
00:06:49.448 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0)
00:06:49.448 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.448 EAL: Requested device 0000:3d:01.2 cannot be used
00:06:49.448 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0)
00:06:49.448 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.448 EAL: Requested device 0000:3d:01.3 cannot be used
00:06:49.448 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0)
00:06:49.448 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.448 EAL: Requested device 0000:3d:01.4 cannot be used
00:06:49.448 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0)
00:06:49.448 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.448 EAL: Requested device 0000:3d:01.5 cannot be used
00:06:49.448 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0)
00:06:49.448 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.448 EAL: Requested device 0000:3d:01.6 cannot be used
00:06:49.448 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0)
00:06:49.448 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.448 EAL: Requested device 0000:3d:01.7 cannot be used
00:06:49.448 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0)
00:06:49.448 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.448 EAL: Requested device 0000:3d:02.0 cannot be used
00:06:49.448 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0)
00:06:49.448 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.448 EAL: Requested device 0000:3d:02.1 cannot be used
00:06:49.448 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0)
00:06:49.448 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.448 EAL: Requested device 0000:3d:02.2 cannot be used
00:06:49.448 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.3 (socket 0)
00:06:49.448 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.448 EAL: Requested device 0000:3d:02.3 cannot be used
00:06:49.448 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.4 (socket 0)
00:06:49.448 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.448 EAL: Requested device 0000:3d:02.4 cannot be used
00:06:49.448 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.5 (socket 0)
00:06:49.448 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.448 EAL: Requested device 0000:3d:02.5 cannot be used
00:06:49.448 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.6 (socket 0)
00:06:49.448 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.448 EAL: Requested device 0000:3d:02.6 cannot be used
00:06:49.448 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.7 (socket 0)
00:06:49.448 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.448 EAL: Requested device 0000:3d:02.7 cannot be used
00:06:49.448 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.0 (socket 0)
00:06:49.448 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.448 EAL: Requested device 0000:3f:01.0 cannot be used
00:06:49.448 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.1 (socket 0)
00:06:49.448 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.448 EAL: Requested device 0000:3f:01.1 cannot be used
00:06:49.448 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.2 (socket 0)
00:06:49.448 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.448 EAL: Requested device 0000:3f:01.2 cannot be used
00:06:49.448 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.3 (socket 0)
00:06:49.448 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.448 EAL: Requested device 0000:3f:01.3 cannot be used
00:06:49.448 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.4 (socket 0)
00:06:49.449 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.449 EAL: Requested device 0000:3f:01.4 cannot be used
00:06:49.449 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.5 (socket 0)
00:06:49.449 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.449 EAL: Requested device 0000:3f:01.5 cannot be used
00:06:49.449 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.6 (socket 0)
00:06:49.449 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.449 EAL: Requested device 0000:3f:01.6 cannot be used
00:06:49.449 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.7 (socket 0)
00:06:49.449 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.449 EAL: Requested device 0000:3f:01.7 cannot be used
00:06:49.449 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.0 (socket 0)
00:06:49.449 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.449 EAL: Requested device 0000:3f:02.0 cannot be used
00:06:49.449 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.1 (socket 0)
00:06:49.449 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.449 EAL: Requested device 0000:3f:02.1 cannot be used
00:06:49.449 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.2 (socket 0)
00:06:49.449 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.449 EAL: Requested device 0000:3f:02.2 cannot be used
00:06:49.449 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.3 (socket 0)
00:06:49.449 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.449 EAL: Requested device 0000:3f:02.3 cannot be used
00:06:49.449 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.4 (socket 0)
00:06:49.449 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.449 EAL: Requested device 0000:3f:02.4 cannot be used
00:06:49.449 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.5 (socket 0)
00:06:49.449 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.449 EAL: Requested device 0000:3f:02.5 cannot be used
00:06:49.449 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.6 (socket 0)
00:06:49.449 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.449 EAL: Requested device 0000:3f:02.6 cannot be used
00:06:49.449 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.7 (socket 0)
00:06:49.449 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.449 EAL: Requested device 0000:3f:02.7 cannot be used
00:06:49.449 TELEMETRY: No legacy callbacks, legacy socket not created
00:06:49.449
00:06:49.449
00:06:49.449 CUnit - A unit testing framework for C - Version 2.1-3
00:06:49.449 http://cunit.sourceforge.net/
00:06:49.449
00:06:49.449
00:06:49.449 Suite: memory
00:06:49.449 Test: test ...
00:06:49.449 register 0x200000200000 2097152
00:06:49.449 malloc 3145728
00:06:49.449 register 0x200000400000 4194304
00:06:49.449 buf 0x2000004fffc0 len 3145728 PASSED
00:06:49.449 malloc 64
00:06:49.449 buf 0x2000004ffec0 len 64 PASSED
00:06:49.449 malloc 4194304
00:06:49.449 register 0x200000800000 6291456
00:06:49.449 buf 0x2000009fffc0 len 4194304 PASSED
00:06:49.449 free 0x2000004fffc0 3145728
00:06:49.449 free 0x2000004ffec0 64
00:06:49.449 unregister 0x200000400000 4194304 PASSED
00:06:49.449 free 0x2000009fffc0 4194304
00:06:49.449 unregister 0x200000800000 6291456 PASSED
00:06:49.449 malloc 8388608
00:06:49.449 register 0x200000400000 10485760
00:06:49.449 buf 0x2000005fffc0 len 8388608 PASSED
00:06:49.449 free 0x2000005fffc0 8388608
00:06:49.449 unregister 0x200000400000 10485760 PASSED
00:06:49.449 passed
00:06:49.449
00:06:49.449 Run Summary: Type Total Ran Passed Failed Inactive
00:06:49.449 suites 1 1 n/a 0 0
00:06:49.449 tests 1 1 1 0 0
00:06:49.449 asserts 15 15 15 0 n/a
00:06:49.449
00:06:49.449 Elapsed time = 0.089 seconds
00:06:49.449
00:06:49.449 real 0m0.297s
00:06:49.449 user 0m0.162s
00:06:49.449 sys 0m0.133s
00:06:49.449 04:01:58 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:49.449 04:01:58 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:06:49.449 ************************************
00:06:49.449 END TEST env_mem_callbacks
00:06:49.449 ************************************
00:06:49.708 04:01:58 env -- common/autotest_common.sh@1142 -- # return 0
00:06:49.708
00:06:49.708 real 0m19.158s
00:06:49.708 user 0m15.931s
00:06:49.708 sys 0m2.230s
00:06:49.708 04:01:58 env -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:49.708 04:01:58 env -- common/autotest_common.sh@10 -- # set +x
00:06:49.708 ************************************
00:06:49.708 END TEST env
00:06:49.708 ************************************
00:06:49.708 04:01:58 -- common/autotest_common.sh@1142 -- # return 0
00:06:49.708 04:01:58 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/rpc.sh
00:06:49.708 04:01:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:49.708 04:01:58 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:49.708 04:01:58 -- common/autotest_common.sh@10 -- # set +x
00:06:49.708 ************************************
00:06:49.708 START TEST rpc
00:06:49.708 ************************************
00:06:49.708 04:01:58 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/rpc.sh
00:06:49.708 * Looking for test storage...
00:06:49.708 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc
00:06:49.708 04:01:58 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2533639
00:06:49.708 04:01:58 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:06:49.708 04:01:58 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:06:49.708 04:01:58 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2533639
00:06:49.708 04:01:58 rpc -- common/autotest_common.sh@829 -- # '[' -z 2533639 ']'
00:06:49.708 04:01:58 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:49.708 04:01:58 rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:49.708 04:01:58 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:49.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:49.709 04:01:58 rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:49.709 04:01:58 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:49.968 [2024-07-23 04:01:58.542012] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
00:06:49.968 [2024-07-23 04:01:58.542102] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2533639 ]
00:06:49.968 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.968 EAL: Requested device 0000:3d:01.0 cannot be used
00:06:49.968 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.968 EAL: Requested device 0000:3d:01.1 cannot be used
00:06:49.968 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.968 EAL: Requested device 0000:3d:01.2 cannot be used
00:06:49.968 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.968 EAL: Requested device 0000:3d:01.3 cannot be used
00:06:49.968 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.968 EAL: Requested device 0000:3d:01.4 cannot be used
00:06:49.968 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.968 EAL: Requested device 0000:3d:01.5 cannot be used
00:06:49.968 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.968 EAL: Requested device 0000:3d:01.6 cannot be used
00:06:49.968 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.968 EAL: Requested device 0000:3d:01.7 cannot be used
00:06:49.968 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.968 EAL: Requested device 0000:3d:02.0 cannot be used
00:06:49.968 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.968 EAL: Requested device 0000:3d:02.1 cannot be used
00:06:49.968 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.968 EAL: Requested device 0000:3d:02.2 cannot be used
00:06:49.968 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.968 EAL: Requested device 0000:3d:02.3 cannot be used
00:06:49.968 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.968 EAL: Requested device 0000:3d:02.4 cannot be used
00:06:49.968 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.968 EAL: Requested device 0000:3d:02.5 cannot be used
00:06:49.968 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.968 EAL: Requested device 0000:3d:02.6 cannot be used
00:06:49.968 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.968 EAL: Requested device 0000:3d:02.7 cannot be used
00:06:49.968 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.968 EAL: Requested device 0000:3f:01.0 cannot be used
00:06:49.968 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.968 EAL: Requested device 0000:3f:01.1 cannot be used
00:06:49.968 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.968 EAL: Requested device 0000:3f:01.2 cannot be used
00:06:49.968 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.968 EAL: Requested device 0000:3f:01.3 cannot be used
00:06:49.968 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.968 EAL: Requested device 0000:3f:01.4 cannot be used
00:06:49.968 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.968 EAL: Requested device 0000:3f:01.5 cannot be used
00:06:49.968 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.968 EAL: Requested device 0000:3f:01.6 cannot be used
00:06:49.968 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.968 EAL: Requested device 0000:3f:01.7 cannot be used
00:06:49.968 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.968 EAL: Requested device 0000:3f:02.0 cannot be used
00:06:49.968 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.968 EAL: Requested device 0000:3f:02.1 cannot be used
00:06:49.968 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.968 EAL: Requested device 0000:3f:02.2 cannot be used
00:06:49.968 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.968 EAL: Requested device 0000:3f:02.3 cannot be used
00:06:49.968 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.968 EAL: Requested device 0000:3f:02.4 cannot be used
00:06:49.969 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.969 EAL: Requested device 0000:3f:02.5 cannot be used
00:06:49.969 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.969 EAL: Requested device 0000:3f:02.6 cannot be used
00:06:49.969 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:49.969 EAL: Requested device 0000:3f:02.7 cannot be used
00:06:49.969 [2024-07-23 04:01:58.727471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:50.228 [2024-07-23 04:01:58.986746] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:06:50.228 [2024-07-23 04:01:58.986805] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2533639' to capture a snapshot of events at runtime.
00:06:50.228 [2024-07-23 04:01:58.986823] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:06:50.228 [2024-07-23 04:01:58.986841] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:06:50.228 [2024-07-23 04:01:58.986854] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2533639 for offline analysis/debug.
00:06:50.228 [2024-07-23 04:01:58.986900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:51.607 04:02:00 rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:51.608 04:02:00 rpc -- common/autotest_common.sh@862 -- # return 0
00:06:51.608 04:02:00 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc
00:06:51.608 04:02:00 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc
00:06:51.608 04:02:00 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:06:51.608 04:02:00 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:06:51.608 04:02:00 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:51.608 04:02:00 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:51.608 04:02:00 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:51.608 ************************************
00:06:51.608 START TEST rpc_integrity
00:06:51.608 ************************************
00:06:51.608 04:02:00 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity
00:06:51.608 04:02:00 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:06:51.608 04:02:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:51.608 04:02:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:51.608 04:02:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:51.608 04:02:00 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:06:51.608 04:02:00 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:06:51.608 04:02:00 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:06:51.608 04:02:00 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:06:51.608 04:02:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:51.608 04:02:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:51.608 04:02:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:51.608 04:02:00 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:06:51.608 04:02:00 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:06:51.608 04:02:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:51.608 04:02:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:51.608 04:02:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:51.608 04:02:00 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:06:51.608 {
00:06:51.608 "name": "Malloc0",
00:06:51.608 "aliases": [
00:06:51.608 "e64c329c-9c6d-48bb-ad05-622be4eea398"
00:06:51.608 ],
00:06:51.608 "product_name": "Malloc disk",
00:06:51.608 "block_size": 512,
00:06:51.608 "num_blocks": 16384,
00:06:51.608 "uuid": "e64c329c-9c6d-48bb-ad05-622be4eea398",
00:06:51.608 "assigned_rate_limits": {
00:06:51.608 "rw_ios_per_sec": 0,
00:06:51.608 "rw_mbytes_per_sec": 0,
00:06:51.608 "r_mbytes_per_sec": 0,
00:06:51.608 "w_mbytes_per_sec": 0
00:06:51.608 },
00:06:51.608 "claimed": false,
00:06:51.608 "zoned": false,
00:06:51.608 "supported_io_types": {
00:06:51.608 "read": true,
00:06:51.608 "write": true,
00:06:51.608 "unmap": true,
00:06:51.608 "flush": true,
00:06:51.608 "reset": true,
00:06:51.608 "nvme_admin": false,
00:06:51.608 "nvme_io": false,
00:06:51.608 "nvme_io_md": false,
00:06:51.608 "write_zeroes": true,
00:06:51.608 "zcopy": true,
00:06:51.608 "get_zone_info": false,
00:06:51.608 "zone_management": false,
00:06:51.608 "zone_append": false,
00:06:51.608 "compare": false,
00:06:51.608 "compare_and_write": false,
00:06:51.608 "abort": true,
00:06:51.608 "seek_hole": false,
00:06:51.608 "seek_data": false,
00:06:51.608 "copy": true,
00:06:51.608 "nvme_iov_md": false
00:06:51.608 },
00:06:51.608 "memory_domains": [
00:06:51.608 {
00:06:51.608 "dma_device_id": "system",
00:06:51.608 "dma_device_type": 1
00:06:51.608 },
00:06:51.608 {
00:06:51.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:51.608 "dma_device_type": 2
00:06:51.608 }
00:06:51.608 ],
00:06:51.608 "driver_specific": {}
00:06:51.608 }
00:06:51.608 ]'
00:06:51.608 04:02:00 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:06:51.608 04:02:00 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:06:51.608 04:02:00 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:06:51.608 04:02:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:51.608 04:02:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:51.608 [2024-07-23 04:02:00.355251] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:06:51.608 [2024-07-23 04:02:00.355326] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:51.608 [2024-07-23 04:02:00.355356] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003fc80
00:06:51.608 [2024-07-23 04:02:00.355375] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:51.608 [2024-07-23 04:02:00.358165] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:51.608 [2024-07-23 04:02:00.358215] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:06:51.608 Passthru0
00:06:51.608 04:02:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:51.608 04:02:00 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:06:51.608 04:02:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:51.608 04:02:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:51.868 04:02:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:51.868 04:02:00 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:06:51.868 {
00:06:51.868 "name": "Malloc0",
00:06:51.868 "aliases": [
00:06:51.868 "e64c329c-9c6d-48bb-ad05-622be4eea398"
00:06:51.868 ],
00:06:51.868 "product_name": "Malloc disk",
00:06:51.868 "block_size": 512,
00:06:51.868 "num_blocks": 16384,
00:06:51.868 "uuid": "e64c329c-9c6d-48bb-ad05-622be4eea398",
00:06:51.868 "assigned_rate_limits": {
00:06:51.868 "rw_ios_per_sec": 0,
00:06:51.868 "rw_mbytes_per_sec": 0,
00:06:51.868 "r_mbytes_per_sec": 0,
00:06:51.868 "w_mbytes_per_sec": 0
00:06:51.868 },
00:06:51.868 "claimed": true,
00:06:51.868 "claim_type": "exclusive_write",
00:06:51.868 "zoned": false,
00:06:51.868 "supported_io_types": {
00:06:51.868 "read": true,
00:06:51.868 "write": true,
00:06:51.868 "unmap": true,
00:06:51.868 "flush": true,
00:06:51.868 "reset": true,
00:06:51.868 "nvme_admin": false,
00:06:51.868 "nvme_io": false,
00:06:51.868 "nvme_io_md": false,
00:06:51.868 "write_zeroes": true,
00:06:51.868 "zcopy": true,
00:06:51.868 "get_zone_info": false,
00:06:51.868 "zone_management": false,
00:06:51.868 "zone_append": false,
00:06:51.868 "compare": false,
00:06:51.868 "compare_and_write": false,
00:06:51.868 "abort": true,
00:06:51.868 "seek_hole": false,
00:06:51.868 "seek_data": false,
00:06:51.868 "copy": true,
00:06:51.868 "nvme_iov_md": false
00:06:51.868 },
00:06:51.868 "memory_domains": [
00:06:51.868 {
00:06:51.868 "dma_device_id": "system",
00:06:51.868 "dma_device_type": 1
00:06:51.868 },
00:06:51.868 {
00:06:51.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:51.868 "dma_device_type": 2
00:06:51.868 }
00:06:51.868 ],
00:06:51.868 "driver_specific": {}
00:06:51.868 },
00:06:51.868 {
00:06:51.868 "name": "Passthru0",
00:06:51.868 "aliases": [
00:06:51.868 "2e35d960-7778-5a43-b01f-617391775c22"
00:06:51.868 ],
00:06:51.868 "product_name": "passthru",
00:06:51.868 "block_size": 512,
00:06:51.868 "num_blocks": 16384,
00:06:51.868 "uuid": "2e35d960-7778-5a43-b01f-617391775c22",
00:06:51.868 "assigned_rate_limits": {
00:06:51.868 "rw_ios_per_sec": 0,
00:06:51.868 "rw_mbytes_per_sec": 0,
00:06:51.868 "r_mbytes_per_sec": 0,
00:06:51.868 "w_mbytes_per_sec": 0
00:06:51.868 },
00:06:51.868 "claimed": false,
00:06:51.868 "zoned": false,
00:06:51.868 "supported_io_types": {
00:06:51.868 "read": true,
00:06:51.868 "write": true,
00:06:51.868 "unmap": true,
00:06:51.868 "flush": true,
00:06:51.868 "reset": true,
00:06:51.868 "nvme_admin": false,
00:06:51.868 "nvme_io": false,
00:06:51.868 "nvme_io_md": false,
00:06:51.868 "write_zeroes": true,
00:06:51.868 "zcopy": true,
00:06:51.868 "get_zone_info": false,
00:06:51.868 "zone_management": false,
00:06:51.868 "zone_append": false,
00:06:51.868 "compare": false,
00:06:51.868 "compare_and_write": false,
00:06:51.868 "abort": true,
00:06:51.868 "seek_hole": false,
00:06:51.868 "seek_data": false,
00:06:51.868 "copy": true,
00:06:51.868 "nvme_iov_md": false
00:06:51.868 },
00:06:51.868 "memory_domains": [
00:06:51.868 {
00:06:51.868 "dma_device_id": "system",
00:06:51.868 "dma_device_type": 1
00:06:51.868 },
00:06:51.868 {
00:06:51.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:51.868 "dma_device_type": 2
00:06:51.868 }
00:06:51.868 ],
00:06:51.868 "driver_specific": {
00:06:51.868 "passthru": {
00:06:51.868 "name": "Passthru0",
00:06:51.868 "base_bdev_name": "Malloc0"
00:06:51.868 }
00:06:51.868 }
00:06:51.868 }
00:06:51.868 ]'
00:06:51.868 04:02:00 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:06:51.868 04:02:00 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:06:51.868 04:02:00 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:06:51.868 04:02:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:51.868 04:02:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:51.868 04:02:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:51.868 04:02:00 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:06:51.868 04:02:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:51.868 04:02:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:51.868 04:02:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:51.868 04:02:00 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:06:51.868 04:02:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:51.868 04:02:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:51.868 04:02:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:51.868 04:02:00 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:06:51.868 04:02:00 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:06:51.868 04:02:00 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:06:51.868
00:06:51.868 real 0m0.316s
00:06:51.868 user 0m0.188s
00:06:51.868 sys 0m0.053s
00:06:51.868 04:02:00 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:51.868 04:02:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:51.868 ************************************
00:06:51.868 END TEST rpc_integrity
00:06:51.868 ************************************
00:06:51.868 04:02:00 rpc -- common/autotest_common.sh@1142 -- # return 0
00:06:51.868 04:02:00 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:06:51.868 04:02:00 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:51.868 04:02:00 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:51.868 04:02:00 rpc -- common/autotest_common.sh@10 -- # set +x
************************************
00:06:51.868 START TEST rpc_plugins
00:06:51.868 ************************************
00:06:51.868 04:02:00 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins
00:06:51.868 04:02:00 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:06:51.868 04:02:00 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:51.868 04:02:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:51.868 04:02:00 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:51.868 04:02:00 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:06:51.868 04:02:00 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:06:51.868 04:02:00 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:51.868 04:02:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:51.868 04:02:00 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:51.868 04:02:00 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:06:51.868 {
00:06:51.868 "name": "Malloc1",
00:06:51.868 "aliases": [
00:06:51.868 "3c6aa560-f4c9-40d0-860d-270bd27decea"
00:06:51.868 ],
00:06:51.868 "product_name": "Malloc disk",
00:06:51.868 "block_size": 4096,
00:06:51.868 "num_blocks": 256,
00:06:51.868 "uuid": "3c6aa560-f4c9-40d0-860d-270bd27decea",
00:06:51.868 "assigned_rate_limits": {
00:06:51.868 "rw_ios_per_sec": 0,
00:06:51.868 "rw_mbytes_per_sec": 0,
00:06:51.868 "r_mbytes_per_sec": 0,
00:06:51.868 "w_mbytes_per_sec": 0
00:06:51.868 },
00:06:51.868 "claimed": false,
00:06:51.868 "zoned": false,
00:06:51.868 "supported_io_types": {
00:06:51.868 "read": true,
00:06:51.868 "write": true,
00:06:51.868 "unmap": true,
00:06:51.868 "flush": true,
00:06:51.868 "reset": true,
00:06:51.868 "nvme_admin": false,
00:06:51.868 "nvme_io": false,
00:06:51.868 "nvme_io_md": false,
00:06:51.868 "write_zeroes": true,
00:06:51.868 "zcopy": true,
00:06:51.868 "get_zone_info": false,
00:06:51.868 "zone_management": false,
00:06:51.868 "zone_append": false,
00:06:51.868 "compare": false,
00:06:51.868 "compare_and_write": false,
00:06:51.868 "abort": true,
00:06:51.868 "seek_hole": false,
00:06:51.868 "seek_data": false,
00:06:51.868 "copy": true,
00:06:51.868 "nvme_iov_md": false
00:06:51.868 },
00:06:51.868 "memory_domains": [
00:06:51.868 {
00:06:51.868 "dma_device_id": "system",
00:06:51.868 "dma_device_type": 1
00:06:51.868 },
00:06:51.868 {
00:06:51.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:51.868 "dma_device_type": 2
00:06:51.869 }
00:06:51.869 ],
00:06:51.869 "driver_specific": {}
00:06:51.869 }
00:06:51.869 ]'
00:06:51.869 04:02:00 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:06:52.128 04:02:00 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:06:52.128 04:02:00 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:06:52.128 04:02:00 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:52.128 04:02:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:52.128 04:02:00 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:52.128 04:02:00 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:06:52.128 04:02:00 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:52.128 04:02:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:52.128 04:02:00 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:52.128 04:02:00 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:06:52.128 04:02:00 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:06:52.128 04:02:00 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:06:52.128
00:06:52.128 real 0m0.144s
00:06:52.128 user 0m0.098s
00:06:52.128 sys 0m0.018s
00:06:52.128 04:02:00 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:52.128 04:02:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:52.128 ************************************
00:06:52.128 END TEST rpc_plugins
00:06:52.128 ************************************
00:06:52.128 04:02:00 rpc -- common/autotest_common.sh@1142 -- # return 0
00:06:52.128 04:02:00 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:06:52.128 04:02:00 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:52.128 04:02:00 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:52.128 04:02:00 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:52.128 ************************************
00:06:52.128 START TEST rpc_trace_cmd_test
00:06:52.128 ************************************
00:06:52.128 04:02:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test
00:06:52.128 04:02:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:06:52.128 04:02:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:06:52.128 04:02:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:52.128 04:02:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:06:52.128 04:02:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:52.128 04:02:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:06:52.128 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2533639",
00:06:52.128 "tpoint_group_mask": "0x8",
00:06:52.128 "iscsi_conn": {
00:06:52.128 "mask": "0x2",
00:06:52.128 "tpoint_mask": "0x0"
00:06:52.128 },
00:06:52.128 "scsi": {
00:06:52.128 "mask": "0x4",
00:06:52.128 "tpoint_mask": "0x0"
00:06:52.128 },
00:06:52.128 "bdev": {
00:06:52.128 "mask": "0x8",
00:06:52.128 "tpoint_mask": "0xffffffffffffffff"
00:06:52.128 },
00:06:52.128 "nvmf_rdma": {
00:06:52.128 "mask": "0x10",
00:06:52.128 "tpoint_mask": "0x0"
00:06:52.128 },
00:06:52.128 "nvmf_tcp": {
00:06:52.128 "mask": "0x20",
00:06:52.128 "tpoint_mask": "0x0"
00:06:52.128 },
00:06:52.128 "ftl": { 00:06:52.128 "mask": "0x40", 00:06:52.128 "tpoint_mask": "0x0" 00:06:52.128 }, 00:06:52.128 "blobfs": { 00:06:52.128 "mask": "0x80", 00:06:52.128 "tpoint_mask": "0x0" 00:06:52.128 }, 00:06:52.128 "dsa": { 00:06:52.128 "mask": "0x200", 00:06:52.128 "tpoint_mask": "0x0" 00:06:52.128 }, 00:06:52.128 "thread": { 00:06:52.128 "mask": "0x400", 00:06:52.128 "tpoint_mask": "0x0" 00:06:52.128 }, 00:06:52.128 "nvme_pcie": { 00:06:52.128 "mask": "0x800", 00:06:52.128 "tpoint_mask": "0x0" 00:06:52.128 }, 00:06:52.128 "iaa": { 00:06:52.128 "mask": "0x1000", 00:06:52.128 "tpoint_mask": "0x0" 00:06:52.128 }, 00:06:52.128 "nvme_tcp": { 00:06:52.128 "mask": "0x2000", 00:06:52.128 "tpoint_mask": "0x0" 00:06:52.128 }, 00:06:52.128 "bdev_nvme": { 00:06:52.128 "mask": "0x4000", 00:06:52.128 "tpoint_mask": "0x0" 00:06:52.128 }, 00:06:52.128 "sock": { 00:06:52.128 "mask": "0x8000", 00:06:52.128 "tpoint_mask": "0x0" 00:06:52.128 } 00:06:52.128 }' 00:06:52.128 04:02:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:52.128 04:02:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:52.128 04:02:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:52.387 04:02:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:52.387 04:02:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:52.387 04:02:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:52.387 04:02:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:52.387 04:02:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:52.387 04:02:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:52.387 04:02:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:52.387 00:06:52.387 real 0m0.218s 00:06:52.387 user 0m0.181s 00:06:52.387 sys 0m0.029s 00:06:52.387 04:02:01 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.387 04:02:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.387 ************************************ 00:06:52.387 END TEST rpc_trace_cmd_test 00:06:52.388 ************************************ 00:06:52.388 04:02:01 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:52.388 04:02:01 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:52.388 04:02:01 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:52.388 04:02:01 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:52.388 04:02:01 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:52.388 04:02:01 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.388 04:02:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.388 ************************************ 00:06:52.388 START TEST rpc_daemon_integrity 00:06:52.388 ************************************ 00:06:52.388 04:02:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:06:52.388 04:02:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:52.388 04:02:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.388 04:02:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:52.388 04:02:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.388 04:02:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:52.388 04:02:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:52.388 04:02:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:52.388 04:02:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:52.388 04:02:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.388 04:02:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:52.647 04:02:01 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.647 04:02:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:52.647 04:02:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:52.647 04:02:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.647 04:02:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:52.647 04:02:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.647 04:02:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:52.647 { 00:06:52.647 "name": "Malloc2", 00:06:52.647 "aliases": [ 00:06:52.647 "6ed71538-f72d-4fea-a378-bc29e6ec629e" 00:06:52.647 ], 00:06:52.647 "product_name": "Malloc disk", 00:06:52.647 "block_size": 512, 00:06:52.647 "num_blocks": 16384, 00:06:52.647 "uuid": "6ed71538-f72d-4fea-a378-bc29e6ec629e", 00:06:52.647 "assigned_rate_limits": { 00:06:52.647 "rw_ios_per_sec": 0, 00:06:52.647 "rw_mbytes_per_sec": 0, 00:06:52.647 "r_mbytes_per_sec": 0, 00:06:52.647 "w_mbytes_per_sec": 0 00:06:52.647 }, 00:06:52.647 "claimed": false, 00:06:52.647 "zoned": false, 00:06:52.647 "supported_io_types": { 00:06:52.647 "read": true, 00:06:52.647 "write": true, 00:06:52.647 "unmap": true, 00:06:52.647 "flush": true, 00:06:52.647 "reset": true, 00:06:52.647 "nvme_admin": false, 00:06:52.647 "nvme_io": false, 00:06:52.647 "nvme_io_md": false, 00:06:52.647 "write_zeroes": true, 00:06:52.647 "zcopy": true, 00:06:52.647 "get_zone_info": false, 00:06:52.647 "zone_management": false, 00:06:52.647 "zone_append": false, 00:06:52.647 "compare": false, 00:06:52.647 "compare_and_write": false, 00:06:52.647 "abort": true, 00:06:52.647 "seek_hole": false, 00:06:52.647 "seek_data": false, 00:06:52.647 "copy": true, 00:06:52.647 "nvme_iov_md": false 00:06:52.647 }, 00:06:52.647 "memory_domains": [ 00:06:52.647 { 00:06:52.647 "dma_device_id": "system", 00:06:52.647 "dma_device_type": 
1 00:06:52.647 }, 00:06:52.647 { 00:06:52.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:52.647 "dma_device_type": 2 00:06:52.647 } 00:06:52.647 ], 00:06:52.647 "driver_specific": {} 00:06:52.647 } 00:06:52.647 ]' 00:06:52.647 04:02:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:52.647 04:02:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:52.647 04:02:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:52.647 04:02:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.647 04:02:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:52.647 [2024-07-23 04:02:01.257592] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:52.647 [2024-07-23 04:02:01.257654] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:52.648 [2024-07-23 04:02:01.257679] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040e80 00:06:52.648 [2024-07-23 04:02:01.257698] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:52.648 [2024-07-23 04:02:01.260419] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:52.648 [2024-07-23 04:02:01.260455] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:52.648 Passthru0 00:06:52.648 04:02:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.648 04:02:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:52.648 04:02:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.648 04:02:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:52.648 04:02:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.648 04:02:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # 
bdevs='[ 00:06:52.648 { 00:06:52.648 "name": "Malloc2", 00:06:52.648 "aliases": [ 00:06:52.648 "6ed71538-f72d-4fea-a378-bc29e6ec629e" 00:06:52.648 ], 00:06:52.648 "product_name": "Malloc disk", 00:06:52.648 "block_size": 512, 00:06:52.648 "num_blocks": 16384, 00:06:52.648 "uuid": "6ed71538-f72d-4fea-a378-bc29e6ec629e", 00:06:52.648 "assigned_rate_limits": { 00:06:52.648 "rw_ios_per_sec": 0, 00:06:52.648 "rw_mbytes_per_sec": 0, 00:06:52.648 "r_mbytes_per_sec": 0, 00:06:52.648 "w_mbytes_per_sec": 0 00:06:52.648 }, 00:06:52.648 "claimed": true, 00:06:52.648 "claim_type": "exclusive_write", 00:06:52.648 "zoned": false, 00:06:52.648 "supported_io_types": { 00:06:52.648 "read": true, 00:06:52.648 "write": true, 00:06:52.648 "unmap": true, 00:06:52.648 "flush": true, 00:06:52.648 "reset": true, 00:06:52.648 "nvme_admin": false, 00:06:52.648 "nvme_io": false, 00:06:52.648 "nvme_io_md": false, 00:06:52.648 "write_zeroes": true, 00:06:52.648 "zcopy": true, 00:06:52.648 "get_zone_info": false, 00:06:52.648 "zone_management": false, 00:06:52.648 "zone_append": false, 00:06:52.648 "compare": false, 00:06:52.648 "compare_and_write": false, 00:06:52.648 "abort": true, 00:06:52.648 "seek_hole": false, 00:06:52.648 "seek_data": false, 00:06:52.648 "copy": true, 00:06:52.648 "nvme_iov_md": false 00:06:52.648 }, 00:06:52.648 "memory_domains": [ 00:06:52.648 { 00:06:52.648 "dma_device_id": "system", 00:06:52.648 "dma_device_type": 1 00:06:52.648 }, 00:06:52.648 { 00:06:52.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:52.648 "dma_device_type": 2 00:06:52.648 } 00:06:52.648 ], 00:06:52.648 "driver_specific": {} 00:06:52.648 }, 00:06:52.648 { 00:06:52.648 "name": "Passthru0", 00:06:52.648 "aliases": [ 00:06:52.648 "1cf63372-e0a0-51fa-bd22-3eba29152211" 00:06:52.648 ], 00:06:52.648 "product_name": "passthru", 00:06:52.648 "block_size": 512, 00:06:52.648 "num_blocks": 16384, 00:06:52.648 "uuid": "1cf63372-e0a0-51fa-bd22-3eba29152211", 00:06:52.648 "assigned_rate_limits": { 
00:06:52.648 "rw_ios_per_sec": 0, 00:06:52.648 "rw_mbytes_per_sec": 0, 00:06:52.648 "r_mbytes_per_sec": 0, 00:06:52.648 "w_mbytes_per_sec": 0 00:06:52.648 }, 00:06:52.648 "claimed": false, 00:06:52.648 "zoned": false, 00:06:52.648 "supported_io_types": { 00:06:52.648 "read": true, 00:06:52.648 "write": true, 00:06:52.648 "unmap": true, 00:06:52.648 "flush": true, 00:06:52.648 "reset": true, 00:06:52.648 "nvme_admin": false, 00:06:52.648 "nvme_io": false, 00:06:52.648 "nvme_io_md": false, 00:06:52.648 "write_zeroes": true, 00:06:52.648 "zcopy": true, 00:06:52.648 "get_zone_info": false, 00:06:52.648 "zone_management": false, 00:06:52.648 "zone_append": false, 00:06:52.648 "compare": false, 00:06:52.648 "compare_and_write": false, 00:06:52.648 "abort": true, 00:06:52.648 "seek_hole": false, 00:06:52.648 "seek_data": false, 00:06:52.648 "copy": true, 00:06:52.648 "nvme_iov_md": false 00:06:52.648 }, 00:06:52.648 "memory_domains": [ 00:06:52.648 { 00:06:52.648 "dma_device_id": "system", 00:06:52.648 "dma_device_type": 1 00:06:52.648 }, 00:06:52.648 { 00:06:52.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:52.648 "dma_device_type": 2 00:06:52.648 } 00:06:52.648 ], 00:06:52.648 "driver_specific": { 00:06:52.648 "passthru": { 00:06:52.648 "name": "Passthru0", 00:06:52.648 "base_bdev_name": "Malloc2" 00:06:52.648 } 00:06:52.648 } 00:06:52.648 } 00:06:52.648 ]' 00:06:52.648 04:02:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:52.648 04:02:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:52.648 04:02:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:52.648 04:02:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.648 04:02:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:52.648 04:02:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.648 04:02:01 rpc.rpc_daemon_integrity -- 
rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:52.648 04:02:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.648 04:02:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:52.648 04:02:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.648 04:02:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:52.648 04:02:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.648 04:02:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:52.648 04:02:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.648 04:02:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:52.648 04:02:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:52.648 04:02:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:52.648 00:06:52.648 real 0m0.303s 00:06:52.648 user 0m0.171s 00:06:52.648 sys 0m0.065s 00:06:52.648 04:02:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.648 04:02:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:52.648 ************************************ 00:06:52.648 END TEST rpc_daemon_integrity 00:06:52.648 ************************************ 00:06:52.907 04:02:01 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:52.907 04:02:01 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:52.907 04:02:01 rpc -- rpc/rpc.sh@84 -- # killprocess 2533639 00:06:52.907 04:02:01 rpc -- common/autotest_common.sh@948 -- # '[' -z 2533639 ']' 00:06:52.907 04:02:01 rpc -- common/autotest_common.sh@952 -- # kill -0 2533639 00:06:52.907 04:02:01 rpc -- common/autotest_common.sh@953 -- # uname 00:06:52.907 04:02:01 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:52.907 04:02:01 rpc -- common/autotest_common.sh@954 -- # ps 
--no-headers -o comm= 2533639 00:06:52.907 04:02:01 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:52.907 04:02:01 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:52.907 04:02:01 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2533639' 00:06:52.907 killing process with pid 2533639 00:06:52.907 04:02:01 rpc -- common/autotest_common.sh@967 -- # kill 2533639 00:06:52.907 04:02:01 rpc -- common/autotest_common.sh@972 -- # wait 2533639 00:06:56.197 00:06:56.197 real 0m6.446s 00:06:56.197 user 0m6.966s 00:06:56.197 sys 0m1.081s 00:06:56.197 04:02:04 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.197 04:02:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.197 ************************************ 00:06:56.197 END TEST rpc 00:06:56.197 ************************************ 00:06:56.197 04:02:04 -- common/autotest_common.sh@1142 -- # return 0 00:06:56.197 04:02:04 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:56.197 04:02:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:56.197 04:02:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.197 04:02:04 -- common/autotest_common.sh@10 -- # set +x 00:06:56.197 ************************************ 00:06:56.197 START TEST skip_rpc 00:06:56.197 ************************************ 00:06:56.197 04:02:04 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:56.197 * Looking for test storage... 
00:06:56.197 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:06:56.197 04:02:04 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:06:56.197 04:02:04 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/log.txt 00:06:56.197 04:02:04 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:56.197 04:02:04 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:56.197 04:02:04 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.197 04:02:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.457 ************************************ 00:06:56.457 START TEST skip_rpc 00:06:56.457 ************************************ 00:06:56.457 04:02:04 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:06:56.457 04:02:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2534883 00:06:56.457 04:02:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:56.457 04:02:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:56.457 04:02:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:56.457 [2024-07-23 04:02:05.114707] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:06:56.457 [2024-07-23 04:02:05.114816] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2534883 ] 00:06:56.716 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.716 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:56.716 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.716 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:56.716 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.716 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:56.716 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.716 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:56.716 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.716 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:56.716 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.716 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:56.716 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.716 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:56.716 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.716 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:56.716 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.716 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:56.716 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.716 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:56.716 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.717 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:56.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.717 EAL: Requested device 0000:3d:02.3 cannot be used 
00:06:56.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.717 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:56.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.717 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:56.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.717 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:56.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.717 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:56.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.717 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:56.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.717 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:56.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.717 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:56.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.717 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:56.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.717 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:56.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.717 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:56.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.717 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:56.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.717 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:56.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.717 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:56.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.717 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:56.717 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.717 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:56.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.717 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:56.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.717 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:56.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.717 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:56.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.717 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:56.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.717 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:56.717 [2024-07-23 04:02:05.343137] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.976 [2024-07-23 04:02:05.626668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.249 04:02:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:07:02.249 04:02:10 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:02.249 04:02:10 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:07:02.249 04:02:10 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:02.249 04:02:10 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:02.249 04:02:10 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:02.249 04:02:10 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:02.249 04:02:10 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:07:02.249 04:02:10 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.249 04:02:10 skip_rpc.skip_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:07:02.249 04:02:10 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:02.249 04:02:10 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:02.249 04:02:10 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:02.249 04:02:10 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:02.249 04:02:10 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:02.249 04:02:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:07:02.249 04:02:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2534883 00:07:02.249 04:02:10 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 2534883 ']' 00:07:02.249 04:02:10 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 2534883 00:07:02.249 04:02:10 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:07:02.249 04:02:10 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:02.249 04:02:10 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2534883 00:07:02.249 04:02:10 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:02.249 04:02:10 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:02.249 04:02:10 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2534883' 00:07:02.249 killing process with pid 2534883 00:07:02.249 04:02:10 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 2534883 00:07:02.249 04:02:10 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 2534883 00:07:04.786 00:07:04.786 real 0m8.271s 00:07:04.786 user 0m7.744s 00:07:04.786 sys 0m0.530s 00:07:04.786 04:02:13 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.786 04:02:13 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.786 
************************************ 00:07:04.786 END TEST skip_rpc 00:07:04.786 ************************************ 00:07:04.786 04:02:13 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:04.786 04:02:13 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:07:04.786 04:02:13 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:04.786 04:02:13 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.786 04:02:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.786 ************************************ 00:07:04.786 START TEST skip_rpc_with_json 00:07:04.786 ************************************ 00:07:04.786 04:02:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:07:04.786 04:02:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:07:04.786 04:02:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2536473 00:07:04.786 04:02:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:04.786 04:02:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:04.786 04:02:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2536473 00:07:04.786 04:02:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 2536473 ']' 00:07:04.786 04:02:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.786 04:02:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:04.786 04:02:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:04.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.786 04:02:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:04.786 04:02:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:04.786 [2024-07-23 04:02:13.473552] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:04.786 [2024-07-23 04:02:13.473672] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2536473 ] 00:07:05.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.078 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:05.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.078 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:05.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.078 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:05.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.078 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:05.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.078 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:05.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.078 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:05.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.078 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:05.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.078 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:05.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.078 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:05.078 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.078 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:05.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.078 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:05.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.078 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:05.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.078 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:05.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.078 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:05.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.078 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:05.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.078 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:05.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.078 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:05.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.078 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:05.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.078 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:05.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.078 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:05.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.078 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:05.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.078 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:05.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.078 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:05.078 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.078 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:05.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.078 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:05.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.078 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:05.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.078 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:05.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.078 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:05.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.078 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:05.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.078 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:05.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.078 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:05.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.078 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:05.078 [2024-07-23 04:02:13.700759] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.338 [2024-07-23 04:02:13.980485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.717 04:02:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:06.717 04:02:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:07:06.717 04:02:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:07:06.717 04:02:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.717 04:02:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:06.717 
00:07:06.717 [2024-07-23 04:02:15.220266] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist
request:
{
  "trtype": "tcp",
  "method": "nvmf_get_transports",
  "req_id": 1
}
Got JSON-RPC error response
response:
{
  "code": -19,
  "message": "No such device"
}
00:07:06.717 04:02:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:07:06.717 04:02:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp
00:07:06.717 04:02:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:06.717 04:02:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:07:06.717 [2024-07-23 04:02:15.232409] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:06.717 04:02:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:06.717 04:02:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config
00:07:06.717 04:02:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:06.717 04:02:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:07:06.717 04:02:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:06.717 04:02:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json
{
  "subsystems": [
    { "subsystem": "keyring", "config": [] },
    { "subsystem": "iobuf", "config": [
      { "method": "iobuf_set_options", "params": { "small_pool_count": 8192, "large_pool_count": 1024, "small_bufsize": 8192, "large_bufsize": 135168 } }
    ] },
    { "subsystem": "sock", "config": [
      { "method": "sock_set_default_impl", "params": { "impl_name": "posix" } },
      { "method": "sock_impl_set_options", "params": { "impl_name": "ssl", "recv_buf_size": 4096, "send_buf_size": 4096, "enable_recv_pipe": true, "enable_quickack": false, "enable_placement_id": 0, "enable_zerocopy_send_server": true, "enable_zerocopy_send_client": false, "zerocopy_threshold": 0, "tls_version": 0, "enable_ktls": false } },
      { "method": "sock_impl_set_options", "params": { "impl_name": "posix", "recv_buf_size": 2097152, "send_buf_size": 2097152, "enable_recv_pipe": true, "enable_quickack": false, "enable_placement_id": 0, "enable_zerocopy_send_server": true, "enable_zerocopy_send_client": false, "zerocopy_threshold": 0, "tls_version": 0, "enable_ktls": false } }
    ] },
    { "subsystem": "vmd", "config": [] },
    { "subsystem": "accel", "config": [
      { "method": "accel_set_options", "params": { "small_cache_size": 128, "large_cache_size": 16, "task_count": 2048, "sequence_count": 2048, "buf_count": 2048 } }
    ] },
    { "subsystem": "bdev", "config": [
      { "method": "bdev_set_options", "params": { "bdev_io_pool_size": 65535, "bdev_io_cache_size": 256, "bdev_auto_examine": true, "iobuf_small_cache_size": 128, "iobuf_large_cache_size": 16 } },
      { "method": "bdev_raid_set_options", "params": { "process_window_size_kb": 1024, "process_max_bandwidth_mb_sec": 0 } },
      { "method": "bdev_iscsi_set_options", "params": { "timeout_sec": 30 } },
      { "method": "bdev_nvme_set_options", "params": { "action_on_timeout": "none", "timeout_us": 0, "timeout_admin_us": 0, "keep_alive_timeout_ms": 10000, "arbitration_burst": 0, "low_priority_weight": 0, "medium_priority_weight": 0, "high_priority_weight": 0, "nvme_adminq_poll_period_us": 10000, "nvme_ioq_poll_period_us": 0, "io_queue_requests": 0, "delay_cmd_submit": true, "transport_retry_count": 4, "bdev_retry_count": 3, "transport_ack_timeout": 0, "ctrlr_loss_timeout_sec": 0, "reconnect_delay_sec": 0, "fast_io_fail_timeout_sec": 0, "disable_auto_failback": false, "generate_uuids": false, "transport_tos": 0, "nvme_error_stat": false, "rdma_srq_size": 0, "io_path_stat": false, "allow_accel_sequence": false, "rdma_max_cq_size": 0, "rdma_cm_event_timeout_ms": 0, "dhchap_digests": [ "sha256", "sha384", "sha512" ], "dhchap_dhgroups": [ "null", "ffdhe2048", "ffdhe3072", "ffdhe4096", "ffdhe6144", "ffdhe8192" ] } },
      { "method": "bdev_nvme_set_hotplug", "params": { "period_us": 100000, "enable": false } },
      { "method": "bdev_wait_for_examine" }
    ] },
    { "subsystem": "scsi", "config": null },
    { "subsystem": "scheduler", "config": [
      { "method": "framework_set_scheduler", "params": { "name": "static" } }
    ] },
    { "subsystem": "vhost_scsi", "config": [] },
    { "subsystem": "vhost_blk", "config": [] },
    { "subsystem": "ublk", "config": [] },
    { "subsystem": "nbd", "config": [] },
    { "subsystem": "nvmf", "config": [
      { "method": "nvmf_set_config", "params": { "discovery_filter": "match_any", "admin_cmd_passthru": { "identify_ctrlr": false } } },
      { "method": "nvmf_set_max_subsystems", "params": { "max_subsystems": 1024 } },
      { "method": "nvmf_set_crdt", "params": { "crdt1": 0, "crdt2": 0, "crdt3": 0 } },
      { "method": "nvmf_create_transport", "params": { "trtype": "TCP", "max_queue_depth": 128, "max_io_qpairs_per_ctrlr": 127, "in_capsule_data_size": 4096, "max_io_size": 131072, "io_unit_size": 131072, "max_aq_depth": 128, "num_shared_buffers": 511, "buf_cache_size": 4294967295, "dif_insert_or_strip": false, "zcopy": false, "c2h_success": true, "sock_priority": 0, "abort_timeout_sec": 1, "ack_timeout": 0, "data_wr_pool_size": 0 } }
    ] },
    { "subsystem": "iscsi", "config": [
      { "method": "iscsi_set_options", "params": { "node_base": "iqn.2016-06.io.spdk", "max_sessions": 128, "max_connections_per_session": 2, "max_queue_depth": 64, "default_time2wait": 2, "default_time2retain": 20, "first_burst_length": 8192, "immediate_data": true, "allow_duplicated_isid": false, "error_recovery_level": 0, "nop_timeout": 60, "nop_in_interval": 30, "disable_chap": false, "require_chap": false, "mutual_chap": false, "chap_group": 0, "max_large_datain_per_connection": 64, "max_r2t_per_connection": 4, "pdu_pool_size": 36864, "immediate_data_pool_size": 16384, "data_out_pool_size": 2048 } }
    ] }
  ]
}
00:07:06.719 04:02:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:07:06.719 04:02:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2536473
00:07:06.719 04:02:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948
-- # '[' -z 2536473 ']'
00:07:06.719 04:02:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2536473
00:07:06.719 04:02:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname
00:07:06.719 04:02:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:07:06.719 04:02:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2536473
00:07:06.719 04:02:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:07:06.719 04:02:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:07:06.719 04:02:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2536473'
killing process with pid 2536473
04:02:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2536473
04:02:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2536473
00:07:10.009 04:02:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2537288
00:07:10.009 04:02:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5
00:07:10.009 04:02:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json
00:07:15.285 04:02:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2537288
00:07:15.285 04:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2537288 ']'
00:07:15.285 04:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2537288
00:07:15.285 04:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname
00:07:15.285 04:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:07:15.285 04:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2537288
00:07:15.285 04:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:07:15.285 04:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:07:15.285 04:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2537288'
killing process with pid 2537288
04:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2537288
04:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2537288
00:07:18.576 04:02:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/log.txt
00:07:18.576 04:02:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/log.txt
00:07:18.576
00:07:18.576 real 0m13.866s
00:07:18.576 user 0m13.048s
00:07:18.576 sys 0m1.311s
00:07:18.576 04:02:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:18.576 04:02:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:07:18.576 ************************************
00:07:18.576 END TEST skip_rpc_with_json
00:07:18.576 ************************************
00:07:18.576 04:02:27 skip_rpc -- common/autotest_common.sh@1142 -- # return 0
00:07:18.576 04:02:27 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay
00:07:18.576 04:02:27 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:07:18.576 04:02:27 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:18.576 04:02:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:18.576 ************************************
00:07:18.576 START TEST skip_rpc_with_delay
00:07:18.576 ************************************
00:07:18.576 04:02:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay
00:07:18.576 04:02:27 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:07:18.576 04:02:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0
00:07:18.576 04:02:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:07:18.576 04:02:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt
00:07:18.576 04:02:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:07:18.576 04:02:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt
00:07:18.576 04:02:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:07:18.576 04:02:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt
00:07:18.576 04:02:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:07:18.576 04:02:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt
00:07:18.576 04:02:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt ]]
00:07:18.576 04:02:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:07:18.836 [2024-07-23 04:02:27.420962] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
00:07:18.836 [2024-07-23 04:02:27.421074] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2
00:07:18.836 04:02:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1
00:07:18.836 04:02:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:07:18.836 04:02:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:07:18.836 04:02:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:07:18.836
00:07:18.836 real 0m0.194s
00:07:18.836 user 0m0.095s
00:07:18.836 sys 0m0.098s
00:07:18.836 04:02:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:18.836 04:02:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x
00:07:18.836 ************************************
00:07:18.836 END TEST skip_rpc_with_delay
00:07:18.836 ************************************
00:07:18.836 04:02:27 skip_rpc -- common/autotest_common.sh@1142 -- # return 0
00:07:18.836 04:02:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname
00:07:18.836 04:02:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']'
00:07:18.836 04:02:27 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init
00:07:18.836 04:02:27 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:07:18.836 04:02:27 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:18.836 04:02:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:18.836 ************************************
00:07:18.836 START TEST exit_on_failed_rpc_init
00:07:18.836 ************************************
00:07:18.836 04:02:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init
00:07:18.836 04:02:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2538920
00:07:18.836 04:02:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2538920
00:07:18.836 04:02:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:07:18.836 04:02:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 2538920 ']'
00:07:18.836 04:02:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:18.836 04:02:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:18.836 04:02:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:18.836 04:02:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable
00:07:18.836 04:02:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:07:19.096 [2024-07-23 04:02:27.702771] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
00:07:19.096 [2024-07-23 04:02:27.702891] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2538920 ]
00:07:19.355 [2024-07-23 04:02:27.930194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:19.614 [2024-07-23 04:02:28.214337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:20.994 04:02:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:20.994 04:02:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0
00:07:20.994 04:02:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:07:20.994 04:02:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:07:20.994 04:02:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0
00:07:20.994 04:02:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:07:20.994 04:02:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt
00:07:20.994 04:02:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:07:20.994 04:02:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt
00:07:20.994 04:02:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:07:20.994 04:02:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt
00:07:20.994 04:02:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:07:20.994 04:02:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt
00:07:20.994 04:02:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt ]]
00:07:20.994 04:02:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:07:20.994 [2024-07-23 04:02:29.517745] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
00:07:20.994 [2024-07-23 04:02:29.517861] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2539189 ]
00:07:20.994 [2024-07-23 04:02:29.730665] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:21.253 [2024-07-23 04:02:30.007315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:07:21.253 [2024-07-23 04:02:30.007430] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:07:21.253 [2024-07-23 04:02:30.007452] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:07:21.253 [2024-07-23 04:02:30.007473] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:07:21.821 04:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234
00:07:21.821 04:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:07:21.821 04:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106
00:07:21.821 04:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in
00:07:21.821 04:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1
00:07:21.821 04:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:07:21.821 04:02:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:07:21.821 04:02:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2538920
00:07:21.821 04:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 2538920 ']'
00:07:21.821 04:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 2538920
00:07:21.821 04:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname
00:07:21.821 04:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:07:21.821 04:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2538920
00:07:22.079 04:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:07:22.079 04:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:07:22.079 04:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2538920'
00:07:22.079 killing process with pid 2538920
04:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 2538920
04:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 2538920
00:07:25.366
00:07:25.366 real 0m6.330s
00:07:25.366 user 0m7.060s
00:07:25.366 sys 0m0.902s
00:07:25.366 04:02:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:25.366 04:02:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:07:25.366 ************************************
00:07:25.366 END TEST exit_on_failed_rpc_init
00:07:25.366 ************************************
00:07:25.366 04:02:33 skip_rpc -- common/autotest_common.sh@1142 -- # return 0
00:07:25.366 04:02:33 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json
00:07:25.366
00:07:25.366 real 0m29.091s
00:07:25.366 user 0m28.087s
00:07:25.366 sys 0m3.166s
00:07:25.366 04:02:33 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:25.366 04:02:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:25.366 ************************************
00:07:25.366 END TEST skip_rpc
00:07:25.366 ************************************
00:07:25.366 04:02:33 -- common/autotest_common.sh@1142 -- # return 0
00:07:25.366 04:02:33 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:07:25.366 04:02:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:07:25.366 04:02:33 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:25.366 04:02:33 -- common/autotest_common.sh@10 -- # set +x
00:07:25.366 ************************************
00:07:25.366 START TEST rpc_client
00:07:25.366 ************************************
00:07:25.366 04:02:34 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:07:25.366 * Looking for test storage...
00:07:25.366 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client
00:07:25.366 04:02:34 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client_test
00:07:25.624 OK
00:07:25.624 04:02:34 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:07:25.624
00:07:25.624 real 0m0.178s
00:07:25.624 user 0m0.066s
00:07:25.624 sys 0m0.123s
00:07:25.624 04:02:34 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:25.624 04:02:34 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:07:25.624 ************************************
00:07:25.624 END TEST rpc_client
00:07:25.624 ************************************
00:07:25.624 04:02:34 -- common/autotest_common.sh@1142 -- # return 0
00:07:25.624 04:02:34 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config.sh
00:07:25.624 04:02:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:07:25.624 04:02:34 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:25.624 04:02:34 -- common/autotest_common.sh@10 -- # set +x
00:07:25.624 ************************************
00:07:25.624 START TEST json_config
00:07:25.624 ************************************
00:07:25.625 04:02:34 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config.sh
00:07:25.625 04:02:34 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh
00:07:25.625 04:02:34 json_config -- nvmf/common.sh@7 -- # uname -s
00:07:25.625 04:02:34 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:25.625 04:02:34 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:25.625 04:02:34
json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:25.625 04:02:34 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:25.625 04:02:34 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:25.625 04:02:34 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:25.625 04:02:34 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:25.625 04:02:34 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:25.625 04:02:34 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:25.625 04:02:34 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:25.625 04:02:34 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:07:25.625 04:02:34 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:07:25.625 04:02:34 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:25.625 04:02:34 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:25.625 04:02:34 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:25.625 04:02:34 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:25.625 04:02:34 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:07:25.625 04:02:34 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:25.625 04:02:34 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:25.625 04:02:34 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:25.625 04:02:34 json_config -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.625 04:02:34 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.625 04:02:34 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.625 04:02:34 json_config -- paths/export.sh@5 -- # export PATH 00:07:25.625 04:02:34 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.625 04:02:34 json_config -- nvmf/common.sh@47 -- # : 0 00:07:25.625 04:02:34 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:25.625 
04:02:34 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:25.625 04:02:34 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:25.625 04:02:34 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:25.625 04:02:34 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:25.625 04:02:34 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:25.625 04:02:34 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:25.625 04:02:34 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:25.625 04:02:34 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/common.sh 00:07:25.625 04:02:34 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:25.625 04:02:34 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:25.625 04:02:34 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:25.625 04:02:34 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:25.625 04:02:34 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:07:25.625 04:02:34 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:07:25.625 04:02:34 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:07:25.625 04:02:34 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:07:25.625 04:02:34 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:07:25.625 04:02:34 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:07:25.625 04:02:34 json_config -- json_config/json_config.sh@34 -- # 
configs_path=(['target']='/var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_initiator_config.json') 00:07:25.625 04:02:34 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:07:25.625 04:02:34 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:07:25.625 04:02:34 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:25.625 04:02:34 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:07:25.625 INFO: JSON configuration test init 00:07:25.625 04:02:34 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:07:25.625 04:02:34 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:07:25.625 04:02:34 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:25.625 04:02:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:25.884 04:02:34 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:07:25.884 04:02:34 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:25.884 04:02:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:25.884 04:02:34 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:07:25.884 04:02:34 json_config -- json_config/common.sh@9 -- # local app=target 00:07:25.884 04:02:34 json_config -- json_config/common.sh@10 -- # shift 00:07:25.884 04:02:34 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:25.884 04:02:34 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:25.884 04:02:34 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:25.884 04:02:34 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:25.884 04:02:34 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 
00:07:25.884 04:02:34 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2540114 00:07:25.884 04:02:34 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:25.884 Waiting for target to run... 00:07:25.884 04:02:34 json_config -- json_config/common.sh@25 -- # waitforlisten 2540114 /var/tmp/spdk_tgt.sock 00:07:25.884 04:02:34 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:07:25.884 04:02:34 json_config -- common/autotest_common.sh@829 -- # '[' -z 2540114 ']' 00:07:25.884 04:02:34 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:25.884 04:02:34 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:25.884 04:02:34 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:25.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:25.884 04:02:34 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:25.884 04:02:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:25.884 [2024-07-23 04:02:34.539976] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:07:25.884 [2024-07-23 04:02:34.540101] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2540114 ] 00:07:26.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.451 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:26.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.451 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:26.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.451 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:26.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.451 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:26.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.451 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:26.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.451 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:26.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.452 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:26.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.452 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:26.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.452 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:26.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.452 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:26.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.452 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:26.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.452 EAL: Requested device 0000:3d:02.3 cannot be used 
00:07:26.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.452 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:26.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.452 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:26.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.452 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:26.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.452 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:26.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.452 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:26.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.452 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:26.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.452 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:26.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.452 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:26.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.452 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:26.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.452 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:26.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.452 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:26.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.452 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:26.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.452 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:26.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.452 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:26.452 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.452 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:26.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.452 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:26.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.452 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:26.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.452 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:26.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.452 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:26.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.452 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:26.452 [2024-07-23 04:02:35.157014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.710 [2024-07-23 04:02:35.425781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.645 04:02:36 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:27.645 04:02:36 json_config -- common/autotest_common.sh@862 -- # return 0 00:07:27.645 04:02:36 json_config -- json_config/common.sh@26 -- # echo '' 00:07:27.645 00:07:27.645 04:02:36 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:07:27.645 04:02:36 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:07:27.645 04:02:36 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:27.645 04:02:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:27.645 04:02:36 json_config -- json_config/json_config.sh@99 -- # [[ 1 -eq 1 ]] 00:07:27.645 04:02:36 json_config -- json_config/json_config.sh@100 -- # tgt_rpc dpdk_cryptodev_scan_accel_module 00:07:27.645 04:02:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock dpdk_cryptodev_scan_accel_module 00:07:27.645 04:02:36 json_config -- json_config/json_config.sh@101 -- # tgt_rpc accel_assign_opc -o encrypt -m dpdk_cryptodev 00:07:27.645 04:02:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock accel_assign_opc -o encrypt -m dpdk_cryptodev 00:07:27.904 [2024-07-23 04:02:36.533420] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:07:27.904 04:02:36 json_config -- json_config/json_config.sh@102 -- # tgt_rpc accel_assign_opc -o decrypt -m dpdk_cryptodev 00:07:27.904 04:02:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock accel_assign_opc -o decrypt -m dpdk_cryptodev 00:07:28.162 [2024-07-23 04:02:36.762044] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:07:28.162 04:02:36 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:07:28.162 04:02:36 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:28.162 04:02:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:28.162 04:02:36 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:07:28.162 04:02:36 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:07:28.162 04:02:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:07:28.730 [2024-07-23 04:02:37.356058] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:07:35.366 04:02:43 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:07:35.367 04:02:43 json_config -- 
json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:07:35.367 04:02:43 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:35.367 04:02:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:35.367 04:02:43 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:07:35.367 04:02:43 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:07:35.367 04:02:43 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:07:35.367 04:02:43 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:07:35.367 04:02:43 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:07:35.367 04:02:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:07:35.367 04:02:44 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:07:35.367 04:02:44 json_config -- json_config/json_config.sh@48 -- # local get_types 00:07:35.367 04:02:44 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:07:35.367 04:02:44 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:07:35.367 04:02:44 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:07:35.367 04:02:44 json_config -- json_config/json_config.sh@51 -- # sort 00:07:35.367 04:02:44 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:07:35.367 04:02:44 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:07:35.367 04:02:44 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:07:35.367 04:02:44 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:07:35.367 04:02:44 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:35.367 04:02:44 json_config -- common/autotest_common.sh@10 
-- # set +x 00:07:35.367 04:02:44 json_config -- json_config/json_config.sh@59 -- # return 0 00:07:35.367 04:02:44 json_config -- json_config/json_config.sh@282 -- # [[ 1 -eq 1 ]] 00:07:35.367 04:02:44 json_config -- json_config/json_config.sh@283 -- # create_bdev_subsystem_config 00:07:35.367 04:02:44 json_config -- json_config/json_config.sh@109 -- # timing_enter create_bdev_subsystem_config 00:07:35.367 04:02:44 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:35.367 04:02:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:35.367 04:02:44 json_config -- json_config/json_config.sh@111 -- # expected_notifications=() 00:07:35.367 04:02:44 json_config -- json_config/json_config.sh@111 -- # local expected_notifications 00:07:35.367 04:02:44 json_config -- json_config/json_config.sh@115 -- # expected_notifications+=($(get_notifications)) 00:07:35.367 04:02:44 json_config -- json_config/json_config.sh@115 -- # get_notifications 00:07:35.367 04:02:44 json_config -- json_config/json_config.sh@63 -- # local ev_type ev_ctx event_id 00:07:35.367 04:02:44 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:35.367 04:02:44 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:35.367 04:02:44 json_config -- json_config/json_config.sh@62 -- # tgt_rpc notify_get_notifications -i 0 00:07:35.367 04:02:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:07:35.367 04:02:44 json_config -- json_config/json_config.sh@62 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:07:35.626 04:02:44 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1 00:07:35.626 04:02:44 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:35.626 04:02:44 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:35.626 04:02:44 json_config -- 
json_config/json_config.sh@117 -- # [[ 1 -eq 1 ]] 00:07:35.626 04:02:44 json_config -- json_config/json_config.sh@118 -- # local lvol_store_base_bdev=Nvme0n1 00:07:35.626 04:02:44 json_config -- json_config/json_config.sh@120 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:07:35.626 04:02:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:07:35.884 Nvme0n1p0 Nvme0n1p1 00:07:35.885 04:02:44 json_config -- json_config/json_config.sh@121 -- # tgt_rpc bdev_split_create Malloc0 3 00:07:35.885 04:02:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:07:36.143 [2024-07-23 04:02:44.739723] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:07:36.143 [2024-07-23 04:02:44.739791] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:07:36.143 00:07:36.143 04:02:44 json_config -- json_config/json_config.sh@122 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:07:36.143 04:02:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:07:36.402 Malloc3 00:07:36.402 04:02:44 json_config -- json_config/json_config.sh@123 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:07:36.402 04:02:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:07:36.402 [2024-07-23 04:02:45.184057] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:36.402 [2024-07-23 04:02:45.184120] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:36.402 [2024-07-23 
04:02:45.184162] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480 00:07:36.402 [2024-07-23 04:02:45.184182] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:36.660 [2024-07-23 04:02:45.186995] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:36.660 [2024-07-23 04:02:45.187031] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:07:36.660 PTBdevFromMalloc3 00:07:36.660 04:02:45 json_config -- json_config/json_config.sh@125 -- # tgt_rpc bdev_null_create Null0 32 512 00:07:36.661 04:02:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:07:36.661 Null0 00:07:36.661 04:02:45 json_config -- json_config/json_config.sh@127 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:07:36.661 04:02:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:07:36.919 Malloc0 00:07:37.177 04:02:45 json_config -- json_config/json_config.sh@128 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:07:37.177 04:02:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:07:37.177 Malloc1 00:07:37.436 04:02:45 json_config -- json_config/json_config.sh@141 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:07:37.436 04:02:45 json_config -- json_config/json_config.sh@144 -- # dd if=/dev/zero of=/sample_aio bs=1024 
count=102400 00:07:37.694 102400+0 records in 00:07:37.694 102400+0 records out 00:07:37.694 104857600 bytes (105 MB, 100 MiB) copied, 0.282241 s, 372 MB/s 00:07:37.694 04:02:46 json_config -- json_config/json_config.sh@145 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:07:37.694 04:02:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:07:37.694 aio_disk 00:07:37.953 04:02:46 json_config -- json_config/json_config.sh@146 -- # expected_notifications+=(bdev_register:aio_disk) 00:07:37.953 04:02:46 json_config -- json_config/json_config.sh@151 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:07:37.953 04:02:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:07:42.143 baea5d7a-a464-4f71-814d-7654f1f16574 00:07:42.143 04:02:50 json_config -- json_config/json_config.sh@158 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:07:42.143 04:02:50 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:07:42.143 04:02:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:07:42.143 04:02:50 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:07:42.143 04:02:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create 
-l lvs_test -t lvol1 32 00:07:42.402 04:02:51 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:07:42.402 04:02:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:07:42.660 04:02:51 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:07:42.660 04:02:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:07:42.919 04:02:51 json_config -- json_config/json_config.sh@161 -- # [[ 1 -eq 1 ]] 00:07:42.919 04:02:51 json_config -- json_config/json_config.sh@162 -- # tgt_rpc bdev_malloc_create 8 1024 --name MallocForCryptoBdev 00:07:42.919 04:02:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 1024 --name MallocForCryptoBdev 00:07:43.178 MallocForCryptoBdev 00:07:43.178 04:02:51 json_config -- json_config/json_config.sh@163 -- # lspci -d:37c8 00:07:43.178 04:02:51 json_config -- json_config/json_config.sh@163 -- # wc -l 00:07:43.178 04:02:51 json_config -- json_config/json_config.sh@163 -- # [[ 5 -eq 0 ]] 00:07:43.178 04:02:51 json_config -- json_config/json_config.sh@166 -- # local crypto_driver=crypto_qat 00:07:43.178 04:02:51 json_config -- json_config/json_config.sh@169 -- # tgt_rpc bdev_crypto_create MallocForCryptoBdev CryptoMallocBdev -p crypto_qat -k 01234567891234560123456789123456 00:07:43.178 04:02:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_crypto_create MallocForCryptoBdev CryptoMallocBdev -p crypto_qat -k 01234567891234560123456789123456 00:07:43.436 [2024-07-23 04:02:52.009514] 
vbdev_crypto_rpc.c: 136:rpc_bdev_crypto_create: *WARNING*: "crypto_pmd" parameters is obsolete and ignored 00:07:43.436 CryptoMallocBdev 00:07:43.436 04:02:52 json_config -- json_config/json_config.sh@173 -- # expected_notifications+=(bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev) 00:07:43.436 04:02:52 json_config -- json_config/json_config.sh@176 -- # [[ 0 -eq 1 ]] 00:07:43.436 04:02:52 json_config -- json_config/json_config.sh@182 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:a5322c21-8e27-4239-94db-1dee920846b3 bdev_register:45e2ca46-4678-459c-ad72-93a8eacff8a1 bdev_register:b4c3feb4-77a6-4658-bd18-01f0df01f8ad bdev_register:8088b137-d425-4ef1-a298-3165937097f8 bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev 00:07:43.436 04:02:52 json_config -- json_config/json_config.sh@71 -- # local events_to_check 00:07:43.436 04:02:52 json_config -- json_config/json_config.sh@72 -- # local recorded_events 00:07:43.436 04:02:52 json_config -- json_config/json_config.sh@75 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:07:43.437 04:02:52 json_config -- json_config/json_config.sh@75 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:a5322c21-8e27-4239-94db-1dee920846b3 bdev_register:45e2ca46-4678-459c-ad72-93a8eacff8a1 bdev_register:b4c3feb4-77a6-4658-bd18-01f0df01f8ad bdev_register:8088b137-d425-4ef1-a298-3165937097f8 bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev 00:07:43.437 
04:02:52 json_config -- json_config/json_config.sh@75 -- # sort 00:07:43.437 04:02:52 json_config -- json_config/json_config.sh@76 -- # recorded_events=($(get_notifications | sort)) 00:07:43.437 04:02:52 json_config -- json_config/json_config.sh@76 -- # get_notifications 00:07:43.437 04:02:52 json_config -- json_config/json_config.sh@63 -- # local ev_type ev_ctx event_id 00:07:43.437 04:02:52 json_config -- json_config/json_config.sh@76 -- # sort 00:07:43.437 04:02:52 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:43.437 04:02:52 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:43.437 04:02:52 json_config -- json_config/json_config.sh@62 -- # tgt_rpc notify_get_notifications -i 0 00:07:43.437 04:02:52 json_config -- json_config/json_config.sh@62 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:07:43.437 04:02:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:07:43.696 04:02:52 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1 00:07:43.696 04:02:52 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:43.696 04:02:52 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:43.696 04:02:52 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1p1 00:07:43.696 04:02:52 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:43.696 04:02:52 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:43.696 04:02:52 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1p0 00:07:43.696 04:02:52 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:43.696 04:02:52 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:43.696 04:02:52 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc3 00:07:43.696 
04:02:52 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:43.696 04:02:52 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:43.696 04:02:52 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:PTBdevFromMalloc3 00:07:43.696 04:02:52 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:43.696 04:02:52 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:43.696 04:02:52 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Null0 00:07:43.696 04:02:52 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:43.696 04:02:52 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:43.696 04:02:52 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0 00:07:43.696 04:02:52 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:43.696 04:02:52 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:43.696 04:02:52 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0p2 00:07:43.696 04:02:52 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:43.696 04:02:52 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:43.696 04:02:52 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0p1 00:07:43.696 04:02:52 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:43.696 04:02:52 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:43.696 04:02:52 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0p0 00:07:43.696 04:02:52 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:43.696 04:02:52 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:43.696 04:02:52 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc1 00:07:43.696 04:02:52 
json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:43.696 04:02:52 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:43.696 04:02:52 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:aio_disk 00:07:43.696 04:02:52 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:43.696 04:02:52 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:43.696 04:02:52 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:a5322c21-8e27-4239-94db-1dee920846b3 00:07:43.696 04:02:52 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:43.696 04:02:52 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:43.696 04:02:52 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:45e2ca46-4678-459c-ad72-93a8eacff8a1 00:07:43.696 04:02:52 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:43.696 04:02:52 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:43.696 04:02:52 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:b4c3feb4-77a6-4658-bd18-01f0df01f8ad 00:07:43.696 04:02:52 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:43.696 04:02:52 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:43.696 04:02:52 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:8088b137-d425-4ef1-a298-3165937097f8 00:07:43.696 04:02:52 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:43.696 04:02:52 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:43.696 04:02:52 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:MallocForCryptoBdev 00:07:43.696 04:02:52 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:43.696 04:02:52 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:43.696 04:02:52 
json_config -- json_config/json_config.sh@66 -- # echo bdev_register:CryptoMallocBdev 00:07:43.696 04:02:52 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:43.696 04:02:52 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:43.696 04:02:52 json_config -- json_config/json_config.sh@78 -- # [[ bdev_register:45e2ca46-4678-459c-ad72-93a8eacff8a1 bdev_register:8088b137-d425-4ef1-a298-3165937097f8 bdev_register:a5322c21-8e27-4239-94db-1dee920846b3 bdev_register:aio_disk bdev_register:b4c3feb4-77a6-4658-bd18-01f0df01f8ad bdev_register:CryptoMallocBdev bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:MallocForCryptoBdev bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\4\5\e\2\c\a\4\6\-\4\6\7\8\-\4\5\9\c\-\a\d\7\2\-\9\3\a\8\e\a\c\f\f\8\a\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\8\0\8\8\b\1\3\7\-\d\4\2\5\-\4\e\f\1\-\a\2\9\8\-\3\1\6\5\9\3\7\0\9\7\f\8\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\5\3\2\2\c\2\1\-\8\e\2\7\-\4\2\3\9\-\9\4\d\b\-\1\d\e\e\9\2\0\8\4\6\b\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\b\4\c\3\f\e\b\4\-\7\7\a\6\-\4\6\5\8\-\b\d\1\8\-\0\1\f\0\d\f\0\1\f\8\a\d\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\C\r\y\p\t\o\M\a\l\l\o\c\B\d\e\v\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\F\o\r\C\r\y\p\t\o\B\d\e\v\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ 
\b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3 ]] 00:07:43.696 04:02:52 json_config -- json_config/json_config.sh@90 -- # cat 00:07:43.697 04:02:52 json_config -- json_config/json_config.sh@90 -- # printf ' %s\n' bdev_register:45e2ca46-4678-459c-ad72-93a8eacff8a1 bdev_register:8088b137-d425-4ef1-a298-3165937097f8 bdev_register:a5322c21-8e27-4239-94db-1dee920846b3 bdev_register:aio_disk bdev_register:b4c3feb4-77a6-4658-bd18-01f0df01f8ad bdev_register:CryptoMallocBdev bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:MallocForCryptoBdev bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 00:07:43.697 Expected events matched: 00:07:43.697 bdev_register:45e2ca46-4678-459c-ad72-93a8eacff8a1 00:07:43.697 bdev_register:8088b137-d425-4ef1-a298-3165937097f8 00:07:43.697 bdev_register:a5322c21-8e27-4239-94db-1dee920846b3 00:07:43.697 bdev_register:aio_disk 00:07:43.697 bdev_register:b4c3feb4-77a6-4658-bd18-01f0df01f8ad 00:07:43.697 bdev_register:CryptoMallocBdev 00:07:43.697 bdev_register:Malloc0 00:07:43.697 bdev_register:Malloc0p0 00:07:43.697 bdev_register:Malloc0p1 00:07:43.697 bdev_register:Malloc0p2 00:07:43.697 bdev_register:Malloc1 00:07:43.697 bdev_register:Malloc3 00:07:43.697 bdev_register:MallocForCryptoBdev 00:07:43.697 bdev_register:Null0 00:07:43.697 bdev_register:Nvme0n1 00:07:43.697 bdev_register:Nvme0n1p0 00:07:43.697 bdev_register:Nvme0n1p1 00:07:43.697 bdev_register:PTBdevFromMalloc3 00:07:43.697 04:02:52 json_config -- json_config/json_config.sh@184 -- # timing_exit create_bdev_subsystem_config 00:07:43.697 04:02:52 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:43.697 04:02:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:43.697 04:02:52 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:07:43.697 04:02:52 
json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:07:43.697 04:02:52 json_config -- json_config/json_config.sh@294 -- # [[ 0 -eq 1 ]] 00:07:43.697 04:02:52 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:07:43.697 04:02:52 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:43.697 04:02:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:43.697 04:02:52 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:07:43.697 04:02:52 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:43.697 04:02:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:43.956 MallocBdevForConfigChangeCheck 00:07:43.956 04:02:52 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:07:43.956 04:02:52 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:43.956 04:02:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:43.956 04:02:52 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:07:43.956 04:02:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:44.523 04:02:53 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:07:44.523 INFO: shutting down applications... 
00:07:44.523 04:02:53 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:07:44.523 04:02:53 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:07:44.523 04:02:53 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:07:44.523 04:02:53 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:07:44.523 [2024-07-23 04:02:53.213871] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:07:47.806 Calling clear_iscsi_subsystem 00:07:47.806 Calling clear_nvmf_subsystem 00:07:47.806 Calling clear_nbd_subsystem 00:07:47.806 Calling clear_ublk_subsystem 00:07:47.806 Calling clear_vhost_blk_subsystem 00:07:47.806 Calling clear_vhost_scsi_subsystem 00:07:47.806 Calling clear_bdev_subsystem 00:07:47.806 04:02:55 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py 00:07:47.806 04:02:55 json_config -- json_config/json_config.sh@347 -- # count=100 00:07:47.806 04:02:55 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:07:47.806 04:02:55 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:47.806 04:02:55 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:07:47.806 04:02:55 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:07:47.806 04:02:56 json_config -- json_config/json_config.sh@349 -- # break 00:07:47.806 04:02:56 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:07:47.806 04:02:56 
json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:07:47.806 04:02:56 json_config -- json_config/common.sh@31 -- # local app=target 00:07:47.806 04:02:56 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:47.806 04:02:56 json_config -- json_config/common.sh@35 -- # [[ -n 2540114 ]] 00:07:47.806 04:02:56 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2540114 00:07:47.806 04:02:56 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:47.806 04:02:56 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:47.806 04:02:56 json_config -- json_config/common.sh@41 -- # kill -0 2540114 00:07:47.806 04:02:56 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:07:48.065 04:02:56 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:07:48.065 04:02:56 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:48.065 04:02:56 json_config -- json_config/common.sh@41 -- # kill -0 2540114 00:07:48.065 04:02:56 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:07:48.632 04:02:57 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:07:48.632 04:02:57 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:48.632 04:02:57 json_config -- json_config/common.sh@41 -- # kill -0 2540114 00:07:48.632 04:02:57 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:07:49.199 04:02:57 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:07:49.199 04:02:57 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:49.199 04:02:57 json_config -- json_config/common.sh@41 -- # kill -0 2540114 00:07:49.199 04:02:57 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:07:49.768 04:02:58 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:07:49.768 04:02:58 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:49.768 04:02:58 json_config -- json_config/common.sh@41 -- # kill -0 2540114 00:07:49.768 04:02:58 json_config -- json_config/common.sh@45 -- # 
sleep 0.5 00:07:50.336 04:02:58 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:07:50.336 04:02:58 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:50.336 04:02:58 json_config -- json_config/common.sh@41 -- # kill -0 2540114 00:07:50.336 04:02:58 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:50.336 04:02:58 json_config -- json_config/common.sh@43 -- # break 00:07:50.336 04:02:58 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:50.336 04:02:58 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:50.336 SPDK target shutdown done 00:07:50.336 04:02:58 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:07:50.336 INFO: relaunching applications... 00:07:50.336 04:02:58 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:07:50.336 04:02:58 json_config -- json_config/common.sh@9 -- # local app=target 00:07:50.336 04:02:58 json_config -- json_config/common.sh@10 -- # shift 00:07:50.336 04:02:58 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:50.336 04:02:58 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:50.336 04:02:58 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:50.336 04:02:58 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:50.336 04:02:58 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:50.336 04:02:58 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2544475 00:07:50.336 04:02:58 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:50.336 Waiting for target to run... 
00:07:50.336 04:02:58 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:07:50.336 04:02:58 json_config -- json_config/common.sh@25 -- # waitforlisten 2544475 /var/tmp/spdk_tgt.sock 00:07:50.336 04:02:58 json_config -- common/autotest_common.sh@829 -- # '[' -z 2544475 ']' 00:07:50.336 04:02:58 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:50.336 04:02:58 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:50.336 04:02:58 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:50.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:50.336 04:02:58 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:50.336 04:02:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:50.336 [2024-07-23 04:02:58.976994] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:07:50.336 [2024-07-23 04:02:58.977118] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2544475 ] 00:07:50.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:50.904 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:50.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:50.904 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:50.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:50.904 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:50.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:50.904 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:50.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:50.904 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:50.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:50.904 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:50.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:50.904 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:50.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:50.904 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:50.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:50.904 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:50.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:50.904 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:50.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:50.904 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:50.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:50.904 EAL: Requested device 0000:3d:02.3 cannot be used 
00:07:50.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:50.904 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:50.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:50.904 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:50.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:50.904 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:50.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:50.904 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:50.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:50.904 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:50.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:50.904 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:50.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:50.904 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:50.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:50.904 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:50.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:50.904 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:50.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:50.904 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:50.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:50.904 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:50.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:50.904 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:50.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:50.904 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:50.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:50.904 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:50.904 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:50.904 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:50.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:50.904 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:50.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:50.904 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:50.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:50.904 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:50.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:50.904 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:50.904 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:50.904 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:50.904 [2024-07-23 04:02:59.594625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.163 [2024-07-23 04:02:59.876687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.163 [2024-07-23 04:02:59.931376] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:07:51.163 [2024-07-23 04:02:59.939417] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:07:51.422 [2024-07-23 04:02:59.947432] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:07:51.681 [2024-07-23 04:03:00.326521] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:07:55.873 [2024-07-23 04:03:03.781614] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:55.873 [2024-07-23 04:03:03.781690] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:07:55.873 [2024-07-23 04:03:03.781711] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending 
base bdev arrival 00:07:55.873 [2024-07-23 04:03:03.789633] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:07:55.873 [2024-07-23 04:03:03.789684] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:07:55.873 [2024-07-23 04:03:03.797642] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:07:55.873 [2024-07-23 04:03:03.797681] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:07:55.873 [2024-07-23 04:03:03.805683] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "CryptoMallocBdev_AES_CBC" 00:07:55.873 [2024-07-23 04:03:03.805743] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: MallocForCryptoBdev 00:07:55.873 [2024-07-23 04:03:03.805770] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:07:58.405 [2024-07-23 04:03:06.789895] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:58.405 [2024-07-23 04:03:06.789972] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:58.405 [2024-07-23 04:03:06.789995] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042c80 00:07:58.405 [2024-07-23 04:03:06.790010] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:58.405 [2024-07-23 04:03:06.790563] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:58.405 [2024-07-23 04:03:06.790590] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:07:58.971 04:03:07 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:58.971 04:03:07 json_config -- common/autotest_common.sh@862 -- # return 0 00:07:58.971 04:03:07 json_config -- json_config/common.sh@26 -- # echo '' 00:07:58.971 00:07:58.971 04:03:07 json_config -- 
json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:07:58.971 04:03:07 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:07:58.971 INFO: Checking if target configuration is the same... 00:07:58.971 04:03:07 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:07:58.972 04:03:07 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:07:58.972 04:03:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:58.972 + '[' 2 -ne 2 ']' 00:07:58.972 +++ dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh 00:07:58.972 ++ readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/../.. 00:07:58.972 + rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:07:58.972 +++ basename /dev/fd/62 00:07:58.972 ++ mktemp /tmp/62.XXX 00:07:58.972 + tmp_file_1=/tmp/62.bo5 00:07:58.972 +++ basename /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:07:58.972 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:58.972 + tmp_file_2=/tmp/spdk_tgt_config.json.JVp 00:07:58.972 + ret=0 00:07:58.972 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:59.230 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:59.230 + diff -u /tmp/62.bo5 /tmp/spdk_tgt_config.json.JVp 00:07:59.230 + echo 'INFO: JSON config files are the same' 00:07:59.230 INFO: JSON config files are the same 00:07:59.230 + rm /tmp/62.bo5 /tmp/spdk_tgt_config.json.JVp 00:07:59.230 + exit 0 00:07:59.230 04:03:07 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:07:59.230 04:03:07 json_config -- 
json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:07:59.230 INFO: changing configuration and checking if this can be detected... 00:07:59.230 04:03:07 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:59.230 04:03:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:59.489 04:03:08 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:07:59.489 04:03:08 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:07:59.489 04:03:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:59.489 + '[' 2 -ne 2 ']' 00:07:59.489 +++ dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh 00:07:59.489 ++ readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/../.. 
00:07:59.489 + rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:07:59.489 +++ basename /dev/fd/62 00:07:59.489 ++ mktemp /tmp/62.XXX 00:07:59.489 + tmp_file_1=/tmp/62.BsN 00:07:59.489 +++ basename /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:07:59.489 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:59.489 + tmp_file_2=/tmp/spdk_tgt_config.json.gKY 00:07:59.489 + ret=0 00:07:59.489 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:59.747 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:00.006 + diff -u /tmp/62.BsN /tmp/spdk_tgt_config.json.gKY 00:08:00.006 + ret=1 00:08:00.006 + echo '=== Start of file: /tmp/62.BsN ===' 00:08:00.006 + cat /tmp/62.BsN 00:08:00.006 + echo '=== End of file: /tmp/62.BsN ===' 00:08:00.006 + echo '' 00:08:00.006 + echo '=== Start of file: /tmp/spdk_tgt_config.json.gKY ===' 00:08:00.006 + cat /tmp/spdk_tgt_config.json.gKY 00:08:00.006 + echo '=== End of file: /tmp/spdk_tgt_config.json.gKY ===' 00:08:00.006 + echo '' 00:08:00.006 + rm /tmp/62.BsN /tmp/spdk_tgt_config.json.gKY 00:08:00.006 + exit 1 00:08:00.006 04:03:08 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:08:00.006 INFO: configuration change detected. 
00:08:00.006 04:03:08 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:08:00.006 04:03:08 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:08:00.006 04:03:08 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:00.006 04:03:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:00.006 04:03:08 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:08:00.006 04:03:08 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:08:00.006 04:03:08 json_config -- json_config/json_config.sh@321 -- # [[ -n 2544475 ]] 00:08:00.006 04:03:08 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:08:00.006 04:03:08 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:08:00.006 04:03:08 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:00.006 04:03:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:00.006 04:03:08 json_config -- json_config/json_config.sh@190 -- # [[ 1 -eq 1 ]] 00:08:00.006 04:03:08 json_config -- json_config/json_config.sh@191 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:08:00.006 04:03:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:08:00.264 04:03:08 json_config -- json_config/json_config.sh@192 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:08:00.264 04:03:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:08:00.521 04:03:09 json_config -- json_config/json_config.sh@193 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:08:00.521 04:03:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete 
lvs_test/snapshot0 00:08:00.780 04:03:09 json_config -- json_config/json_config.sh@194 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:08:00.780 04:03:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:08:01.038 04:03:09 json_config -- json_config/json_config.sh@197 -- # uname -s 00:08:01.038 04:03:09 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:08:01.038 04:03:09 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:08:01.038 04:03:09 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:08:01.038 04:03:09 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:08:01.038 04:03:09 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:01.038 04:03:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:01.038 04:03:09 json_config -- json_config/json_config.sh@327 -- # killprocess 2544475 00:08:01.038 04:03:09 json_config -- common/autotest_common.sh@948 -- # '[' -z 2544475 ']' 00:08:01.038 04:03:09 json_config -- common/autotest_common.sh@952 -- # kill -0 2544475 00:08:01.038 04:03:09 json_config -- common/autotest_common.sh@953 -- # uname 00:08:01.038 04:03:09 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:01.038 04:03:09 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2544475 00:08:01.038 04:03:09 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:01.038 04:03:09 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:01.038 04:03:09 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2544475' 00:08:01.038 killing process with pid 2544475 00:08:01.038 04:03:09 json_config -- common/autotest_common.sh@967 -- # kill 2544475 00:08:01.038 04:03:09 json_config -- 
common/autotest_common.sh@972 -- # wait 2544475 00:08:06.321 04:03:14 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:08:06.321 04:03:14 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:08:06.321 04:03:14 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:06.321 04:03:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:06.321 04:03:14 json_config -- json_config/json_config.sh@332 -- # return 0 00:08:06.321 04:03:14 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:08:06.321 INFO: Success 00:08:06.321 00:08:06.321 real 0m40.216s 00:08:06.322 user 0m44.574s 00:08:06.322 sys 0m4.575s 00:08:06.322 04:03:14 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:06.322 04:03:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:06.322 ************************************ 00:08:06.322 END TEST json_config 00:08:06.322 ************************************ 00:08:06.322 04:03:14 -- common/autotest_common.sh@1142 -- # return 0 00:08:06.322 04:03:14 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:08:06.322 04:03:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:06.322 04:03:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.322 04:03:14 -- common/autotest_common.sh@10 -- # set +x 00:08:06.322 ************************************ 00:08:06.322 START TEST json_config_extra_key 00:08:06.322 ************************************ 00:08:06.322 04:03:14 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:08:06.322 04:03:14 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:08:06.322 04:03:14 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:08:06.322 04:03:14 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:06.322 04:03:14 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:06.322 04:03:14 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:06.322 04:03:14 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:06.322 04:03:14 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:06.322 04:03:14 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:06.322 04:03:14 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:06.322 04:03:14 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:06.322 04:03:14 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:06.322 04:03:14 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:06.322 04:03:14 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:08:06.322 04:03:14 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:08:06.322 04:03:14 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:06.322 04:03:14 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:06.322 04:03:14 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:06.322 04:03:14 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:06.322 04:03:14 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:08:06.322 04:03:14 
json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:06.322 04:03:14 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:06.322 04:03:14 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:06.322 04:03:14 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.322 04:03:14 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.322 04:03:14 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.322 04:03:14 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:08:06.322 04:03:14 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.322 04:03:14 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:08:06.322 04:03:14 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:06.322 04:03:14 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:06.322 04:03:14 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:06.322 04:03:14 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:06.322 04:03:14 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:06.322 04:03:14 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:06.322 04:03:14 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:06.322 04:03:14 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:06.322 04:03:14 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/common.sh 00:08:06.322 04:03:14 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:08:06.322 04:03:14 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:08:06.322 04:03:14 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:08:06.322 04:03:14 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:08:06.322 04:03:14 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 
1024') 00:08:06.322 04:03:14 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:08:06.322 04:03:14 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json') 00:08:06.322 04:03:14 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:08:06.322 04:03:14 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:06.322 04:03:14 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:08:06.322 INFO: launching applications... 00:08:06.322 04:03:14 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json 00:08:06.322 04:03:14 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:08:06.322 04:03:14 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:08:06.322 04:03:14 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:06.322 04:03:14 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:06.322 04:03:14 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:08:06.322 04:03:14 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:06.322 04:03:14 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:06.322 04:03:14 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2547955 00:08:06.322 04:03:14 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:06.322 Waiting for target to run... 
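The `waitforlisten` step traced here (common/autotest_common.sh@829-838) blocks until the target is up on its UNIX domain socket. A hedged sketch of the idea follows: it polls for the socket node with a retry budget, instead of issuing a real `rpc.py` probe; the python3 helper, paths, and retry count are demo assumptions.

```shell
# Sketch of the waitforlisten idea: poll until a UNIX-domain socket
# path appears, up to a retry budget. Real SPDK code probes the RPC
# server itself; this only checks for the socket node.
waitforlisten_sketch() {
  local sock=$1 max_retries=${2:-100} i=0
  while (( i < max_retries )); do
    [ -S "$sock" ] && return 0        # socket node exists: target is up
    sleep 0.1
    (( ++i ))
  done
  return 1                            # timed out waiting for the listener
}
sock=$(mktemp -u /tmp/demo.sock.XXXXXX)
# Demo "target": bind the socket after a short delay (assumes python3).
( sleep 0.3
  python3 -c "import socket,sys; socket.socket(socket.AF_UNIX).bind(sys.argv[1])" "$sock" ) &
waitforlisten_sketch "$sock" && echo "listener up"
rm -f "$sock"
```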
00:08:06.322 04:03:14 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2547955 /var/tmp/spdk_tgt.sock 00:08:06.322 04:03:14 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 2547955 ']' 00:08:06.322 04:03:14 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:06.322 04:03:14 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json 00:08:06.322 04:03:14 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:06.322 04:03:14 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:06.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:06.322 04:03:14 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:06.322 04:03:14 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:06.322 [2024-07-23 04:03:14.822343] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:08:06.322 [2024-07-23 04:03:14.822468] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2547955 ] 00:08:06.582 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.582 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:06.582 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.582 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:06.582 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.582 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:06.582 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.582 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:06.582 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.582 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:06.582 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.582 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:06.582 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.582 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:06.582 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.582 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:06.582 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.582 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:06.582 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.582 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:06.582 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.582 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:06.582 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.582 EAL: Requested device 0000:3d:02.3 cannot be used 
00:08:06.582 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.582 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:06.582 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.582 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:06.582 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.582 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:06.582 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.582 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:06.582 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.582 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:06.582 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.582 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:06.582 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.582 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:06.582 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.582 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:06.582 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.582 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:06.582 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.582 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:06.582 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.582 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:06.582 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.582 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:06.582 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.582 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:06.582 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.582 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:06.582 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.582 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:06.582 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.582 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:06.582 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.582 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:06.582 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.582 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:06.582 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.582 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:06.582 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:06.582 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:06.841 [2024-07-23 04:03:15.438880] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.100 [2024-07-23 04:03:15.711645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.035 04:03:16 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:08.035 04:03:16 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:08:08.035 04:03:16 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:08:08.035 00:08:08.035 04:03:16 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:08:08.035 INFO: shutting down applications... 
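The `json_config_test_shutdown_app` sequence this run exercises (SIGINT the target, then re-check the PID with `kill -0` every 0.5 s for up to 30 tries) can be sketched as below. The backgrounded `sleep` is a hypothetical stand-in for the spdk_tgt process.

```shell
# Minimal sketch of the polling shutdown in json_config/common.sh:
# send SIGINT, then poll with `kill -0` until the process is gone
# or the ~15 s budget (30 x 0.5 s) runs out.
shutdown_app() {
  local pid=$1 i
  kill -SIGINT "$pid" 2>/dev/null
  for (( i = 0; i < 30; i++ )); do
    kill -0 "$pid" 2>/dev/null || { echo "SPDK target shutdown done"; return 0; }
    sleep 0.5
  done
  return 1    # still running after the retry budget
}
sleep 30 &    # demo child standing in for spdk_tgt
shutdown_app $!
```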
00:08:08.035 04:03:16 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:08:08.035 04:03:16 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:08:08.035 04:03:16 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:08.035 04:03:16 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2547955 ]] 00:08:08.035 04:03:16 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2547955 00:08:08.035 04:03:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:08.035 04:03:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:08.035 04:03:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2547955 00:08:08.035 04:03:16 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:08.603 04:03:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:08.603 04:03:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:08.603 04:03:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2547955 00:08:08.603 04:03:17 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:09.169 04:03:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:09.169 04:03:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:09.169 04:03:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2547955 00:08:09.169 04:03:17 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:09.737 04:03:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:09.737 04:03:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:09.737 04:03:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2547955 00:08:09.737 04:03:18 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:09.737 04:03:18 json_config_extra_key -- 
json_config/common.sh@43 -- # break 00:08:09.737 04:03:18 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:09.737 04:03:18 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:09.737 SPDK target shutdown done 00:08:09.737 04:03:18 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:08:09.737 Success 00:08:09.737 00:08:09.737 real 0m3.733s 00:08:09.737 user 0m3.196s 00:08:09.737 sys 0m0.900s 00:08:09.737 04:03:18 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.737 04:03:18 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:09.737 ************************************ 00:08:09.737 END TEST json_config_extra_key 00:08:09.737 ************************************ 00:08:09.737 04:03:18 -- common/autotest_common.sh@1142 -- # return 0 00:08:09.737 04:03:18 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:09.737 04:03:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:09.737 04:03:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.737 04:03:18 -- common/autotest_common.sh@10 -- # set +x 00:08:09.737 ************************************ 00:08:09.737 START TEST alias_rpc 00:08:09.737 ************************************ 00:08:09.737 04:03:18 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:09.737 * Looking for test storage... 
00:08:09.737 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc 00:08:09.737 04:03:18 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:09.737 04:03:18 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2548626 00:08:09.737 04:03:18 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2548626 00:08:09.737 04:03:18 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:08:09.737 04:03:18 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 2548626 ']' 00:08:09.737 04:03:18 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.737 04:03:18 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:09.737 04:03:18 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.737 04:03:18 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:09.737 04:03:18 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.995 [2024-07-23 04:03:18.626214] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:08:09.995 [2024-07-23 04:03:18.626330] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2548626 ] 00:08:10.254 [2024-07-23 04:03:18.853937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.513 [2024-07-23 04:03:19.113430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.889 04:03:20 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:11.889 04:03:20 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:11.889 04:03:20 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_config -i 00:08:11.889 04:03:20 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2548626 00:08:11.889 04:03:20 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 2548626 ']' 00:08:11.889 04:03:20 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 2548626 00:08:11.889 04:03:20 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:08:11.889 04:03:20 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:11.889 04:03:20 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2548626 00:08:11.889 04:03:20 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:11.889 04:03:20 
alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:11.889 04:03:20 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2548626' 00:08:11.889 killing process with pid 2548626 00:08:11.889 04:03:20 alias_rpc -- common/autotest_common.sh@967 -- # kill 2548626 00:08:11.889 04:03:20 alias_rpc -- common/autotest_common.sh@972 -- # wait 2548626 00:08:16.080 00:08:16.080 real 0m5.615s 00:08:16.080 user 0m5.525s 00:08:16.080 sys 0m0.772s 00:08:16.080 04:03:24 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:16.080 04:03:24 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:16.080 ************************************ 00:08:16.080 END TEST alias_rpc 00:08:16.080 ************************************ 00:08:16.080 04:03:24 -- common/autotest_common.sh@1142 -- # return 0 00:08:16.080 04:03:24 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:08:16.080 04:03:24 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/tcp.sh 00:08:16.080 04:03:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:16.080 04:03:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:16.080 04:03:24 -- common/autotest_common.sh@10 -- # set +x 00:08:16.080 ************************************ 00:08:16.080 START TEST spdkcli_tcp 00:08:16.080 ************************************ 00:08:16.080 04:03:24 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/tcp.sh 00:08:16.080 * Looking for test storage... 
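The `killprocess` guard that the alias_rpc teardown above walks through (common/autotest_common.sh@948-972) can be sketched as follows: confirm the PID is alive, refuse to signal a bare `sudo` wrapper, then terminate and reap it. The backgrounded `sleep` is a hypothetical stand-in for the reactor process.

```shell
# Sketch of the killprocess pattern exercised in the log: sanity-check
# the PID and its process name before killing, then wait for exit.
killprocess() {
  local pid=$1
  [ -n "$pid" ] || return 1
  kill -0 "$pid" 2>/dev/null || return 1          # nothing to kill
  local name
  name=$(ps --no-headers -o comm= "$pid")
  [ "$name" = "sudo" ] && return 1                # never SIGTERM the sudo wrapper itself
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null
  return 0
}
sleep 30 &    # demo child standing in for spdk_tgt (reactor_0)
killprocess $!
```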
00:08:16.080 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli 00:08:16.080 04:03:24 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/common.sh 00:08:16.080 04:03:24 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:08:16.080 04:03:24 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/clear_config.py 00:08:16.080 04:03:24 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:08:16.080 04:03:24 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:08:16.081 04:03:24 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:08:16.081 04:03:24 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:08:16.081 04:03:24 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:16.081 04:03:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:16.081 04:03:24 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2549724 00:08:16.081 04:03:24 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2549724 00:08:16.081 04:03:24 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:08:16.081 04:03:24 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 2549724 ']' 00:08:16.081 04:03:24 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.081 04:03:24 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:16.081 04:03:24 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:16.081 04:03:24 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:16.081 04:03:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:16.081 [2024-07-23 04:03:24.333295] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:16.081 [2024-07-23 04:03:24.333415] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2549724 ] 00:08:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.081 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.081 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.081 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.081 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.081 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.081 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.081 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.081 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.081 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.081 EAL: Requested device 0000:3d:02.1 cannot be used 
00:08:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.081 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.081 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.081 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.081 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.081 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.081 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.081 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.081 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.081 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.081 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.081 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.081 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.081 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.081 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:16.081 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.081 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.081 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.081 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.081 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.081 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.081 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.081 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.081 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:16.081 [2024-07-23 04:03:24.560317] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:16.081 [2024-07-23 04:03:24.845511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.081 [2024-07-23 04:03:24.845520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.458 04:03:26 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:17.458 04:03:26 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:08:17.458 04:03:26 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2549998 00:08:17.458 04:03:26 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:08:17.459 04:03:26 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:08:17.718 [ 00:08:17.718 
"bdev_malloc_delete", 00:08:17.718 "bdev_malloc_create", 00:08:17.718 "bdev_null_resize", 00:08:17.718 "bdev_null_delete", 00:08:17.718 "bdev_null_create", 00:08:17.718 "bdev_nvme_cuse_unregister", 00:08:17.718 "bdev_nvme_cuse_register", 00:08:17.718 "bdev_opal_new_user", 00:08:17.718 "bdev_opal_set_lock_state", 00:08:17.718 "bdev_opal_delete", 00:08:17.718 "bdev_opal_get_info", 00:08:17.718 "bdev_opal_create", 00:08:17.718 "bdev_nvme_opal_revert", 00:08:17.718 "bdev_nvme_opal_init", 00:08:17.718 "bdev_nvme_send_cmd", 00:08:17.718 "bdev_nvme_get_path_iostat", 00:08:17.718 "bdev_nvme_get_mdns_discovery_info", 00:08:17.718 "bdev_nvme_stop_mdns_discovery", 00:08:17.718 "bdev_nvme_start_mdns_discovery", 00:08:17.718 "bdev_nvme_set_multipath_policy", 00:08:17.718 "bdev_nvme_set_preferred_path", 00:08:17.718 "bdev_nvme_get_io_paths", 00:08:17.718 "bdev_nvme_remove_error_injection", 00:08:17.718 "bdev_nvme_add_error_injection", 00:08:17.718 "bdev_nvme_get_discovery_info", 00:08:17.718 "bdev_nvme_stop_discovery", 00:08:17.718 "bdev_nvme_start_discovery", 00:08:17.718 "bdev_nvme_get_controller_health_info", 00:08:17.718 "bdev_nvme_disable_controller", 00:08:17.718 "bdev_nvme_enable_controller", 00:08:17.718 "bdev_nvme_reset_controller", 00:08:17.718 "bdev_nvme_get_transport_statistics", 00:08:17.718 "bdev_nvme_apply_firmware", 00:08:17.718 "bdev_nvme_detach_controller", 00:08:17.718 "bdev_nvme_get_controllers", 00:08:17.718 "bdev_nvme_attach_controller", 00:08:17.718 "bdev_nvme_set_hotplug", 00:08:17.718 "bdev_nvme_set_options", 00:08:17.718 "bdev_passthru_delete", 00:08:17.718 "bdev_passthru_create", 00:08:17.718 "bdev_lvol_set_parent_bdev", 00:08:17.718 "bdev_lvol_set_parent", 00:08:17.718 "bdev_lvol_check_shallow_copy", 00:08:17.718 "bdev_lvol_start_shallow_copy", 00:08:17.718 "bdev_lvol_grow_lvstore", 00:08:17.718 "bdev_lvol_get_lvols", 00:08:17.718 "bdev_lvol_get_lvstores", 00:08:17.718 "bdev_lvol_delete", 00:08:17.718 "bdev_lvol_set_read_only", 00:08:17.718 
"bdev_lvol_resize", 00:08:17.718 "bdev_lvol_decouple_parent", 00:08:17.718 "bdev_lvol_inflate", 00:08:17.718 "bdev_lvol_rename", 00:08:17.718 "bdev_lvol_clone_bdev", 00:08:17.718 "bdev_lvol_clone", 00:08:17.718 "bdev_lvol_snapshot", 00:08:17.718 "bdev_lvol_create", 00:08:17.718 "bdev_lvol_delete_lvstore", 00:08:17.718 "bdev_lvol_rename_lvstore", 00:08:17.718 "bdev_lvol_create_lvstore", 00:08:17.718 "bdev_raid_set_options", 00:08:17.718 "bdev_raid_remove_base_bdev", 00:08:17.718 "bdev_raid_add_base_bdev", 00:08:17.718 "bdev_raid_delete", 00:08:17.718 "bdev_raid_create", 00:08:17.718 "bdev_raid_get_bdevs", 00:08:17.718 "bdev_error_inject_error", 00:08:17.718 "bdev_error_delete", 00:08:17.718 "bdev_error_create", 00:08:17.718 "bdev_split_delete", 00:08:17.718 "bdev_split_create", 00:08:17.718 "bdev_delay_delete", 00:08:17.718 "bdev_delay_create", 00:08:17.718 "bdev_delay_update_latency", 00:08:17.718 "bdev_zone_block_delete", 00:08:17.718 "bdev_zone_block_create", 00:08:17.718 "blobfs_create", 00:08:17.718 "blobfs_detect", 00:08:17.718 "blobfs_set_cache_size", 00:08:17.718 "bdev_crypto_delete", 00:08:17.718 "bdev_crypto_create", 00:08:17.718 "bdev_compress_delete", 00:08:17.718 "bdev_compress_create", 00:08:17.718 "bdev_compress_get_orphans", 00:08:17.718 "bdev_aio_delete", 00:08:17.718 "bdev_aio_rescan", 00:08:17.718 "bdev_aio_create", 00:08:17.718 "bdev_ftl_set_property", 00:08:17.718 "bdev_ftl_get_properties", 00:08:17.718 "bdev_ftl_get_stats", 00:08:17.718 "bdev_ftl_unmap", 00:08:17.718 "bdev_ftl_unload", 00:08:17.718 "bdev_ftl_delete", 00:08:17.718 "bdev_ftl_load", 00:08:17.718 "bdev_ftl_create", 00:08:17.718 "bdev_virtio_attach_controller", 00:08:17.718 "bdev_virtio_scsi_get_devices", 00:08:17.718 "bdev_virtio_detach_controller", 00:08:17.718 "bdev_virtio_blk_set_hotplug", 00:08:17.718 "bdev_iscsi_delete", 00:08:17.718 "bdev_iscsi_create", 00:08:17.718 "bdev_iscsi_set_options", 00:08:17.718 "accel_error_inject_error", 00:08:17.718 "ioat_scan_accel_module", 
00:08:17.718 "dsa_scan_accel_module", 00:08:17.718 "iaa_scan_accel_module", 00:08:17.718 "dpdk_cryptodev_get_driver", 00:08:17.718 "dpdk_cryptodev_set_driver", 00:08:17.718 "dpdk_cryptodev_scan_accel_module", 00:08:17.718 "compressdev_scan_accel_module", 00:08:17.718 "keyring_file_remove_key", 00:08:17.718 "keyring_file_add_key", 00:08:17.718 "keyring_linux_set_options", 00:08:17.718 "iscsi_get_histogram", 00:08:17.718 "iscsi_enable_histogram", 00:08:17.718 "iscsi_set_options", 00:08:17.718 "iscsi_get_auth_groups", 00:08:17.718 "iscsi_auth_group_remove_secret", 00:08:17.718 "iscsi_auth_group_add_secret", 00:08:17.718 "iscsi_delete_auth_group", 00:08:17.718 "iscsi_create_auth_group", 00:08:17.718 "iscsi_set_discovery_auth", 00:08:17.718 "iscsi_get_options", 00:08:17.718 "iscsi_target_node_request_logout", 00:08:17.718 "iscsi_target_node_set_redirect", 00:08:17.718 "iscsi_target_node_set_auth", 00:08:17.718 "iscsi_target_node_add_lun", 00:08:17.718 "iscsi_get_stats", 00:08:17.718 "iscsi_get_connections", 00:08:17.718 "iscsi_portal_group_set_auth", 00:08:17.718 "iscsi_start_portal_group", 00:08:17.718 "iscsi_delete_portal_group", 00:08:17.718 "iscsi_create_portal_group", 00:08:17.718 "iscsi_get_portal_groups", 00:08:17.718 "iscsi_delete_target_node", 00:08:17.718 "iscsi_target_node_remove_pg_ig_maps", 00:08:17.718 "iscsi_target_node_add_pg_ig_maps", 00:08:17.718 "iscsi_create_target_node", 00:08:17.718 "iscsi_get_target_nodes", 00:08:17.718 "iscsi_delete_initiator_group", 00:08:17.718 "iscsi_initiator_group_remove_initiators", 00:08:17.718 "iscsi_initiator_group_add_initiators", 00:08:17.718 "iscsi_create_initiator_group", 00:08:17.718 "iscsi_get_initiator_groups", 00:08:17.718 "nvmf_set_crdt", 00:08:17.718 "nvmf_set_config", 00:08:17.718 "nvmf_set_max_subsystems", 00:08:17.718 "nvmf_stop_mdns_prr", 00:08:17.718 "nvmf_publish_mdns_prr", 00:08:17.718 "nvmf_subsystem_get_listeners", 00:08:17.718 "nvmf_subsystem_get_qpairs", 00:08:17.718 "nvmf_subsystem_get_controllers", 
00:08:17.718 "nvmf_get_stats", 00:08:17.719 "nvmf_get_transports", 00:08:17.719 "nvmf_create_transport", 00:08:17.719 "nvmf_get_targets", 00:08:17.719 "nvmf_delete_target", 00:08:17.719 "nvmf_create_target", 00:08:17.719 "nvmf_subsystem_allow_any_host", 00:08:17.719 "nvmf_subsystem_remove_host", 00:08:17.719 "nvmf_subsystem_add_host", 00:08:17.719 "nvmf_ns_remove_host", 00:08:17.719 "nvmf_ns_add_host", 00:08:17.719 "nvmf_subsystem_remove_ns", 00:08:17.719 "nvmf_subsystem_add_ns", 00:08:17.719 "nvmf_subsystem_listener_set_ana_state", 00:08:17.719 "nvmf_discovery_get_referrals", 00:08:17.719 "nvmf_discovery_remove_referral", 00:08:17.719 "nvmf_discovery_add_referral", 00:08:17.719 "nvmf_subsystem_remove_listener", 00:08:17.719 "nvmf_subsystem_add_listener", 00:08:17.719 "nvmf_delete_subsystem", 00:08:17.719 "nvmf_create_subsystem", 00:08:17.719 "nvmf_get_subsystems", 00:08:17.719 "env_dpdk_get_mem_stats", 00:08:17.719 "nbd_get_disks", 00:08:17.719 "nbd_stop_disk", 00:08:17.719 "nbd_start_disk", 00:08:17.719 "ublk_recover_disk", 00:08:17.719 "ublk_get_disks", 00:08:17.719 "ublk_stop_disk", 00:08:17.719 "ublk_start_disk", 00:08:17.719 "ublk_destroy_target", 00:08:17.719 "ublk_create_target", 00:08:17.719 "virtio_blk_create_transport", 00:08:17.719 "virtio_blk_get_transports", 00:08:17.719 "vhost_controller_set_coalescing", 00:08:17.719 "vhost_get_controllers", 00:08:17.719 "vhost_delete_controller", 00:08:17.719 "vhost_create_blk_controller", 00:08:17.719 "vhost_scsi_controller_remove_target", 00:08:17.719 "vhost_scsi_controller_add_target", 00:08:17.719 "vhost_start_scsi_controller", 00:08:17.719 "vhost_create_scsi_controller", 00:08:17.719 "thread_set_cpumask", 00:08:17.719 "framework_get_governor", 00:08:17.719 "framework_get_scheduler", 00:08:17.719 "framework_set_scheduler", 00:08:17.719 "framework_get_reactors", 00:08:17.719 "thread_get_io_channels", 00:08:17.719 "thread_get_pollers", 00:08:17.719 "thread_get_stats", 00:08:17.719 
"framework_monitor_context_switch", 00:08:17.719 "spdk_kill_instance", 00:08:17.719 "log_enable_timestamps", 00:08:17.719 "log_get_flags", 00:08:17.719 "log_clear_flag", 00:08:17.719 "log_set_flag", 00:08:17.719 "log_get_level", 00:08:17.719 "log_set_level", 00:08:17.719 "log_get_print_level", 00:08:17.719 "log_set_print_level", 00:08:17.719 "framework_enable_cpumask_locks", 00:08:17.719 "framework_disable_cpumask_locks", 00:08:17.719 "framework_wait_init", 00:08:17.719 "framework_start_init", 00:08:17.719 "scsi_get_devices", 00:08:17.719 "bdev_get_histogram", 00:08:17.719 "bdev_enable_histogram", 00:08:17.719 "bdev_set_qos_limit", 00:08:17.719 "bdev_set_qd_sampling_period", 00:08:17.719 "bdev_get_bdevs", 00:08:17.719 "bdev_reset_iostat", 00:08:17.719 "bdev_get_iostat", 00:08:17.719 "bdev_examine", 00:08:17.719 "bdev_wait_for_examine", 00:08:17.719 "bdev_set_options", 00:08:17.719 "notify_get_notifications", 00:08:17.719 "notify_get_types", 00:08:17.719 "accel_get_stats", 00:08:17.719 "accel_set_options", 00:08:17.719 "accel_set_driver", 00:08:17.719 "accel_crypto_key_destroy", 00:08:17.719 "accel_crypto_keys_get", 00:08:17.719 "accel_crypto_key_create", 00:08:17.719 "accel_assign_opc", 00:08:17.719 "accel_get_module_info", 00:08:17.719 "accel_get_opc_assignments", 00:08:17.719 "vmd_rescan", 00:08:17.719 "vmd_remove_device", 00:08:17.719 "vmd_enable", 00:08:17.719 "sock_get_default_impl", 00:08:17.719 "sock_set_default_impl", 00:08:17.719 "sock_impl_set_options", 00:08:17.719 "sock_impl_get_options", 00:08:17.719 "iobuf_get_stats", 00:08:17.719 "iobuf_set_options", 00:08:17.719 "framework_get_pci_devices", 00:08:17.719 "framework_get_config", 00:08:17.719 "framework_get_subsystems", 00:08:17.719 "trace_get_info", 00:08:17.719 "trace_get_tpoint_group_mask", 00:08:17.719 "trace_disable_tpoint_group", 00:08:17.719 "trace_enable_tpoint_group", 00:08:17.719 "trace_clear_tpoint_mask", 00:08:17.719 "trace_set_tpoint_mask", 00:08:17.719 "keyring_get_keys", 00:08:17.719 
"spdk_get_version", 00:08:17.719 "rpc_get_methods" 00:08:17.719 ] 00:08:17.719 04:03:26 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:08:17.719 04:03:26 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:17.719 04:03:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:17.719 04:03:26 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:17.719 04:03:26 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2549724 00:08:17.719 04:03:26 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 2549724 ']' 00:08:17.719 04:03:26 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 2549724 00:08:17.719 04:03:26 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:08:17.719 04:03:26 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:17.719 04:03:26 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2549724 00:08:17.719 04:03:26 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:17.719 04:03:26 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:17.719 04:03:26 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2549724' 00:08:17.719 killing process with pid 2549724 00:08:17.719 04:03:26 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 2549724 00:08:17.719 04:03:26 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 2549724 00:08:21.044 00:08:21.044 real 0m5.628s 00:08:21.044 user 0m9.886s 00:08:21.044 sys 0m0.783s 00:08:21.044 04:03:29 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:21.044 04:03:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:21.044 ************************************ 00:08:21.044 END TEST spdkcli_tcp 00:08:21.044 ************************************ 00:08:21.044 04:03:29 -- common/autotest_common.sh@1142 -- # return 0 00:08:21.044 04:03:29 -- spdk/autotest.sh@180 -- # run_test 
dpdk_mem_utility /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:21.044 04:03:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:21.044 04:03:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:21.044 04:03:29 -- common/autotest_common.sh@10 -- # set +x 00:08:21.044 ************************************ 00:08:21.044 START TEST dpdk_mem_utility 00:08:21.044 ************************************ 00:08:21.044 04:03:29 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:21.303 * Looking for test storage... 00:08:21.303 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility 00:08:21.303 04:03:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:08:21.303 04:03:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2550663 00:08:21.303 04:03:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2550663 00:08:21.303 04:03:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:08:21.303 04:03:29 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 2550663 ']' 00:08:21.303 04:03:29 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.303 04:03:29 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:21.303 04:03:29 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
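The dpdk_mem_utility test that begins here has spdk_tgt write its DPDK memory statistics (via the `env_dpdk_get_mem_stats` RPC) to `/tmp/spdk_mem_dump.txt`, then runs `scripts/dpdk_mem_info.py` over it; the dump's `size: <MiB> name: <region>` lines appear further down in this log. A hypothetical parser (not SPDK's actual script) that summarizes such a dump might look like:

```python
import re

# Matches lines of the form "size: 84.521057 MiB name: bdev_io_2550663"
# as produced in the heap/mempool/memzone sections of the dump.
LINE_RE = re.compile(r"size:\s+([\d.]+)\s+MiB\s+name:\s+(\S+)")

def summarize(dump_text):
    """Return (total MiB, {region name: MiB}) across all matched lines,
    accumulating sizes for regions that appear more than once."""
    sizes = {}
    for size, name in LINE_RE.findall(dump_text):
        sizes[name] = sizes.get(name, 0.0) + float(size)
    return sum(sizes.values()), sizes

if __name__ == "__main__":
    sample = (
        "size: 84.521057 MiB name: bdev_io_2550663\n"
        "size: 50.003479 MiB name: msgpool_2550663\n"
    )
    total, per_name = summarize(sample)
    print(round(total, 6), sorted(per_name))
```

The sample region names are taken from the dump below; the aggregation logic is purely illustrative of how such a dump can be post-processed.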
00:08:21.303 04:03:29 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:21.303 04:03:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:21.303 [2024-07-23 04:03:30.022676] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:21.303 [2024-07-23 04:03:30.022804] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2550663 ] 00:08:21.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.562 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:21.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.562 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:21.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.562 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:21.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.562 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:21.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.562 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:21.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.562 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:21.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.562 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:21.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.562 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:21.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.562 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:21.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.562 EAL: Requested device 0000:3d:02.1 cannot be 
used 00:08:21.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.562 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:21.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.562 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:21.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.562 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:21.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.562 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:21.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.562 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:21.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.562 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:21.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.562 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:21.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.562 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:21.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.562 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:21.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.562 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:21.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.562 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:21.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.562 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:21.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.562 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:21.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.562 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:21.562 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.562 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:21.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.562 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:21.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.562 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:21.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.562 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:21.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.562 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:21.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.562 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:21.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.562 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:21.562 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:21.562 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:21.562 [2024-07-23 04:03:30.246862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.821 [2024-07-23 04:03:30.530938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.202 04:03:31 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:23.202 04:03:31 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:08:23.202 04:03:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:08:23.202 04:03:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:08:23.202 04:03:31 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.202 04:03:31 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:23.202 { 00:08:23.203 "filename": 
"/tmp/spdk_mem_dump.txt" 00:08:23.203 } 00:08:23.203 04:03:31 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.203 04:03:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:08:23.203 DPDK memory size 820.000000 MiB in 1 heap(s) 00:08:23.203 1 heaps totaling size 820.000000 MiB 00:08:23.203 size: 820.000000 MiB heap id: 0 00:08:23.203 end heaps---------- 00:08:23.203 8 mempools totaling size 598.116089 MiB 00:08:23.203 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:08:23.203 size: 158.602051 MiB name: PDU_data_out_Pool 00:08:23.203 size: 84.521057 MiB name: bdev_io_2550663 00:08:23.203 size: 51.011292 MiB name: evtpool_2550663 00:08:23.203 size: 50.003479 MiB name: msgpool_2550663 00:08:23.203 size: 21.763794 MiB name: PDU_Pool 00:08:23.203 size: 19.513306 MiB name: SCSI_TASK_Pool 00:08:23.203 size: 0.026123 MiB name: Session_Pool 00:08:23.203 end mempools------- 00:08:23.203 201 memzones totaling size 4.176453 MiB 00:08:23.203 size: 1.000366 MiB name: RG_ring_0_2550663 00:08:23.203 size: 1.000366 MiB name: RG_ring_1_2550663 00:08:23.203 size: 1.000366 MiB name: RG_ring_4_2550663 00:08:23.203 size: 1.000366 MiB name: RG_ring_5_2550663 00:08:23.203 size: 0.125366 MiB name: RG_ring_2_2550663 00:08:23.203 size: 0.015991 MiB name: RG_ring_3_2550663 00:08:23.203 size: 0.001160 MiB name: QAT_SYM_CAPA_GEN_1 00:08:23.203 size: 0.000305 MiB name: 0000:1a:01.0_qat 00:08:23.203 size: 0.000305 MiB name: 0000:1a:01.1_qat 00:08:23.203 size: 0.000305 MiB name: 0000:1a:01.2_qat 00:08:23.203 size: 0.000305 MiB name: 0000:1a:01.3_qat 00:08:23.203 size: 0.000305 MiB name: 0000:1a:01.4_qat 00:08:23.203 size: 0.000305 MiB name: 0000:1a:01.5_qat 00:08:23.203 size: 0.000305 MiB name: 0000:1a:01.6_qat 00:08:23.203 size: 0.000305 MiB name: 0000:1a:01.7_qat 00:08:23.203 size: 0.000305 MiB name: 0000:1a:02.0_qat 00:08:23.203 size: 0.000305 MiB name: 
0000:1a:02.1_qat 00:08:23.203 size: 0.000305 MiB name: 0000:1a:02.2_qat 00:08:23.203 size: 0.000305 MiB name: 0000:1a:02.3_qat 00:08:23.203 size: 0.000305 MiB name: 0000:1a:02.4_qat 00:08:23.203 size: 0.000305 MiB name: 0000:1a:02.5_qat 00:08:23.203 size: 0.000305 MiB name: 0000:1a:02.6_qat 00:08:23.203 size: 0.000305 MiB name: 0000:1a:02.7_qat 00:08:23.203 size: 0.000305 MiB name: 0000:1c:01.0_qat 00:08:23.203 size: 0.000305 MiB name: 0000:1c:01.1_qat 00:08:23.203 size: 0.000305 MiB name: 0000:1c:01.2_qat 00:08:23.203 size: 0.000305 MiB name: 0000:1c:01.3_qat 00:08:23.203 size: 0.000305 MiB name: 0000:1c:01.4_qat 00:08:23.203 size: 0.000305 MiB name: 0000:1c:01.5_qat 00:08:23.203 size: 0.000305 MiB name: 0000:1c:01.6_qat 00:08:23.203 size: 0.000305 MiB name: 0000:1c:01.7_qat 00:08:23.203 size: 0.000305 MiB name: 0000:1c:02.0_qat 00:08:23.203 size: 0.000305 MiB name: 0000:1c:02.1_qat 00:08:23.203 size: 0.000305 MiB name: 0000:1c:02.2_qat 00:08:23.203 size: 0.000305 MiB name: 0000:1c:02.3_qat 00:08:23.203 size: 0.000305 MiB name: 0000:1c:02.4_qat 00:08:23.203 size: 0.000305 MiB name: 0000:1c:02.5_qat 00:08:23.203 size: 0.000305 MiB name: 0000:1c:02.6_qat 00:08:23.203 size: 0.000305 MiB name: 0000:1c:02.7_qat 00:08:23.203 size: 0.000305 MiB name: 0000:1e:01.0_qat 00:08:23.203 size: 0.000305 MiB name: 0000:1e:01.1_qat 00:08:23.203 size: 0.000305 MiB name: 0000:1e:01.2_qat 00:08:23.203 size: 0.000305 MiB name: 0000:1e:01.3_qat 00:08:23.203 size: 0.000305 MiB name: 0000:1e:01.4_qat 00:08:23.203 size: 0.000305 MiB name: 0000:1e:01.5_qat 00:08:23.203 size: 0.000305 MiB name: 0000:1e:01.6_qat 00:08:23.203 size: 0.000305 MiB name: 0000:1e:01.7_qat 00:08:23.203 size: 0.000305 MiB name: 0000:1e:02.0_qat 00:08:23.203 size: 0.000305 MiB name: 0000:1e:02.1_qat 00:08:23.203 size: 0.000305 MiB name: 0000:1e:02.2_qat 00:08:23.203 size: 0.000305 MiB name: 0000:1e:02.3_qat 00:08:23.203 size: 0.000305 MiB name: 0000:1e:02.4_qat 00:08:23.203 size: 0.000305 MiB name: 0000:1e:02.5_qat 
size: 0.000305 MiB name: 0000:1e:02.6_qat
00:08:23.203 size: 0.000305 MiB name: 0000:1e:02.7_qat
00:08:23.203 size: 0.000183 MiB name: QAT_ASYM_CAPA_GEN_1
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_0
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_1
00:08:23.203 size: 0.000122 MiB name: rte_compressdev_data_0
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_2
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_3
00:08:23.203 size: 0.000122 MiB name: rte_compressdev_data_1
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_4
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_5
00:08:23.203 size: 0.000122 MiB name: rte_compressdev_data_2
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_6
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_7
00:08:23.203 size: 0.000122 MiB name: rte_compressdev_data_3
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_8
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_9
00:08:23.203 size: 0.000122 MiB name: rte_compressdev_data_4
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_10
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_11
00:08:23.203 size: 0.000122 MiB name: rte_compressdev_data_5
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_12
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_13
00:08:23.203 size: 0.000122 MiB name: rte_compressdev_data_6
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_14
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_15
00:08:23.203 size: 0.000122 MiB name: rte_compressdev_data_7
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_16
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_17
00:08:23.203 size: 0.000122 MiB name: rte_compressdev_data_8
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_18
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_19
00:08:23.203 size: 0.000122 MiB name: rte_compressdev_data_9
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_20
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_21
00:08:23.203 size: 0.000122 MiB name: rte_compressdev_data_10
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_22
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_23
00:08:23.203 size: 0.000122 MiB name: rte_compressdev_data_11
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_24
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_25
00:08:23.203 size: 0.000122 MiB name: rte_compressdev_data_12
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_26
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_27
00:08:23.203 size: 0.000122 MiB name: rte_compressdev_data_13
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_28
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_29
00:08:23.203 size: 0.000122 MiB name: rte_compressdev_data_14
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_30
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_31
00:08:23.203 size: 0.000122 MiB name: rte_compressdev_data_15
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_32
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_33
00:08:23.203 size: 0.000122 MiB name: rte_compressdev_data_16
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_34
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_35
00:08:23.203 size: 0.000122 MiB name: rte_compressdev_data_17
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_36
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_37
00:08:23.203 size: 0.000122 MiB name: rte_compressdev_data_18
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_38
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_39
00:08:23.203 size: 0.000122 MiB name: rte_compressdev_data_19
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_40
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_41
00:08:23.203 size: 0.000122 MiB name: rte_compressdev_data_20
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_42
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_43
00:08:23.203 size: 0.000122 MiB name: rte_compressdev_data_21
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_44
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_45
00:08:23.203 size: 0.000122 MiB name: rte_compressdev_data_22
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_46
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_47
00:08:23.203 size: 0.000122 MiB name: rte_compressdev_data_23
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_48
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_49
00:08:23.203 size: 0.000122 MiB name: rte_compressdev_data_24
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_50
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_51
00:08:23.203 size: 0.000122 MiB name: rte_compressdev_data_25
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_52
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_53
00:08:23.203 size: 0.000122 MiB name: rte_compressdev_data_26
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_54
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_55
00:08:23.203 size: 0.000122 MiB name: rte_compressdev_data_27
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_56
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_57
00:08:23.203 size: 0.000122 MiB name: rte_compressdev_data_28
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_58
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_59
00:08:23.203 size: 0.000122 MiB name: rte_compressdev_data_29
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_60
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_61
00:08:23.203 size: 0.000122 MiB name: rte_compressdev_data_30
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_62
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_63
00:08:23.203 size: 0.000122 MiB name: rte_compressdev_data_31
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_64
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_65
00:08:23.203 size: 0.000122 MiB name: rte_compressdev_data_32
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_66
00:08:23.203 size: 0.000122 MiB name: rte_cryptodev_data_67
00:08:23.203 size: 0.000122 MiB name: rte_compressdev_data_33
00:08:23.204 size: 0.000122 MiB name: rte_cryptodev_data_68
00:08:23.204 size: 0.000122 MiB name: rte_cryptodev_data_69
00:08:23.204 size: 0.000122 MiB name: rte_compressdev_data_34
00:08:23.204 size: 0.000122 MiB name: rte_cryptodev_data_70
00:08:23.204 size: 0.000122 MiB name: rte_cryptodev_data_71
00:08:23.204 size: 0.000122 MiB name: rte_compressdev_data_35
00:08:23.204 size: 0.000122 MiB name: rte_cryptodev_data_72
00:08:23.204 size: 0.000122 MiB name: rte_cryptodev_data_73
00:08:23.204 size: 0.000122 MiB name: rte_compressdev_data_36
00:08:23.204 size: 0.000122 MiB name: rte_cryptodev_data_74
00:08:23.204 size: 0.000122 MiB name: rte_cryptodev_data_75
00:08:23.204 size: 0.000122 MiB name: rte_compressdev_data_37
00:08:23.204 size: 0.000122 MiB name: rte_cryptodev_data_76
00:08:23.204 size: 0.000122 MiB name: rte_cryptodev_data_77
00:08:23.204 size: 0.000122 MiB name: rte_compressdev_data_38
00:08:23.204 size: 0.000122 MiB name: rte_cryptodev_data_78
00:08:23.204 size: 0.000122 MiB name: rte_cryptodev_data_79
00:08:23.204 size: 0.000122 MiB name: rte_compressdev_data_39
00:08:23.204 size: 0.000122 MiB name: rte_cryptodev_data_80
00:08:23.204 size: 0.000122 MiB name: rte_cryptodev_data_81
00:08:23.204 size: 0.000122 MiB name: rte_compressdev_data_40
00:08:23.204 size: 0.000122 MiB name: rte_cryptodev_data_82
00:08:23.204 size: 0.000122 MiB name: rte_cryptodev_data_83
00:08:23.204 size: 0.000122 MiB name: rte_compressdev_data_41
00:08:23.204 size: 0.000122 MiB name: rte_cryptodev_data_84
00:08:23.204 size: 0.000122 MiB name: rte_cryptodev_data_85
00:08:23.204 size: 0.000122 MiB name: rte_compressdev_data_42
00:08:23.204 size: 0.000122 MiB name: rte_cryptodev_data_86
00:08:23.204 size: 0.000122 MiB name: rte_cryptodev_data_87
00:08:23.204 size: 0.000122 MiB name: rte_compressdev_data_43
00:08:23.204 size: 0.000122 MiB name: rte_cryptodev_data_88
00:08:23.204 size: 0.000122 MiB name: rte_cryptodev_data_89
00:08:23.204 size: 0.000122 MiB name: rte_compressdev_data_44
00:08:23.204 size: 0.000122 MiB name: rte_cryptodev_data_90
00:08:23.204 size: 0.000122 MiB name: rte_cryptodev_data_91
00:08:23.204 size: 0.000122 MiB name: rte_compressdev_data_45
00:08:23.204 size: 0.000122 MiB name: rte_cryptodev_data_92
00:08:23.204 size: 0.000122 MiB name: rte_cryptodev_data_93
00:08:23.204 size: 0.000122 MiB name: rte_compressdev_data_46
00:08:23.204 size: 0.000122 MiB name: rte_cryptodev_data_94
00:08:23.204 size: 0.000122 MiB name: rte_cryptodev_data_95
00:08:23.204 size: 0.000122 MiB name: rte_compressdev_data_47
00:08:23.204 size: 0.000061 MiB name: QAT_COMP_CAPA_GEN_1
00:08:23.204 end memzones-------
00:08:23.204 04:03:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0
00:08:23.204 heap id: 0 total size: 820.000000 MiB number of busy elements: 627 number of free elements: 17
00:08:23.204 list of free elements. size: 17.731079 MiB
00:08:23.204 element at address: 0x200000400000 with size: 1.999451 MiB
00:08:23.204 element at address: 0x200000800000 with size: 1.996887 MiB
00:08:23.204 element at address: 0x200007000000 with size: 1.995972 MiB
00:08:23.204 element at address: 0x20000b200000 with size: 1.995972 MiB
00:08:23.204 element at address: 0x200019100040 with size: 0.999939 MiB
00:08:23.204 element at address: 0x200019500040 with size: 0.999939 MiB
00:08:23.204 element at address: 0x200019900040 with size: 0.999939 MiB
00:08:23.204 element at address: 0x200019600000 with size: 0.999329 MiB
00:08:23.204 element at address: 0x200003e00000 with size: 0.996338 MiB
00:08:23.204 element at address: 0x200032200000 with size: 0.994324 MiB
00:08:23.204 element at address: 0x200018e00000 with size: 0.959656 MiB
00:08:23.204 element at address: 0x20001b000000 with size: 0.580994 MiB
00:08:23.204 element at address: 0x200019200000 with size: 0.491150 MiB
00:08:23.204 element at address: 0x200019a00000 with size: 0.485657 MiB
00:08:23.204 element at address: 0x200013800000 with size: 0.467651 MiB
00:08:23.204 element at address: 0x200028400000 with size: 0.395081 MiB
00:08:23.204 element at address: 0x200003a00000 with size: 0.372803 MiB
00:08:23.204 list of standard malloc elements. size: 199.935181 MiB
00:08:23.204 element at address: 0x20000b3fef80 with size: 132.000183 MiB
00:08:23.204 element at address: 0x2000071fef80 with size: 64.000183 MiB
00:08:23.204 element at address: 0x200018ffff80 with size: 1.000183 MiB
00:08:23.204 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:08:23.204 element at address: 0x2000197fff80 with size: 1.000183 MiB
00:08:23.204 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:08:23.204 element at address: 0x200000207480 with size: 0.062683 MiB
00:08:23.204 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:08:23.204 element at address: 0x2000003239c0 with size: 0.004456 MiB
00:08:23.204 element at address: 0x200000327740 with size: 0.004456 MiB
00:08:23.204 element at address: 0x20000032b4c0 with size: 0.004456 MiB
00:08:23.204 element at address: 0x20000032f240 with size: 0.004456 MiB
00:08:23.204 element at address: 0x200000332fc0 with size: 0.004456 MiB
00:08:23.204 element at address: 0x200000336d40 with size: 0.004456 MiB
00:08:23.204 element at address: 0x20000033aac0 with size: 0.004456 MiB
00:08:23.204 element at address: 0x20000033e840 with size: 0.004456 MiB
00:08:23.204 element at address: 0x2000003425c0 with size: 0.004456 MiB
00:08:23.204 element at address: 0x200000346340 with size: 0.004456 MiB
00:08:23.204 element at address: 0x20000034a0c0 with size: 0.004456 MiB
00:08:23.204 element at address: 0x20000034de40 with size: 0.004456 MiB
00:08:23.204 element at address: 0x200000351bc0 with size: 0.004456 MiB
00:08:23.204 element at address: 0x200000355940 with size: 0.004456 MiB
00:08:23.204 element at address: 0x2000003596c0 with size: 0.004456 MiB
00:08:23.204 element at address: 0x20000035d440 with size: 0.004456 MiB
00:08:23.204 element at address: 0x2000003611c0 with size: 0.004456 MiB
00:08:23.204 element at address: 0x200000364f40 with size: 0.004456 MiB
00:08:23.204 element at address: 0x200000368cc0 with size: 0.004456 MiB
00:08:23.204 element at address: 0x20000036ca40 with size: 0.004456 MiB
00:08:23.204 element at address: 0x2000003707c0 with size: 0.004456 MiB
00:08:23.204 element at address: 0x200000374540 with size: 0.004456 MiB
00:08:23.204 element at address: 0x2000003782c0 with size: 0.004456 MiB
00:08:23.204 element at address: 0x20000037c040 with size: 0.004456 MiB
00:08:23.204 element at address: 0x20000037fdc0 with size: 0.004456 MiB
00:08:23.204 element at address: 0x200000383b40 with size: 0.004456 MiB
00:08:23.204 element at address: 0x2000003878c0 with size: 0.004456 MiB
00:08:23.204 element at address: 0x20000038b640 with size: 0.004456 MiB
00:08:23.204 element at address: 0x20000038f3c0 with size: 0.004456 MiB
00:08:23.204 element at address: 0x200000393140 with size: 0.004456 MiB
00:08:23.204 element at address: 0x200000396ec0 with size: 0.004456 MiB
00:08:23.204 element at address: 0x20000039ac40 with size: 0.004456 MiB
00:08:23.204 element at address: 0x20000039e9c0 with size: 0.004456 MiB
00:08:23.204 element at address: 0x2000003a2740 with size: 0.004456 MiB
00:08:23.204 element at address: 0x2000003a64c0 with size: 0.004456 MiB
00:08:23.204 element at address: 0x2000003aa240 with size: 0.004456 MiB
00:08:23.204 element at address: 0x2000003adfc0 with size: 0.004456 MiB
00:08:23.204 element at address: 0x2000003b1d40 with size: 0.004456 MiB
00:08:23.204 element at address: 0x2000003b5ac0 with size: 0.004456 MiB
00:08:23.204 element at address: 0x2000003b9840 with size: 0.004456 MiB
00:08:23.204 element at address: 0x2000003bd5c0 with size: 0.004456 MiB
00:08:23.204 element at address: 0x2000003c1340 with size: 0.004456 MiB
00:08:23.204 element at address: 0x2000003c50c0 with size: 0.004456 MiB
00:08:23.204 element at address: 0x2000003c8e40 with size: 0.004456 MiB
00:08:23.204 element at address: 0x2000003ccbc0 with size: 0.004456 MiB
00:08:23.204 element at address: 0x2000003d0940 with size: 0.004456 MiB
00:08:23.204 element at address: 0x2000003d46c0 with size: 0.004456 MiB
00:08:23.204 element at address: 0x2000003d8c40 with size: 0.004456 MiB
00:08:23.204 element at address: 0x200000321840 with size: 0.004089 MiB
00:08:23.204 element at address: 0x200000322900 with size: 0.004089 MiB
00:08:23.204 element at address: 0x2000003255c0 with size: 0.004089 MiB
00:08:23.204 element at address: 0x200000326680 with size: 0.004089 MiB
00:08:23.204 element at address: 0x200000329340 with size: 0.004089 MiB
00:08:23.204 element at address: 0x20000032a400 with size: 0.004089 MiB
00:08:23.204 element at address: 0x20000032d0c0 with size: 0.004089 MiB
00:08:23.204 element at address: 0x20000032e180 with size: 0.004089 MiB
00:08:23.204 element at address: 0x200000330e40 with size: 0.004089 MiB
00:08:23.204 element at address: 0x200000331f00 with size: 0.004089 MiB
00:08:23.204 element at address: 0x200000334bc0 with size: 0.004089 MiB
00:08:23.204 element at address: 0x200000335c80 with size: 0.004089 MiB
00:08:23.204 element at address: 0x200000338940 with size: 0.004089 MiB
00:08:23.204 element at address: 0x200000339a00 with size: 0.004089 MiB
00:08:23.204 element at address: 0x20000033c6c0 with size: 0.004089 MiB
00:08:23.204 element at address: 0x20000033d780 with size: 0.004089 MiB
00:08:23.204 element at address: 0x200000340440 with size: 0.004089 MiB
00:08:23.204 element at address: 0x200000341500 with size: 0.004089 MiB
00:08:23.204 element at address: 0x2000003441c0 with size: 0.004089 MiB
00:08:23.204 element at address: 0x200000345280 with size: 0.004089 MiB
00:08:23.204 element at address: 0x200000347f40 with size: 0.004089 MiB
00:08:23.204 element at address: 0x200000349000 with size: 0.004089 MiB
00:08:23.204 element at address: 0x20000034bcc0 with size: 0.004089 MiB
00:08:23.204 element at address: 0x20000034cd80 with size: 0.004089 MiB
00:08:23.204 element at address: 0x20000034fa40 with size: 0.004089 MiB
00:08:23.204 element at address: 0x200000350b00 with size: 0.004089 MiB
00:08:23.204 element at address: 0x2000003537c0 with size: 0.004089 MiB
00:08:23.204 element at address: 0x200000354880 with size: 0.004089 MiB
00:08:23.204 element at address: 0x200000357540 with size: 0.004089 MiB
00:08:23.204 element at address: 0x200000358600 with size: 0.004089 MiB
00:08:23.204 element at address: 0x20000035b2c0 with size: 0.004089 MiB
00:08:23.205 element at address: 0x20000035c380 with size: 0.004089 MiB
00:08:23.205 element at address: 0x20000035f040 with size: 0.004089 MiB
00:08:23.205 element at address: 0x200000360100 with size: 0.004089 MiB
00:08:23.205 element at address: 0x200000362dc0 with size: 0.004089 MiB
00:08:23.205 element at address: 0x200000363e80 with size: 0.004089 MiB
00:08:23.205 element at address: 0x200000366b40 with size: 0.004089 MiB
00:08:23.205 element at address: 0x200000367c00 with size: 0.004089 MiB
00:08:23.205 element at address: 0x20000036a8c0 with size: 0.004089 MiB
00:08:23.205 element at address: 0x20000036b980 with size: 0.004089 MiB
00:08:23.205 element at address: 0x20000036e640 with size: 0.004089 MiB
00:08:23.205 element at address: 0x20000036f700 with size: 0.004089 MiB
00:08:23.205 element at address: 0x2000003723c0 with size: 0.004089 MiB
00:08:23.205 element at address: 0x200000373480 with size: 0.004089 MiB
00:08:23.205 element at address: 0x200000376140 with size: 0.004089 MiB
00:08:23.205 element at address: 0x200000377200 with size: 0.004089 MiB
00:08:23.205 element at address: 0x200000379ec0 with size: 0.004089 MiB
00:08:23.205 element at address: 0x20000037af80 with size: 0.004089 MiB
00:08:23.205 element at address: 0x20000037dc40 with size: 0.004089 MiB
00:08:23.205 element at address: 0x20000037ed00 with size: 0.004089 MiB
00:08:23.205 element at address: 0x2000003819c0 with size: 0.004089 MiB
00:08:23.205 element at address: 0x200000382a80 with size: 0.004089 MiB
00:08:23.205 element at address: 0x200000385740 with size: 0.004089 MiB
00:08:23.205 element at address: 0x200000386800 with size: 0.004089 MiB
00:08:23.205 element at address: 0x2000003894c0 with size: 0.004089 MiB
00:08:23.205 element at address: 0x20000038a580 with size: 0.004089 MiB
00:08:23.205 element at address: 0x20000038d240 with size: 0.004089 MiB
00:08:23.205 element at address: 0x20000038e300 with size: 0.004089 MiB
00:08:23.205 element at address: 0x200000390fc0 with size: 0.004089 MiB
00:08:23.205 element at address: 0x200000392080 with size: 0.004089 MiB
00:08:23.205 element at address: 0x200000394d40 with size: 0.004089 MiB
00:08:23.205 element at address: 0x200000395e00 with size: 0.004089 MiB
00:08:23.205 element at address: 0x200000398ac0 with size: 0.004089 MiB
00:08:23.205 element at address: 0x200000399b80 with size: 0.004089 MiB
00:08:23.205 element at address: 0x20000039c840 with size: 0.004089 MiB
00:08:23.205 element at address: 0x20000039d900 with size: 0.004089 MiB
00:08:23.205 element at address: 0x2000003a05c0 with size: 0.004089 MiB
00:08:23.205 element at address: 0x2000003a1680 with size: 0.004089 MiB
00:08:23.205 element at address: 0x2000003a4340 with size: 0.004089 MiB
00:08:23.205 element at address: 0x2000003a5400 with size: 0.004089 MiB
00:08:23.205 element at address: 0x2000003a80c0 with size: 0.004089 MiB
00:08:23.205 element at address: 0x2000003a9180 with size: 0.004089 MiB
00:08:23.205 element at address: 0x2000003abe40 with size: 0.004089 MiB
00:08:23.205 element at address: 0x2000003acf00 with size: 0.004089 MiB
00:08:23.205 element at address: 0x2000003afbc0 with size: 0.004089 MiB
00:08:23.205 element at address: 0x2000003b0c80 with size: 0.004089 MiB
00:08:23.205 element at address: 0x2000003b3940 with size: 0.004089 MiB
00:08:23.205 element at address: 0x2000003b4a00 with size: 0.004089 MiB
00:08:23.205 element at address: 0x2000003b76c0 with size: 0.004089 MiB
00:08:23.205 element at address: 0x2000003b8780 with size: 0.004089 MiB
00:08:23.205 element at address: 0x2000003bb440 with size: 0.004089 MiB
00:08:23.205 element at address: 0x2000003bc500 with size: 0.004089 MiB
00:08:23.205 element at address: 0x2000003bf1c0 with size: 0.004089 MiB
00:08:23.205 element at address: 0x2000003c0280 with size: 0.004089 MiB
00:08:23.205 element at address: 0x2000003c2f40 with size: 0.004089 MiB
00:08:23.205 element at address: 0x2000003c4000 with size: 0.004089 MiB
00:08:23.205 element at address: 0x2000003c6cc0 with size: 0.004089 MiB
00:08:23.205 element at address: 0x2000003c7d80 with size: 0.004089 MiB
00:08:23.205 element at address: 0x2000003caa40 with size: 0.004089 MiB
00:08:23.205 element at address: 0x2000003cbb00 with size: 0.004089 MiB
00:08:23.205 element at address: 0x2000003ce7c0 with size: 0.004089 MiB
00:08:23.205 element at address: 0x2000003cf880 with size: 0.004089 MiB
00:08:23.205 element at address: 0x2000003d2540 with size: 0.004089 MiB
00:08:23.205 element at address: 0x2000003d3600 with size: 0.004089 MiB
00:08:23.205 element at address: 0x2000003d6ac0 with size: 0.004089 MiB
00:08:23.205 element at address: 0x2000003d7b80 with size: 0.004089 MiB
00:08:23.205 element at address: 0x20000b1ff040 with size: 0.000427 MiB
00:08:23.205 element at address: 0x200000207300 with size: 0.000366 MiB
00:08:23.205 element at address: 0x2000137ff040 with size: 0.000305 MiB
00:08:23.205 element at address: 0x200000200000 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000200100 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000200200 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000200300 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000200400 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000200500 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000200600 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000200700 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000200800 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000200900 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000200a00 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000200b00 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000200c00 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000200d00 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000200e00 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000200f00 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000201000 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000201100 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000201200 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000201300 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000201400 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000201500 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000201600 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000201700 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000201800 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000201900 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000201a00 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000201b00 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000201c00 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000201d00 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000201e00 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000201f00 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000202000 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000202100 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000202200 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000202300 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000202400 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000202500 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000202600 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000202700 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000202800 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000202900 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000202a00 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000202b00 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000202c00 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000202d00 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000202e00 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000202f00 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000203000 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000203100 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000203200 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000203300 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000203400 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000203500 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000203600 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000203700 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000203800 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000203900 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000203a00 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000203b00 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000203c00 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000203d00 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000203e00 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000203f00 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000204000 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000204100 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000204200 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000204300 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000204400 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000204500 with size: 0.000244 MiB
00:08:23.205 element at address: 0x200000204600 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000204700 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000204800 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000204900 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000204a00 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000204b00 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000204c00 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000204d00 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000204e00 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000204f00 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000205000 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000205100 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000205200 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000205300 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000205400 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000205500 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000205600 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000205700 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000205800 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000205900 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000205a00 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000205b00 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000205c00 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000205d00 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000205e00 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000205f00 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000206000 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000206100 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000206200 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000206300 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000206400 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000206500 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000206600 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000206700 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000206800 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000206900 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000206a00 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000206b00 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000206c00 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000206d00 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000206e00 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000206f00 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000207000 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000207100 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000207200 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000217540 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000217640 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000217740 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000217840 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000217940 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000217a40 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000217b40 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000217c40 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000217d40 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000217e40 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000217f40 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000218040 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000218140 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000218240 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000218340 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000218440 with size: 0.000244 MiB
00:08:23.206 element at address: 0x20000021c780 with size: 0.000244 MiB
00:08:23.206 element at address: 0x20000021c880 with size: 0.000244 MiB
00:08:23.206 element at address: 0x20000021c980 with size: 0.000244 MiB
00:08:23.206 element at address: 0x20000021ca80 with size: 0.000244 MiB
00:08:23.206 element at address: 0x20000021cb80 with size: 0.000244 MiB
00:08:23.206 element at address: 0x20000021cc80 with size: 0.000244 MiB
00:08:23.206 element at address: 0x20000021cd80 with size: 0.000244 MiB
00:08:23.206 element at address: 0x20000021ce80 with size: 0.000244 MiB
00:08:23.206 element at address: 0x20000021cf80 with size: 0.000244 MiB
00:08:23.206 element at address: 0x20000021d080 with size: 0.000244 MiB
00:08:23.206 element at address: 0x20000021d180 with size: 0.000244 MiB
00:08:23.206 element at address: 0x20000021d280 with size: 0.000244 MiB
00:08:23.206 element at address: 0x20000021d380 with size: 0.000244 MiB
00:08:23.206 element at address: 0x20000021d480 with size: 0.000244 MiB
00:08:23.206 element at address: 0x20000021d580 with size: 0.000244 MiB
00:08:23.206 element at address: 0x20000021d680 with size: 0.000244 MiB
00:08:23.206 element at address: 0x20000021d780 with size: 0.000244 MiB
00:08:23.206 element at address: 0x20000021d880 with size: 0.000244 MiB
00:08:23.206 element at address: 0x20000021db00 with size: 0.000244 MiB
00:08:23.206 element at address: 0x20000021dc00 with size: 0.000244 MiB
00:08:23.206 element at address: 0x20000021dd00 with size: 0.000244 MiB
00:08:23.206 element at address: 0x20000021de00 with size: 0.000244 MiB
00:08:23.206 element at address: 0x20000021df00 with size: 0.000244 MiB
00:08:23.206 element at address: 0x20000021e000 with size: 0.000244 MiB
00:08:23.206 element at address: 0x20000021e100 with size: 0.000244 MiB
00:08:23.206 element at address: 0x20000021e200 with size: 0.000244 MiB
00:08:23.206 element at address: 0x20000021e300 with size: 0.000244 MiB
00:08:23.206 element at address: 0x20000021e400 with size: 0.000244 MiB
00:08:23.206 element at address: 0x20000021e500 with size: 0.000244 MiB
00:08:23.206 element at address: 0x20000021e600 with size: 0.000244 MiB
00:08:23.206 element at address: 0x20000021e700 with size: 0.000244 MiB
00:08:23.206 element at address: 0x20000021e800 with size: 0.000244 MiB
00:08:23.206 element at address: 0x20000021e900 with size: 0.000244 MiB
00:08:23.206 element at address: 0x20000021ea00 with size: 0.000244 MiB
00:08:23.206 element at address: 0x20000021eb00 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000320d80 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000320e80 with size: 0.000244 MiB
00:08:23.206 element at address: 0x2000003210c0 with size: 0.000244 MiB
00:08:23.206 element at address: 0x2000003211c0 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000321400 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000324c00 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000324e40 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000324f40 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000325180 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000328980 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000328bc0 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000328cc0 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000328f00 with size: 0.000244 MiB
00:08:23.206 element at address: 0x20000032c700 with size: 0.000244 MiB
00:08:23.206 element at address: 0x20000032c940 with size: 0.000244 MiB
00:08:23.206 element at address: 0x20000032ca40 with size: 0.000244 MiB
00:08:23.206 element at address: 0x20000032cc80 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000330480 with size: 0.000244 MiB
00:08:23.206 element at address: 0x2000003306c0 with size: 0.000244 MiB
00:08:23.206 element at address: 0x2000003307c0 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000330a00 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000334200 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000334440 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000334540 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000334780 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000337f80 with size: 0.000244 MiB
00:08:23.206 element at address: 0x2000003381c0 with size: 0.000244 MiB
00:08:23.206 element at address: 0x2000003382c0 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000338500 with size: 0.000244 MiB
00:08:23.206 element at address: 0x20000033bd00 with size: 0.000244 MiB
00:08:23.206 element at address: 0x20000033bf40 with size: 0.000244 MiB
00:08:23.206 element at address: 0x20000033c040 with size: 0.000244 MiB
00:08:23.206 element at address: 0x20000033c280 with size: 0.000244 MiB
00:08:23.206 element at address: 0x20000033fa80 with size: 0.000244 MiB
00:08:23.206 element at address: 0x20000033fcc0 with size: 0.000244 MiB
00:08:23.206 element at address: 0x20000033fdc0 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000340000 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000343800 with size: 0.000244 MiB
00:08:23.206 element at address: 0x200000343a40 with size: 0.000244
MiB 00:08:23.206 element at address: 0x200000343b40 with size: 0.000244 MiB 00:08:23.206 element at address: 0x200000343d80 with size: 0.000244 MiB 00:08:23.206 element at address: 0x200000347580 with size: 0.000244 MiB 00:08:23.206 element at address: 0x2000003477c0 with size: 0.000244 MiB 00:08:23.206 element at address: 0x2000003478c0 with size: 0.000244 MiB 00:08:23.206 element at address: 0x200000347b00 with size: 0.000244 MiB 00:08:23.206 element at address: 0x20000034b300 with size: 0.000244 MiB 00:08:23.206 element at address: 0x20000034b540 with size: 0.000244 MiB 00:08:23.206 element at address: 0x20000034b640 with size: 0.000244 MiB 00:08:23.206 element at address: 0x20000034b880 with size: 0.000244 MiB 00:08:23.206 element at address: 0x20000034f080 with size: 0.000244 MiB 00:08:23.206 element at address: 0x20000034f2c0 with size: 0.000244 MiB 00:08:23.206 element at address: 0x20000034f3c0 with size: 0.000244 MiB 00:08:23.206 element at address: 0x20000034f600 with size: 0.000244 MiB 00:08:23.206 element at address: 0x200000352e00 with size: 0.000244 MiB 00:08:23.207 element at address: 0x200000353040 with size: 0.000244 MiB 00:08:23.207 element at address: 0x200000353140 with size: 0.000244 MiB 00:08:23.207 element at address: 0x200000353380 with size: 0.000244 MiB 00:08:23.207 element at address: 0x200000356b80 with size: 0.000244 MiB 00:08:23.207 element at address: 0x200000356dc0 with size: 0.000244 MiB 00:08:23.207 element at address: 0x200000356ec0 with size: 0.000244 MiB 00:08:23.207 element at address: 0x200000357100 with size: 0.000244 MiB 00:08:23.207 element at address: 0x20000035a900 with size: 0.000244 MiB 00:08:23.207 element at address: 0x20000035ab40 with size: 0.000244 MiB 00:08:23.207 element at address: 0x20000035ac40 with size: 0.000244 MiB 00:08:23.207 element at address: 0x20000035ae80 with size: 0.000244 MiB 00:08:23.207 element at address: 0x20000035e680 with size: 0.000244 MiB 00:08:23.207 element at address: 0x20000035e8c0 
with size: 0.000244 MiB 00:08:23.207 element at address: 0x20000035e9c0 with size: 0.000244 MiB 00:08:23.207 element at address: 0x20000035ec00 with size: 0.000244 MiB 00:08:23.207 element at address: 0x200000362400 with size: 0.000244 MiB 00:08:23.207 element at address: 0x200000362640 with size: 0.000244 MiB 00:08:23.207 element at address: 0x200000362740 with size: 0.000244 MiB 00:08:23.207 element at address: 0x200000362980 with size: 0.000244 MiB 00:08:23.207 element at address: 0x200000366180 with size: 0.000244 MiB 00:08:23.207 element at address: 0x2000003663c0 with size: 0.000244 MiB 00:08:23.207 element at address: 0x2000003664c0 with size: 0.000244 MiB 00:08:23.207 element at address: 0x200000366700 with size: 0.000244 MiB 00:08:23.207 element at address: 0x200000369f00 with size: 0.000244 MiB 00:08:23.207 element at address: 0x20000036a140 with size: 0.000244 MiB 00:08:23.207 element at address: 0x20000036a240 with size: 0.000244 MiB 00:08:23.207 element at address: 0x20000036a480 with size: 0.000244 MiB 00:08:23.207 element at address: 0x20000036dc80 with size: 0.000244 MiB 00:08:23.207 element at address: 0x20000036dec0 with size: 0.000244 MiB 00:08:23.207 element at address: 0x20000036dfc0 with size: 0.000244 MiB 00:08:23.207 element at address: 0x20000036e200 with size: 0.000244 MiB 00:08:23.207 element at address: 0x200000371a00 with size: 0.000244 MiB 00:08:23.207 element at address: 0x200000371c40 with size: 0.000244 MiB 00:08:23.207 element at address: 0x200000371d40 with size: 0.000244 MiB 00:08:23.207 element at address: 0x200000371f80 with size: 0.000244 MiB 00:08:23.207 element at address: 0x200000375780 with size: 0.000244 MiB 00:08:23.207 element at address: 0x2000003759c0 with size: 0.000244 MiB 00:08:23.207 element at address: 0x200000375ac0 with size: 0.000244 MiB 00:08:23.207 element at address: 0x200000375d00 with size: 0.000244 MiB 00:08:23.207 element at address: 0x200000379500 with size: 0.000244 MiB 00:08:23.207 element at 
address: 0x200000379740 with size: 0.000244 MiB 00:08:23.207 element at address: 0x200000379840 with size: 0.000244 MiB 00:08:23.207 element at address: 0x200000379a80 with size: 0.000244 MiB 00:08:23.207 element at address: 0x20000037d280 with size: 0.000244 MiB 00:08:23.207 element at address: 0x20000037d4c0 with size: 0.000244 MiB 00:08:23.207 element at address: 0x20000037d5c0 with size: 0.000244 MiB 00:08:23.207 element at address: 0x20000037d800 with size: 0.000244 MiB 00:08:23.207 element at address: 0x200000381000 with size: 0.000244 MiB 00:08:23.207 element at address: 0x200000381240 with size: 0.000244 MiB 00:08:23.207 element at address: 0x200000381340 with size: 0.000244 MiB 00:08:23.207 element at address: 0x200000381580 with size: 0.000244 MiB 00:08:23.207 element at address: 0x200000384d80 with size: 0.000244 MiB 00:08:23.207 element at address: 0x200000384fc0 with size: 0.000244 MiB 00:08:23.207 element at address: 0x2000003850c0 with size: 0.000244 MiB 00:08:23.207 element at address: 0x200000385300 with size: 0.000244 MiB 00:08:23.207 element at address: 0x200000388b00 with size: 0.000244 MiB 00:08:23.207 element at address: 0x200000388d40 with size: 0.000244 MiB 00:08:23.207 element at address: 0x200000388e40 with size: 0.000244 MiB 00:08:23.207 element at address: 0x200000389080 with size: 0.000244 MiB 00:08:23.207 element at address: 0x20000038c880 with size: 0.000244 MiB 00:08:23.207 element at address: 0x20000038cac0 with size: 0.000244 MiB 00:08:23.207 element at address: 0x20000038cbc0 with size: 0.000244 MiB 00:08:23.207 element at address: 0x20000038ce00 with size: 0.000244 MiB 00:08:23.207 element at address: 0x200000390600 with size: 0.000244 MiB 00:08:23.207 element at address: 0x200000390840 with size: 0.000244 MiB 00:08:23.207 element at address: 0x200000390940 with size: 0.000244 MiB 00:08:23.207 element at address: 0x200000390b80 with size: 0.000244 MiB 00:08:23.207 element at address: 0x200000394380 with size: 0.000244 MiB 
00:08:23.207 element at address: 0x2000003945c0 with size: 0.000244 MiB 00:08:23.207 element at address: 0x2000003946c0 with size: 0.000244 MiB 00:08:23.207 element at address: 0x200000394900 with size: 0.000244 MiB 00:08:23.207 element at address: 0x200000398100 with size: 0.000244 MiB 00:08:23.207 element at address: 0x200000398340 with size: 0.000244 MiB 00:08:23.207 element at address: 0x200000398440 with size: 0.000244 MiB 00:08:23.207 element at address: 0x200000398680 with size: 0.000244 MiB 00:08:23.207 element at address: 0x20000039be80 with size: 0.000244 MiB 00:08:23.207 element at address: 0x20000039c0c0 with size: 0.000244 MiB 00:08:23.207 element at address: 0x20000039c1c0 with size: 0.000244 MiB 00:08:23.207 element at address: 0x20000039c400 with size: 0.000244 MiB 00:08:23.207 element at address: 0x20000039fc00 with size: 0.000244 MiB 00:08:23.207 element at address: 0x20000039fe40 with size: 0.000244 MiB 00:08:23.207 element at address: 0x20000039ff40 with size: 0.000244 MiB 00:08:23.207 element at address: 0x2000003a0180 with size: 0.000244 MiB 00:08:23.207 element at address: 0x2000003a3980 with size: 0.000244 MiB 00:08:23.207 element at address: 0x2000003a3bc0 with size: 0.000244 MiB 00:08:23.207 element at address: 0x2000003a3cc0 with size: 0.000244 MiB 00:08:23.207 element at address: 0x2000003a3f00 with size: 0.000244 MiB 00:08:23.207 element at address: 0x2000003a7700 with size: 0.000244 MiB 00:08:23.207 element at address: 0x2000003a7940 with size: 0.000244 MiB 00:08:23.207 element at address: 0x2000003a7a40 with size: 0.000244 MiB 00:08:23.207 element at address: 0x2000003a7c80 with size: 0.000244 MiB 00:08:23.207 element at address: 0x2000003ab480 with size: 0.000244 MiB 00:08:23.207 element at address: 0x2000003ab6c0 with size: 0.000244 MiB 00:08:23.207 element at address: 0x2000003ab7c0 with size: 0.000244 MiB 00:08:23.207 element at address: 0x2000003aba00 with size: 0.000244 MiB 00:08:23.207 element at address: 0x2000003af200 with 
size: 0.000244 MiB 00:08:23.207 element at address: 0x2000003af440 with size: 0.000244 MiB 00:08:23.207 element at address: 0x2000003af540 with size: 0.000244 MiB 00:08:23.207 element at address: 0x2000003af780 with size: 0.000244 MiB 00:08:23.207 element at address: 0x2000003b2f80 with size: 0.000244 MiB 00:08:23.207 element at address: 0x2000003b31c0 with size: 0.000244 MiB 00:08:23.207 element at address: 0x2000003b32c0 with size: 0.000244 MiB 00:08:23.207 element at address: 0x2000003b3500 with size: 0.000244 MiB 00:08:23.207 element at address: 0x2000003b6d00 with size: 0.000244 MiB 00:08:23.207 element at address: 0x2000003b6f40 with size: 0.000244 MiB 00:08:23.207 element at address: 0x2000003b7040 with size: 0.000244 MiB 00:08:23.207 element at address: 0x2000003b7280 with size: 0.000244 MiB 00:08:23.207 element at address: 0x2000003baa80 with size: 0.000244 MiB 00:08:23.207 element at address: 0x2000003bacc0 with size: 0.000244 MiB 00:08:23.207 element at address: 0x2000003badc0 with size: 0.000244 MiB 00:08:23.207 element at address: 0x2000003bb000 with size: 0.000244 MiB 00:08:23.207 element at address: 0x2000003be800 with size: 0.000244 MiB 00:08:23.207 element at address: 0x2000003bea40 with size: 0.000244 MiB 00:08:23.207 element at address: 0x2000003beb40 with size: 0.000244 MiB 00:08:23.207 element at address: 0x2000003bed80 with size: 0.000244 MiB 00:08:23.207 element at address: 0x2000003c2580 with size: 0.000244 MiB 00:08:23.208 element at address: 0x2000003c27c0 with size: 0.000244 MiB 00:08:23.208 element at address: 0x2000003c28c0 with size: 0.000244 MiB 00:08:23.208 element at address: 0x2000003c2b00 with size: 0.000244 MiB 00:08:23.208 element at address: 0x2000003c6300 with size: 0.000244 MiB 00:08:23.208 element at address: 0x2000003c6540 with size: 0.000244 MiB 00:08:23.208 element at address: 0x2000003c6640 with size: 0.000244 MiB 00:08:23.208 element at address: 0x2000003c6880 with size: 0.000244 MiB 00:08:23.208 element at address: 
0x2000003ca080 with size: 0.000244 MiB 00:08:23.208 element at address: 0x2000003ca2c0 with size: 0.000244 MiB 00:08:23.208 element at address: 0x2000003ca3c0 with size: 0.000244 MiB 00:08:23.208 element at address: 0x2000003ca600 with size: 0.000244 MiB 00:08:23.208 element at address: 0x2000003cde00 with size: 0.000244 MiB 00:08:23.208 element at address: 0x2000003ce040 with size: 0.000244 MiB 00:08:23.208 element at address: 0x2000003ce140 with size: 0.000244 MiB 00:08:23.208 element at address: 0x2000003ce380 with size: 0.000244 MiB 00:08:23.208 element at address: 0x2000003d1b80 with size: 0.000244 MiB 00:08:23.208 element at address: 0x2000003d1dc0 with size: 0.000244 MiB 00:08:23.208 element at address: 0x2000003d1ec0 with size: 0.000244 MiB 00:08:23.208 element at address: 0x2000003d2100 with size: 0.000244 MiB 00:08:23.208 element at address: 0x2000003d5a00 with size: 0.000244 MiB 00:08:23.208 element at address: 0x2000003d61c0 with size: 0.000244 MiB 00:08:23.208 element at address: 0x2000003d62c0 with size: 0.000244 MiB 00:08:23.208 element at address: 0x2000003d6680 with size: 0.000244 MiB 00:08:23.208 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:08:23.208 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:08:23.208 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:08:23.208 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:08:23.208 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:08:23.208 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:08:23.208 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:08:23.208 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:08:23.208 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:08:23.208 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:08:23.208 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:08:23.208 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:08:23.208 
element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:08:23.208 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:08:23.208 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:08:23.208 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:08:23.208 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:08:23.208 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:08:23.208 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:08:23.208 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:08:23.208 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:08:23.208 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:08:23.208 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:08:23.208 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:08:23.208 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:08:23.208 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:08:23.208 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:08:23.208 element at address: 0x200013877b80 with size: 0.000244 MiB 00:08:23.208 element at address: 0x200013877c80 with size: 0.000244 MiB 00:08:23.208 element at address: 0x200013877d80 with size: 0.000244 MiB 00:08:23.208 element at address: 0x200013877e80 with size: 0.000244 MiB 00:08:23.208 element at address: 0x200013877f80 with size: 0.000244 MiB 00:08:23.208 element at address: 0x200013878080 with size: 0.000244 MiB 00:08:23.208 element at address: 0x200013878180 with size: 0.000244 MiB 00:08:23.208 element at address: 0x200013878280 with size: 0.000244 MiB 00:08:23.208 element at address: 0x200013878380 with size: 0.000244 MiB 00:08:23.208 element at address: 0x200013878480 with size: 0.000244 MiB 00:08:23.208 element at address: 0x200013878580 with size: 0.000244 MiB 00:08:23.208 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:08:23.208 element at address: 0x200018efdd00 with size: 0.000244 
MiB 00:08:23.208 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:08:23.208 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:08:23.208 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:08:23.208 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:08:23.208 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:08:23.208 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:08:23.208 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:08:23.208 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:08:23.208 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:08:23.208 element at address: 0x200028465240 with size: 0.000244 MiB 00:08:23.208 element at address: 0x200028465340 with size: 0.000244 MiB 00:08:23.208 element at address: 0x20002846c000 with size: 0.000244 MiB 00:08:23.208 element at address: 0x20002846c280 with size: 0.000244 MiB 00:08:23.208 element at address: 0x20002846c380 with size: 0.000244 MiB 00:08:23.208 element at address: 0x20002846c480 with size: 0.000244 MiB 00:08:23.208 element at address: 0x20002846c580 with size: 0.000244 MiB 00:08:23.208 element at address: 0x20002846c680 with size: 0.000244 MiB 00:08:23.208 element at address: 0x20002846c780 with size: 0.000244 MiB 00:08:23.208 element at address: 0x20002846c880 with size: 0.000244 MiB 00:08:23.208 element at address: 0x20002846c980 with size: 0.000244 MiB 00:08:23.208 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:08:23.208 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:08:23.208 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:08:23.208 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:08:23.208 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:08:23.208 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:08:23.208 element at address: 0x20002846d080 with size: 0.000244 MiB 00:08:23.208 element at address: 0x20002846d180 
with size: 0.000244 MiB 00:08:23.208 element at address: 0x20002846d280 with size: 0.000244 MiB 00:08:23.208 element at address: 0x20002846d380 with size: 0.000244 MiB 00:08:23.208 element at address: 0x20002846d480 with size: 0.000244 MiB 00:08:23.208 element at address: 0x20002846d580 with size: 0.000244 MiB 00:08:23.208 element at address: 0x20002846d680 with size: 0.000244 MiB 00:08:23.208 element at address: 0x20002846d780 with size: 0.000244 MiB 00:08:23.208 element at address: 0x20002846d880 with size: 0.000244 MiB 00:08:23.208 element at address: 0x20002846d980 with size: 0.000244 MiB 00:08:23.208 element at address: 0x20002846da80 with size: 0.000244 MiB 00:08:23.208 element at address: 0x20002846db80 with size: 0.000244 MiB 00:08:23.208 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:08:23.208 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:08:23.208 element at address: 0x20002846de80 with size: 0.000244 MiB 00:08:23.208 element at address: 0x20002846df80 with size: 0.000244 MiB 00:08:23.208 element at address: 0x20002846e080 with size: 0.000244 MiB 00:08:23.208 element at address: 0x20002846e180 with size: 0.000244 MiB 00:08:23.209 element at address: 0x20002846e280 with size: 0.000244 MiB 00:08:23.209 element at address: 0x20002846e380 with size: 0.000244 MiB 00:08:23.209 element at address: 0x20002846e480 with size: 0.000244 MiB 00:08:23.209 element at address: 0x20002846e580 with size: 0.000244 MiB 00:08:23.209 element at address: 0x20002846e680 with size: 0.000244 MiB 00:08:23.209 element at address: 0x20002846e780 with size: 0.000244 MiB 00:08:23.209 element at address: 0x20002846e880 with size: 0.000244 MiB 00:08:23.209 element at address: 0x20002846e980 with size: 0.000244 MiB 00:08:23.209 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:08:23.209 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:08:23.209 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:08:23.209 element at 
address: 0x20002846ed80 with size: 0.000244 MiB 00:08:23.209 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:08:23.209 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:08:23.209 element at address: 0x20002846f080 with size: 0.000244 MiB 00:08:23.209 element at address: 0x20002846f180 with size: 0.000244 MiB 00:08:23.209 element at address: 0x20002846f280 with size: 0.000244 MiB 00:08:23.209 element at address: 0x20002846f380 with size: 0.000244 MiB 00:08:23.209 element at address: 0x20002846f480 with size: 0.000244 MiB 00:08:23.209 element at address: 0x20002846f580 with size: 0.000244 MiB 00:08:23.209 element at address: 0x20002846f680 with size: 0.000244 MiB 00:08:23.209 element at address: 0x20002846f780 with size: 0.000244 MiB 00:08:23.209 element at address: 0x20002846f880 with size: 0.000244 MiB 00:08:23.209 element at address: 0x20002846f980 with size: 0.000244 MiB 00:08:23.209 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:08:23.209 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:08:23.209 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:08:23.209 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:08:23.209 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:08:23.209 list of memzone associated elements. 
size: 602.333740 MiB
00:08:23.209 element at address: 0x20001b0954c0 with size: 211.416809 MiB
00:08:23.209 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:08:23.209 element at address: 0x20002846ff80 with size: 157.562622 MiB
00:08:23.209 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:08:23.209 element at address: 0x2000139fab40 with size: 84.020691 MiB
00:08:23.209 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2550663_0
00:08:23.209 element at address: 0x2000009ff340 with size: 48.003113 MiB
00:08:23.209 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2550663_0
00:08:23.209 element at address: 0x200003fff340 with size: 48.003113 MiB
00:08:23.209 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2550663_0
00:08:23.209 element at address: 0x200019bbe900 with size: 20.255615 MiB
00:08:23.209 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:08:23.209 element at address: 0x2000323feb00 with size: 18.005127 MiB
00:08:23.209 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:08:23.209 element at address: 0x2000005ffdc0 with size: 2.000549 MiB
00:08:23.209 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2550663
00:08:23.209 element at address: 0x200003bffdc0 with size: 2.000549 MiB
00:08:23.209 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2550663
00:08:23.209 element at address: 0x20000021ec00 with size: 1.008179 MiB
00:08:23.209 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2550663
00:08:23.209 element at address: 0x2000192fde00 with size: 1.008179 MiB
00:08:23.209 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:08:23.209 element at address: 0x200019abc780 with size: 1.008179 MiB
00:08:23.209 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:08:23.209 element at address: 0x200018efde00 with size: 1.008179 MiB
00:08:23.209 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:08:23.209 element at address: 0x2000138f89c0 with size: 1.008179 MiB
00:08:23.209 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:08:23.209 element at address: 0x200003eff100 with size: 1.000549 MiB
00:08:23.209 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2550663
00:08:23.209 element at address: 0x200003affb80 with size: 1.000549 MiB
00:08:23.209 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2550663
00:08:23.209 element at address: 0x2000196ffd40 with size: 1.000549 MiB
00:08:23.209 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2550663
00:08:23.209 element at address: 0x2000322fe8c0 with size: 1.000549 MiB
00:08:23.209 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2550663
00:08:23.209 element at address: 0x200003a5f700 with size: 0.500549 MiB
00:08:23.209 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2550663
00:08:23.209 element at address: 0x20001927dbc0 with size: 0.500549 MiB
00:08:23.209 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:08:23.209 element at address: 0x200013878680 with size: 0.500549 MiB
00:08:23.209 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:08:23.209 element at address: 0x200019a7c540 with size: 0.250549 MiB
00:08:23.209 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:08:23.209 element at address: 0x200003adf940 with size: 0.125549 MiB
00:08:23.209 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2550663
00:08:23.209 element at address: 0x200018ef5ac0 with size: 0.031799 MiB
00:08:23.209 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:08:23.209 element at address: 0x200028465440 with size: 0.023804 MiB
00:08:23.209 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:08:23.209 element at address: 0x200000218540 with size: 0.016174 MiB
00:08:23.209 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2550663
00:08:23.209 element at address: 0x20002846b5c0 with size: 0.002502 MiB
00:08:23.209 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:08:23.209 element at address: 0x2000003d5c40 with size: 0.001343 MiB
00:08:23.209 associated memzone info: size: 0.001160 MiB name: QAT_SYM_CAPA_GEN_1
00:08:23.209 element at address: 0x2000003d68c0 with size: 0.000488 MiB
00:08:23.209 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.0_qat
00:08:23.209 element at address: 0x2000003d2340 with size: 0.000488 MiB
00:08:23.209 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.1_qat
00:08:23.209 element at address: 0x2000003ce5c0 with size: 0.000488 MiB
00:08:23.209 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.2_qat
00:08:23.209 element at address: 0x2000003ca840 with size: 0.000488 MiB
00:08:23.209 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.3_qat
00:08:23.209 element at address: 0x2000003c6ac0 with size: 0.000488 MiB
00:08:23.209 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.4_qat
00:08:23.209 element at address: 0x2000003c2d40 with size: 0.000488 MiB
00:08:23.209 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.5_qat
00:08:23.209 element at address: 0x2000003befc0 with size: 0.000488 MiB
00:08:23.209 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.6_qat
00:08:23.209 element at address: 0x2000003bb240 with size: 0.000488 MiB
00:08:23.209 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.7_qat
00:08:23.209 element at address: 0x2000003b74c0 with size: 0.000488 MiB
00:08:23.209 associated memzone info: size: 0.000305 MiB name: 0000:1a:02.0_qat
00:08:23.209 element at address: 0x2000003b3740 with size: 0.000488 MiB
00:08:23.209 associated memzone info: size: 0.000305 MiB name: 0000:1a:02.1_qat
00:08:23.209 element at address: 0x2000003af9c0 with size: 0.000488 MiB
00:08:23.209 associated memzone info: size: 0.000305 MiB name: 0000:1a:02.2_qat
00:08:23.209 element at address: 0x2000003abc40 with size: 0.000488 MiB
00:08:23.209 associated memzone info: size: 0.000305 MiB name: 0000:1a:02.3_qat
00:08:23.209 element at address: 0x2000003a7ec0 with size: 0.000488 MiB
00:08:23.209 associated memzone info: size: 0.000305 MiB name: 0000:1a:02.4_qat
00:08:23.209 element at address: 0x2000003a4140 with size: 0.000488 MiB
00:08:23.209 associated memzone info: size: 0.000305 MiB name: 0000:1a:02.5_qat
00:08:23.209 element at address: 0x2000003a03c0 with size: 0.000488 MiB
00:08:23.209 associated memzone info: size: 0.000305 MiB name: 0000:1a:02.6_qat
00:08:23.209 element at address: 0x20000039c640 with size: 0.000488 MiB
00:08:23.209 associated memzone info: size: 0.000305 MiB name: 0000:1a:02.7_qat
00:08:23.209 element at address: 0x2000003988c0 with size: 0.000488 MiB
00:08:23.209 associated memzone info: size: 0.000305 MiB name: 0000:1c:01.0_qat
00:08:23.209 element at address: 0x200000394b40 with size: 0.000488 MiB
00:08:23.209 associated memzone info: size: 0.000305 MiB name: 0000:1c:01.1_qat
00:08:23.209 element at address: 0x200000390dc0 with size: 0.000488 MiB
00:08:23.209 associated memzone info: size: 0.000305 MiB name: 0000:1c:01.2_qat
00:08:23.209 element at address: 0x20000038d040 with size: 0.000488 MiB
00:08:23.209 associated memzone info: size: 0.000305 MiB name: 0000:1c:01.3_qat
00:08:23.209 element at address: 0x2000003892c0 with size: 0.000488 MiB
00:08:23.209 associated memzone info: size: 0.000305 MiB name: 0000:1c:01.4_qat
00:08:23.209 element at address: 0x200000385540 with size: 0.000488 MiB
00:08:23.209 associated memzone info: size: 0.000305 MiB name: 0000:1c:01.5_qat
00:08:23.209 element at address: 0x2000003817c0 with size: 0.000488 MiB
00:08:23.209 associated memzone info: size: 0.000305 MiB name: 0000:1c:01.6_qat
00:08:23.209 element at address: 0x20000037da40 with size: 0.000488 MiB
00:08:23.209 associated memzone info: size: 0.000305 MiB name: 0000:1c:01.7_qat
00:08:23.209 element at address: 0x200000379cc0 with size: 0.000488 MiB
00:08:23.209 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.0_qat
00:08:23.209 element at address: 0x200000375f40 with size: 0.000488 MiB
00:08:23.209 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.1_qat
00:08:23.209 element at address: 0x2000003721c0 with size: 0.000488 MiB
00:08:23.209 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.2_qat
00:08:23.209 element at address: 0x20000036e440 with size: 0.000488 MiB
00:08:23.209 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.3_qat
00:08:23.209 element at address: 0x20000036a6c0 with size: 0.000488 MiB
00:08:23.209 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.4_qat
00:08:23.209 element at address: 0x200000366940 with size: 0.000488 MiB
00:08:23.210 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.5_qat
00:08:23.210 element at address: 0x200000362bc0 with size: 0.000488 MiB
00:08:23.210 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.6_qat
00:08:23.210 element at address: 0x20000035ee40 with size: 0.000488 MiB
00:08:23.210 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.7_qat
00:08:23.210 element at address: 0x20000035b0c0 with size: 0.000488 MiB
00:08:23.210 associated memzone info: size: 0.000305 MiB name: 0000:1e:01.0_qat
00:08:23.210 element at address: 0x200000357340 with size: 0.000488 MiB
00:08:23.210 associated memzone info: size: 0.000305 MiB name: 0000:1e:01.1_qat
00:08:23.210 element at address: 0x2000003535c0 with size: 0.000488 MiB
00:08:23.210 associated memzone info: size: 0.000305 MiB name: 0000:1e:01.2_qat
00:08:23.210 element at address: 0x20000034f840 with size: 0.000488 MiB
00:08:23.210 associated memzone info: size: 0.000305 MiB name: 0000:1e:01.3_qat
00:08:23.210 element at address: 0x20000034bac0 with size: 0.000488 MiB
00:08:23.210 associated memzone info: size: 0.000305 MiB name: 0000:1e:01.4_qat
00:08:23.210 element at address: 0x200000347d40 with size: 0.000488 MiB
00:08:23.210 associated memzone info: size: 0.000305 MiB name: 0000:1e:01.5_qat
00:08:23.210 element at address: 0x200000343fc0 with size: 0.000488 MiB
00:08:23.210 associated memzone info: size: 0.000305 MiB name: 0000:1e:01.6_qat
00:08:23.210 element at address: 0x200000340240 with size: 0.000488 MiB
00:08:23.210 associated memzone info: size: 0.000305 MiB name: 0000:1e:01.7_qat
00:08:23.210 element at address: 0x20000033c4c0 with size: 0.000488 MiB
00:08:23.210 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.0_qat
00:08:23.210 element at address: 0x200000338740 with size: 0.000488 MiB
00:08:23.210 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.1_qat
00:08:23.210 element at address: 0x2000003349c0 with size: 0.000488 MiB
00:08:23.210 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.2_qat
00:08:23.210 element at address: 0x200000330c40 with size: 0.000488 MiB
00:08:23.210 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.3_qat
00:08:23.210 element at address: 0x20000032cec0 with size: 0.000488 MiB
00:08:23.210 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.4_qat
00:08:23.210 element at address: 0x200000329140 with size: 0.000488 MiB
00:08:23.210 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.5_qat
00:08:23.210 element at address: 0x2000003253c0 with size: 0.000488 MiB
00:08:23.210 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.6_qat
00:08:23.210 element at address: 0x200000321640 with size: 0.000488 MiB
00:08:23.210 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.7_qat
00:08:23.210 element at address: 0x2000003d6500 with size: 0.000366 MiB
00:08:23.210 associated memzone info: size: 0.000183 MiB name: QAT_ASYM_CAPA_GEN_1
00:08:23.210 element at address: 0x20000021d980 with size: 0.000366 MiB
00:08:23.210 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2550663
00:08:23.210 element at address: 0x2000137ffd80 with size: 0.000366 MiB
00:08:23.210 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2550663
00:08:23.210 element at address: 0x20002846c100 with size: 0.000366 MiB
00:08:23.210 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:08:23.210 element at address: 0x2000003d6780 with size: 0.000305 MiB
00:08:23.210 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_0
00:08:23.210 element at address: 0x2000003d63c0 with size: 0.000305 MiB
00:08:23.210 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_1
00:08:23.210 element at address: 0x2000003d5b00 with size: 0.000305 MiB
00:08:23.210 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_0
00:08:23.210 element at address: 0x2000003d2200 with size: 0.000305 MiB
00:08:23.210 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_2
00:08:23.210 element at address: 0x2000003d1fc0 with size: 0.000305 MiB
00:08:23.210 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_3
00:08:23.210 element at address: 0x2000003d1c80 with size: 0.000305 MiB
00:08:23.210 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_1
00:08:23.210 element at address: 0x2000003ce480 with size: 0.000305 MiB
00:08:23.210 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_4
00:08:23.210 element at address: 0x2000003ce240 with size: 0.000305 MiB
00:08:23.210 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_5
00:08:23.210 element at address: 0x2000003cdf00 with size: 0.000305 MiB
00:08:23.210 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_2
00:08:23.210 element at address: 0x2000003ca700 with size: 0.000305 MiB
00:08:23.210 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_6
00:08:23.210 element at address: 0x2000003ca4c0 with size: 0.000305 MiB
00:08:23.210 associated memzone info:
size: 0.000122 MiB name: rte_cryptodev_data_7 00:08:23.210 element at address: 0x2000003ca180 with size: 0.000305 MiB 00:08:23.210 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_3 00:08:23.210 element at address: 0x2000003c6980 with size: 0.000305 MiB 00:08:23.210 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_8 00:08:23.210 element at address: 0x2000003c6740 with size: 0.000305 MiB 00:08:23.210 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_9 00:08:23.210 element at address: 0x2000003c6400 with size: 0.000305 MiB 00:08:23.210 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_4 00:08:23.210 element at address: 0x2000003c2c00 with size: 0.000305 MiB 00:08:23.210 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_10 00:08:23.210 element at address: 0x2000003c29c0 with size: 0.000305 MiB 00:08:23.210 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_11 00:08:23.210 element at address: 0x2000003c2680 with size: 0.000305 MiB 00:08:23.210 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_5 00:08:23.210 element at address: 0x2000003bee80 with size: 0.000305 MiB 00:08:23.210 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_12 00:08:23.210 element at address: 0x2000003bec40 with size: 0.000305 MiB 00:08:23.210 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_13 00:08:23.210 element at address: 0x2000003be900 with size: 0.000305 MiB 00:08:23.210 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_6 00:08:23.210 element at address: 0x2000003bb100 with size: 0.000305 MiB 00:08:23.210 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_14 00:08:23.210 element at address: 0x2000003baec0 with size: 0.000305 MiB 00:08:23.210 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_15 00:08:23.210 element at address: 0x2000003bab80 with size: 0.000305 
MiB 00:08:23.210 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_7 00:08:23.210 element at address: 0x2000003b7380 with size: 0.000305 MiB 00:08:23.210 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_16 00:08:23.210 element at address: 0x2000003b7140 with size: 0.000305 MiB 00:08:23.210 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_17 00:08:23.210 element at address: 0x2000003b6e00 with size: 0.000305 MiB 00:08:23.210 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_8 00:08:23.210 element at address: 0x2000003b3600 with size: 0.000305 MiB 00:08:23.210 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_18 00:08:23.210 element at address: 0x2000003b33c0 with size: 0.000305 MiB 00:08:23.210 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_19 00:08:23.210 element at address: 0x2000003b3080 with size: 0.000305 MiB 00:08:23.210 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_9 00:08:23.210 element at address: 0x2000003af880 with size: 0.000305 MiB 00:08:23.210 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_20 00:08:23.210 element at address: 0x2000003af640 with size: 0.000305 MiB 00:08:23.210 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_21 00:08:23.210 element at address: 0x2000003af300 with size: 0.000305 MiB 00:08:23.210 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_10 00:08:23.210 element at address: 0x2000003abb00 with size: 0.000305 MiB 00:08:23.210 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_22 00:08:23.210 element at address: 0x2000003ab8c0 with size: 0.000305 MiB 00:08:23.210 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_23 00:08:23.210 element at address: 0x2000003ab580 with size: 0.000305 MiB 00:08:23.210 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_11 00:08:23.210 
element at address: 0x2000003a7d80 with size: 0.000305 MiB 00:08:23.210 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_24 00:08:23.210 element at address: 0x2000003a7b40 with size: 0.000305 MiB 00:08:23.210 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_25 00:08:23.210 element at address: 0x2000003a7800 with size: 0.000305 MiB 00:08:23.210 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_12 00:08:23.210 element at address: 0x2000003a4000 with size: 0.000305 MiB 00:08:23.210 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_26 00:08:23.210 element at address: 0x2000003a3dc0 with size: 0.000305 MiB 00:08:23.210 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_27 00:08:23.210 element at address: 0x2000003a3a80 with size: 0.000305 MiB 00:08:23.210 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_13 00:08:23.210 element at address: 0x2000003a0280 with size: 0.000305 MiB 00:08:23.210 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_28 00:08:23.210 element at address: 0x2000003a0040 with size: 0.000305 MiB 00:08:23.210 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_29 00:08:23.210 element at address: 0x20000039fd00 with size: 0.000305 MiB 00:08:23.210 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_14 00:08:23.210 element at address: 0x20000039c500 with size: 0.000305 MiB 00:08:23.210 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_30 00:08:23.210 element at address: 0x20000039c2c0 with size: 0.000305 MiB 00:08:23.210 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_31 00:08:23.210 element at address: 0x20000039bf80 with size: 0.000305 MiB 00:08:23.210 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_15 00:08:23.210 element at address: 0x200000398780 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 
0.000122 MiB name: rte_cryptodev_data_32 00:08:23.211 element at address: 0x200000398540 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_33 00:08:23.211 element at address: 0x200000398200 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_16 00:08:23.211 element at address: 0x200000394a00 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_34 00:08:23.211 element at address: 0x2000003947c0 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_35 00:08:23.211 element at address: 0x200000394480 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_17 00:08:23.211 element at address: 0x200000390c80 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_36 00:08:23.211 element at address: 0x200000390a40 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_37 00:08:23.211 element at address: 0x200000390700 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_18 00:08:23.211 element at address: 0x20000038cf00 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_38 00:08:23.211 element at address: 0x20000038ccc0 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_39 00:08:23.211 element at address: 0x20000038c980 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_19 00:08:23.211 element at address: 0x200000389180 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_40 00:08:23.211 element at address: 0x200000388f40 with size: 
0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_41 00:08:23.211 element at address: 0x200000388c00 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_20 00:08:23.211 element at address: 0x200000385400 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_42 00:08:23.211 element at address: 0x2000003851c0 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_43 00:08:23.211 element at address: 0x200000384e80 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_21 00:08:23.211 element at address: 0x200000381680 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_44 00:08:23.211 element at address: 0x200000381440 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_45 00:08:23.211 element at address: 0x200000381100 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_22 00:08:23.211 element at address: 0x20000037d900 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_46 00:08:23.211 element at address: 0x20000037d6c0 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_47 00:08:23.211 element at address: 0x20000037d380 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_23 00:08:23.211 element at address: 0x200000379b80 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_48 00:08:23.211 element at address: 0x200000379940 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_49 
00:08:23.211 element at address: 0x200000379600 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_24 00:08:23.211 element at address: 0x200000375e00 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_50 00:08:23.211 element at address: 0x200000375bc0 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_51 00:08:23.211 element at address: 0x200000375880 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_25 00:08:23.211 element at address: 0x200000372080 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_52 00:08:23.211 element at address: 0x200000371e40 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_53 00:08:23.211 element at address: 0x200000371b00 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_26 00:08:23.211 element at address: 0x20000036e300 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_54 00:08:23.211 element at address: 0x20000036e0c0 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_55 00:08:23.211 element at address: 0x20000036dd80 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_27 00:08:23.211 element at address: 0x20000036a580 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_56 00:08:23.211 element at address: 0x20000036a340 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_57 00:08:23.211 element at address: 0x20000036a000 with size: 0.000305 MiB 00:08:23.211 associated memzone 
info: size: 0.000122 MiB name: rte_compressdev_data_28 00:08:23.211 element at address: 0x200000366800 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_58 00:08:23.211 element at address: 0x2000003665c0 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_59 00:08:23.211 element at address: 0x200000366280 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_29 00:08:23.211 element at address: 0x200000362a80 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_60 00:08:23.211 element at address: 0x200000362840 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_61 00:08:23.211 element at address: 0x200000362500 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_30 00:08:23.211 element at address: 0x20000035ed00 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_62 00:08:23.211 element at address: 0x20000035eac0 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_63 00:08:23.211 element at address: 0x20000035e780 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_31 00:08:23.211 element at address: 0x20000035af80 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_64 00:08:23.211 element at address: 0x20000035ad40 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_65 00:08:23.211 element at address: 0x20000035aa00 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_32 00:08:23.211 element at address: 0x200000357200 with 
size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_66 00:08:23.211 element at address: 0x200000356fc0 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_67 00:08:23.211 element at address: 0x200000356c80 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_33 00:08:23.211 element at address: 0x200000353480 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_68 00:08:23.211 element at address: 0x200000353240 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_69 00:08:23.211 element at address: 0x200000352f00 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_34 00:08:23.211 element at address: 0x20000034f700 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_70 00:08:23.211 element at address: 0x20000034f4c0 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_71 00:08:23.211 element at address: 0x20000034f180 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_35 00:08:23.211 element at address: 0x20000034b980 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_72 00:08:23.211 element at address: 0x20000034b740 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_73 00:08:23.211 element at address: 0x20000034b400 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_36 00:08:23.211 element at address: 0x200000347c00 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_74 
00:08:23.211 element at address: 0x2000003479c0 with size: 0.000305 MiB 00:08:23.211 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_75 00:08:23.212 element at address: 0x200000347680 with size: 0.000305 MiB 00:08:23.212 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_37 00:08:23.212 element at address: 0x200000343e80 with size: 0.000305 MiB 00:08:23.212 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_76 00:08:23.212 element at address: 0x200000343c40 with size: 0.000305 MiB 00:08:23.212 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_77 00:08:23.212 element at address: 0x200000343900 with size: 0.000305 MiB 00:08:23.212 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_38 00:08:23.212 element at address: 0x200000340100 with size: 0.000305 MiB 00:08:23.212 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_78 00:08:23.212 element at address: 0x20000033fec0 with size: 0.000305 MiB 00:08:23.212 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_79 00:08:23.212 element at address: 0x20000033fb80 with size: 0.000305 MiB 00:08:23.212 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_39 00:08:23.212 element at address: 0x20000033c380 with size: 0.000305 MiB 00:08:23.212 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_80 00:08:23.212 element at address: 0x20000033c140 with size: 0.000305 MiB 00:08:23.212 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_81 00:08:23.212 element at address: 0x20000033be00 with size: 0.000305 MiB 00:08:23.212 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_40 00:08:23.212 element at address: 0x200000338600 with size: 0.000305 MiB 00:08:23.212 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_82 00:08:23.212 element at address: 0x2000003383c0 with size: 0.000305 MiB 00:08:23.212 associated memzone 
info: size: 0.000122 MiB name: rte_cryptodev_data_83 00:08:23.212 element at address: 0x200000338080 with size: 0.000305 MiB 00:08:23.212 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_41 00:08:23.212 element at address: 0x200000334880 with size: 0.000305 MiB 00:08:23.212 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_84 00:08:23.212 element at address: 0x200000334640 with size: 0.000305 MiB 00:08:23.212 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_85 00:08:23.212 element at address: 0x200000334300 with size: 0.000305 MiB 00:08:23.212 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_42 00:08:23.212 element at address: 0x200000330b00 with size: 0.000305 MiB 00:08:23.212 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_86 00:08:23.212 element at address: 0x2000003308c0 with size: 0.000305 MiB 00:08:23.212 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_87 00:08:23.212 element at address: 0x200000330580 with size: 0.000305 MiB 00:08:23.212 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_43 00:08:23.212 element at address: 0x20000032cd80 with size: 0.000305 MiB 00:08:23.212 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_88 00:08:23.212 element at address: 0x20000032cb40 with size: 0.000305 MiB 00:08:23.212 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_89 00:08:23.212 element at address: 0x20000032c800 with size: 0.000305 MiB 00:08:23.212 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_44 00:08:23.212 element at address: 0x200000329000 with size: 0.000305 MiB 00:08:23.212 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_90 00:08:23.212 element at address: 0x200000328dc0 with size: 0.000305 MiB 00:08:23.212 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_91 00:08:23.212 element at address: 0x200000328a80 with 
size: 0.000305 MiB 00:08:23.212 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_45 00:08:23.212 element at address: 0x200000325280 with size: 0.000305 MiB 00:08:23.212 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_92 00:08:23.212 element at address: 0x200000325040 with size: 0.000305 MiB 00:08:23.212 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_93 00:08:23.212 element at address: 0x200000324d00 with size: 0.000305 MiB 00:08:23.212 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_46 00:08:23.212 element at address: 0x200000321500 with size: 0.000305 MiB 00:08:23.212 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_94 00:08:23.212 element at address: 0x2000003212c0 with size: 0.000305 MiB 00:08:23.212 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_95 00:08:23.212 element at address: 0x200000320f80 with size: 0.000305 MiB 00:08:23.212 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_47 00:08:23.212 element at address: 0x2000003d5900 with size: 0.000244 MiB 00:08:23.212 associated memzone info: size: 0.000061 MiB name: QAT_COMP_CAPA_GEN_1 00:08:23.212 04:03:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:08:23.212 04:03:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2550663 00:08:23.212 04:03:31 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 2550663 ']' 00:08:23.212 04:03:31 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 2550663 00:08:23.212 04:03:31 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:08:23.212 04:03:31 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:23.212 04:03:31 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2550663 00:08:23.212 04:03:31 dpdk_mem_utility -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:08:23.212 04:03:31 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:23.212 04:03:31 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2550663' 00:08:23.212 killing process with pid 2550663 00:08:23.212 04:03:31 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 2550663 00:08:23.212 04:03:31 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 2550663 00:08:26.500 00:08:26.500 real 0m5.465s 00:08:26.500 user 0m5.416s 00:08:26.500 sys 0m0.734s 00:08:26.500 04:03:35 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:26.500 04:03:35 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:26.500 ************************************ 00:08:26.500 END TEST dpdk_mem_utility 00:08:26.500 ************************************ 00:08:26.759 04:03:35 -- common/autotest_common.sh@1142 -- # return 0 00:08:26.759 04:03:35 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event.sh 00:08:26.759 04:03:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:26.760 04:03:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:26.760 04:03:35 -- common/autotest_common.sh@10 -- # set +x 00:08:26.760 ************************************ 00:08:26.760 START TEST event 00:08:26.760 ************************************ 00:08:26.760 04:03:35 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event.sh 00:08:26.760 * Looking for test storage... 
00:08:26.760 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event 00:08:26.760 04:03:35 event -- event/event.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:08:26.760 04:03:35 event -- bdev/nbd_common.sh@6 -- # set -e 00:08:26.760 04:03:35 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:26.760 04:03:35 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:26.760 04:03:35 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:26.760 04:03:35 event -- common/autotest_common.sh@10 -- # set +x 00:08:26.760 ************************************ 00:08:26.760 START TEST event_perf 00:08:26.760 ************************************ 00:08:26.760 04:03:35 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:26.760 Running I/O for 1 seconds...[2024-07-23 04:03:35.523244] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:08:26.760 [2024-07-23 04:03:35.523320] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2551707 ] 00:08:27.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.019 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:27.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.019 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:27.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.019 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:27.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.019 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:27.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.019 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:27.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.019 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:27.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.019 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:27.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.019 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:27.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.019 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:27.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.019 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:27.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.019 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:27.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.019 EAL: Requested device 0000:3d:02.3 cannot be used 
00:08:27.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.019 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:27.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.019 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:27.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.019 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:27.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.019 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:27.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.019 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:27.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.019 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:27.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.019 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:27.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.019 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:27.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.019 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:27.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.019 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:27.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.019 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:27.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.019 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:27.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.019 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:27.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.019 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:27.019 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.019 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:27.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.019 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:27.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.019 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:27.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.019 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:27.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.019 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:27.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.019 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:27.019 [2024-07-23 04:03:35.717674] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:27.278 [2024-07-23 04:03:36.006888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:27.278 [2024-07-23 04:03:36.006970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:27.278 [2024-07-23 04:03:36.007033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.278 [2024-07-23 04:03:36.007040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:29.214 Running I/O for 1 seconds... 00:08:29.214 lcore 0: 175481 00:08:29.214 lcore 1: 175480 00:08:29.214 lcore 2: 175480 00:08:29.214 lcore 3: 175480 00:08:29.214 done. 
00:08:29.214 00:08:29.214 real 0m2.114s 00:08:29.214 user 0m4.889s 00:08:29.214 sys 0m0.215s 00:08:29.214 04:03:37 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:29.214 04:03:37 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:08:29.214 ************************************ 00:08:29.214 END TEST event_perf 00:08:29.214 ************************************ 00:08:29.214 04:03:37 event -- common/autotest_common.sh@1142 -- # return 0 00:08:29.214 04:03:37 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:08:29.214 04:03:37 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:29.214 04:03:37 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.214 04:03:37 event -- common/autotest_common.sh@10 -- # set +x 00:08:29.214 ************************************ 00:08:29.214 START TEST event_reactor 00:08:29.214 ************************************ 00:08:29.214 04:03:37 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:08:29.214 [2024-07-23 04:03:37.723322] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:08:29.214 [2024-07-23 04:03:37.723429] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2552059 ]
00:08:29.215 [2024-07-23 04:03:37.950088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:29.472 [2024-07-23 04:03:38.224542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:31.375 test_start
00:08:31.375 oneshot
00:08:31.375 tick 100
00:08:31.375 tick 100
00:08:31.375 tick 250
00:08:31.375 tick 100
00:08:31.375 tick 100
00:08:31.375 tick 100
00:08:31.375 tick 250
00:08:31.375 tick 500
00:08:31.375 tick 100
00:08:31.375 tick 100
00:08:31.375 tick 250
00:08:31.375 tick 100
00:08:31.375 tick 100
00:08:31.375 test_end
00:08:31.375
00:08:31.375 real 0m2.089s
00:08:31.375 user 0m1.848s
00:08:31.375 sys 0m0.230s
00:08:31.375 04:03:39 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:31.375 04:03:39 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:08:31.375 ************************************
00:08:31.375 END TEST event_reactor
00:08:31.375 ************************************
00:08:31.375 04:03:39 event -- common/autotest_common.sh@1142 -- # return 0
00:08:31.375 04:03:39 event -- event/event.sh@47 -- # run_test event_reactor_perf
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:08:31.375 04:03:39 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:08:31.375 04:03:39 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:31.375 04:03:39 event -- common/autotest_common.sh@10 -- # set +x
00:08:31.375 ************************************
00:08:31.375 START TEST event_reactor_perf
00:08:31.375 ************************************
00:08:31.375 04:03:39 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:08:31.375 [2024-07-23 04:03:39.898656] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
00:08:31.375 [2024-07-23 04:03:39.898763] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2552536 ]
00:08:31.375 [2024-07-23 04:03:40.124594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:31.634 [2024-07-23 04:03:40.389858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:33.535 test_start
00:08:33.535 test_end
00:08:33.535 Performance: 274227 events per second
00:08:33.536
00:08:33.536 real 0m2.082s
00:08:33.536 user 0m1.839s
00:08:33.536 sys 0m0.232s
00:08:33.536 04:03:41 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:33.536 04:03:41 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:08:33.536 ************************************ 00:08:33.536 END TEST event_reactor_perf 00:08:33.536 ************************************ 00:08:33.536 04:03:41 event -- common/autotest_common.sh@1142 -- # return 0 00:08:33.536 04:03:41 event -- event/event.sh@49 -- # uname -s 00:08:33.536 04:03:41 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:08:33.536 04:03:41 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:08:33.536 04:03:41 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:33.536 04:03:41 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.536 04:03:41 event -- common/autotest_common.sh@10 -- # set +x 00:08:33.536 ************************************ 00:08:33.536 START TEST event_scheduler 00:08:33.536 ************************************ 00:08:33.536 04:03:42 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:08:33.536 * Looking for test storage... 
00:08:33.536 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler 00:08:33.536 04:03:42 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:08:33.536 04:03:42 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2552858 00:08:33.536 04:03:42 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:08:33.536 04:03:42 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:08:33.536 04:03:42 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2552858 00:08:33.536 04:03:42 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 2552858 ']' 00:08:33.536 04:03:42 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.536 04:03:42 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:33.536 04:03:42 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.536 04:03:42 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:33.536 04:03:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:33.536 [2024-07-23 04:03:42.223037] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:08:33.536 [2024-07-23 04:03:42.223163] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2552858 ]
00:08:33.795 [2024-07-23 04:03:42.411431] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:08:34.054 [2024-07-23 04:03:42.631954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:34.054 [2024-07-23 04:03:42.631996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:08:34.054 [2024-07-23 04:03:42.632052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:08:34.054 [2024-07-23 04:03:42.632060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:08:34.313 04:03:43 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:08:34.313 04:03:43 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0
00:08:34.313 04:03:43 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:08:34.313 04:03:43 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:34.313 04:03:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:08:34.313 [2024-07-23 04:03:43.094359] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings
00:08:34.313 [2024-07-23 04:03:43.094399] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:08:34.313 [2024-07-23 04:03:43.094417] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:08:34.313 [2024-07-23 04:03:43.094429] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:08:34.313 [2024-07-23 04:03:43.094440] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:08:34.572 04:03:43 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.572 04:03:43 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:08:34.572 04:03:43 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.572 04:03:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:34.830 [2024-07-23 04:03:43.423089] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:08:34.830 04:03:43 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.830 04:03:43 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:08:34.830 04:03:43 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:34.830 04:03:43 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:34.830 04:03:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:34.830 ************************************ 00:08:34.830 START TEST scheduler_create_thread 00:08:34.830 ************************************ 00:08:34.830 04:03:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:08:34.830 04:03:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:08:34.830 04:03:43 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.830 04:03:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:34.830 2 00:08:34.830 04:03:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.830 04:03:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:08:34.830 04:03:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.830 04:03:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:34.830 3 00:08:34.830 04:03:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.830 04:03:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:08:34.830 04:03:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.830 04:03:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:34.830 4 00:08:34.830 04:03:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.830 04:03:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:08:34.830 04:03:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.830 04:03:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:34.830 5 00:08:34.830 04:03:43 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.831 04:03:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:08:34.831 04:03:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.831 04:03:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:34.831 6 00:08:34.831 04:03:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.831 04:03:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:08:34.831 04:03:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.831 04:03:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:34.831 7 00:08:34.831 04:03:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.831 04:03:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:08:34.831 04:03:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.831 04:03:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:34.831 8 00:08:34.831 04:03:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.831 04:03:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:08:34.831 04:03:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 
-- # xtrace_disable 00:08:34.831 04:03:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:34.831 9 00:08:34.831 04:03:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.831 04:03:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:08:34.831 04:03:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.831 04:03:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:34.831 10 00:08:34.831 04:03:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.831 04:03:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:08:34.831 04:03:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.831 04:03:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:34.831 04:03:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.831 04:03:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:08:34.831 04:03:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:08:34.831 04:03:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.831 04:03:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:34.831 04:03:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:08:34.831 04:03:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:08:34.831 04:03:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.831 04:03:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:36.735 04:03:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.735 04:03:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:08:36.735 04:03:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:08:36.735 04:03:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.735 04:03:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:37.670 04:03:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.671 00:08:37.671 real 0m2.626s 00:08:37.671 user 0m0.021s 00:08:37.671 sys 0m0.010s 00:08:37.671 04:03:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:37.671 04:03:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:37.671 ************************************ 00:08:37.671 END TEST scheduler_create_thread 00:08:37.671 ************************************ 00:08:37.671 04:03:46 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:08:37.671 04:03:46 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:08:37.671 04:03:46 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2552858 00:08:37.671 04:03:46 event.event_scheduler -- 
common/autotest_common.sh@948 -- # '[' -z 2552858 ']' 00:08:37.671 04:03:46 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 2552858 00:08:37.671 04:03:46 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:08:37.671 04:03:46 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:37.671 04:03:46 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2552858 00:08:37.671 04:03:46 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:08:37.671 04:03:46 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:08:37.671 04:03:46 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2552858' 00:08:37.671 killing process with pid 2552858 00:08:37.671 04:03:46 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 2552858 00:08:37.671 04:03:46 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 2552858 00:08:37.929 [2024-07-23 04:03:46.572055] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:08:39.308 00:08:39.308 real 0m5.817s 00:08:39.308 user 0m9.652s 00:08:39.308 sys 0m0.639s 00:08:39.308 04:03:47 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:39.308 04:03:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:39.308 ************************************ 00:08:39.308 END TEST event_scheduler 00:08:39.308 ************************************ 00:08:39.308 04:03:47 event -- common/autotest_common.sh@1142 -- # return 0 00:08:39.308 04:03:47 event -- event/event.sh@51 -- # modprobe -n nbd 00:08:39.308 04:03:47 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:08:39.308 04:03:47 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:39.308 04:03:47 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:39.308 04:03:47 event -- common/autotest_common.sh@10 -- # set +x 00:08:39.308 ************************************ 00:08:39.308 START TEST app_repeat 00:08:39.308 ************************************ 00:08:39.308 04:03:47 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:08:39.308 04:03:47 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:39.308 04:03:47 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:39.308 04:03:47 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:08:39.308 04:03:47 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:39.308 04:03:47 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:08:39.308 04:03:47 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:08:39.308 04:03:47 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:08:39.308 04:03:47 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2553954 00:08:39.308 04:03:47 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/app_repeat/app_repeat -r 
/var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:08:39.308 04:03:47 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:08:39.308 04:03:47 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2553954' 00:08:39.308 Process app_repeat pid: 2553954 00:08:39.308 04:03:47 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:39.308 04:03:47 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:08:39.308 spdk_app_start Round 0 00:08:39.308 04:03:47 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2553954 /var/tmp/spdk-nbd.sock 00:08:39.308 04:03:47 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2553954 ']' 00:08:39.308 04:03:47 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:39.308 04:03:47 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:39.308 04:03:47 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:39.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:39.308 04:03:47 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:39.308 04:03:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:39.308 [2024-07-23 04:03:48.000776] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:08:39.308 [2024-07-23 04:03:48.000889] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2553954 ] 00:08:39.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.567 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:39.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.567 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:39.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.567 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:39.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.568 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:39.568 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.568 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:39.568 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.568 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:39.568 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.568 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:39.568 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.568 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:39.568 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.568 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:39.568 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.568 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:39.568 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.568 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:39.568 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.568 EAL: Requested device 0000:3d:02.3 cannot be used 
00:08:39.568 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.568 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:39.568 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.568 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:39.568 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.568 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:39.568 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.568 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:39.568 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.568 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:39.568 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.568 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:39.568 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.568 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:39.568 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.568 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:39.568 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.568 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:39.568 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.568 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:39.568 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.568 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:39.568 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.568 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:39.568 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.568 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:39.568 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.568 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:39.568 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.568 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:39.568 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.568 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:39.568 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.568 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:39.568 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.568 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:39.568 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.568 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:39.568 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.568 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:39.568 [2024-07-23 04:03:48.228255] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:39.827 [2024-07-23 04:03:48.500652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.827 [2024-07-23 04:03:48.500660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:40.394 04:03:48 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:40.394 04:03:48 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:08:40.394 04:03:48 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:40.653 Malloc0 00:08:40.653 04:03:49 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:40.912 Malloc1 00:08:40.912 04:03:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:40.912 04:03:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:40.912 
04:03:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:40.912 04:03:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:40.912 04:03:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:40.912 04:03:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:40.912 04:03:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:40.912 04:03:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:40.912 04:03:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:40.912 04:03:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:40.912 04:03:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:40.912 04:03:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:40.912 04:03:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:40.912 04:03:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:40.912 04:03:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:40.912 04:03:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:41.171 /dev/nbd0 00:08:41.171 04:03:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:41.171 04:03:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:41.171 04:03:49 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:41.171 04:03:49 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:08:41.171 04:03:49 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:41.171 04:03:49 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:41.171 
04:03:49 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:41.171 04:03:49 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:08:41.171 04:03:49 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:41.171 04:03:49 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:41.171 04:03:49 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:41.171 1+0 records in 00:08:41.171 1+0 records out 00:08:41.171 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259026 s, 15.8 MB/s 00:08:41.171 04:03:49 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:08:41.171 04:03:49 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:08:41.171 04:03:49 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:08:41.171 04:03:49 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:41.171 04:03:49 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:08:41.171 04:03:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:41.171 04:03:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:41.171 04:03:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:41.464 /dev/nbd1 00:08:41.464 04:03:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:41.464 04:03:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:41.464 04:03:50 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:41.464 04:03:50 event.app_repeat -- common/autotest_common.sh@867 -- # local i 
00:08:41.464 04:03:50 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:41.464 04:03:50 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:41.464 04:03:50 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:41.464 04:03:50 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:08:41.464 04:03:50 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:41.464 04:03:50 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:41.464 04:03:50 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:41.464 1+0 records in 00:08:41.464 1+0 records out 00:08:41.464 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000186541 s, 22.0 MB/s 00:08:41.464 04:03:50 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:08:41.464 04:03:50 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:08:41.464 04:03:50 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:08:41.464 04:03:50 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:41.464 04:03:50 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:08:41.464 04:03:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:41.464 04:03:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:41.465 04:03:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:41.465 04:03:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:41.465 04:03:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:08:41.723 04:03:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:41.723 { 00:08:41.723 "nbd_device": "/dev/nbd0", 00:08:41.723 "bdev_name": "Malloc0" 00:08:41.723 }, 00:08:41.723 { 00:08:41.723 "nbd_device": "/dev/nbd1", 00:08:41.724 "bdev_name": "Malloc1" 00:08:41.724 } 00:08:41.724 ]' 00:08:41.724 04:03:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:41.724 { 00:08:41.724 "nbd_device": "/dev/nbd0", 00:08:41.724 "bdev_name": "Malloc0" 00:08:41.724 }, 00:08:41.724 { 00:08:41.724 "nbd_device": "/dev/nbd1", 00:08:41.724 "bdev_name": "Malloc1" 00:08:41.724 } 00:08:41.724 ]' 00:08:41.724 04:03:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:41.724 04:03:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:41.724 /dev/nbd1' 00:08:41.724 04:03:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:41.724 /dev/nbd1' 00:08:41.724 04:03:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:41.724 04:03:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:41.724 04:03:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:41.724 04:03:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:41.724 04:03:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:41.724 04:03:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:41.724 04:03:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:41.724 04:03:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:41.724 04:03:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:41.724 04:03:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:08:41.724 04:03:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 
00:08:41.724 04:03:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:41.724 256+0 records in 00:08:41.724 256+0 records out 00:08:41.724 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114964 s, 91.2 MB/s 00:08:41.724 04:03:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:41.724 04:03:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:41.983 256+0 records in 00:08:41.983 256+0 records out 00:08:41.983 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0201703 s, 52.0 MB/s 00:08:41.983 04:03:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:41.983 04:03:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:41.983 256+0 records in 00:08:41.983 256+0 records out 00:08:41.983 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0247863 s, 42.3 MB/s 00:08:41.983 04:03:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:41.983 04:03:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:41.983 04:03:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:41.983 04:03:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:41.983 04:03:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:08:41.983 04:03:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:41.983 04:03:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:41.983 04:03:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 
00:08:41.983 04:03:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:41.983 04:03:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:41.983 04:03:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:41.983 04:03:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:08:41.983 04:03:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:41.983 04:03:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:41.983 04:03:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:41.983 04:03:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:41.983 04:03:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:41.983 04:03:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:41.983 04:03:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:42.242 04:03:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:42.242 04:03:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:42.242 04:03:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:42.242 04:03:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:42.242 04:03:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:42.242 04:03:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:42.242 04:03:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:42.242 04:03:50 event.app_repeat -- 
bdev/nbd_common.sh@45 -- # return 0 00:08:42.242 04:03:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:42.242 04:03:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:42.242 04:03:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:42.242 04:03:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:42.242 04:03:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:42.242 04:03:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:42.242 04:03:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:42.242 04:03:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:42.242 04:03:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:42.242 04:03:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:42.242 04:03:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:42.242 04:03:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:42.242 04:03:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:42.501 04:03:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:42.501 04:03:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:42.501 04:03:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:42.501 04:03:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:42.501 04:03:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:42.501 04:03:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:42.501 04:03:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:42.760 04:03:51 
event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:42.760 04:03:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:42.760 04:03:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:42.760 04:03:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:42.760 04:03:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:42.760 04:03:51 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:43.019 04:03:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:44.922 [2024-07-23 04:03:53.676783] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:45.181 [2024-07-23 04:03:53.949166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.181 [2024-07-23 04:03:53.949170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.748 [2024-07-23 04:03:54.243841] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:45.748 [2024-07-23 04:03:54.243901] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:08:46.007 04:03:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:46.007 04:03:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:46.007 spdk_app_start Round 1 00:08:46.007 04:03:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2553954 /var/tmp/spdk-nbd.sock 00:08:46.007 04:03:54 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2553954 ']' 00:08:46.007 04:03:54 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:46.007 04:03:54 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:46.007 04:03:54 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:46.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:46.007 04:03:54 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:46.007 04:03:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:46.266 04:03:54 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:46.266 04:03:54 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:08:46.266 04:03:54 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:46.525 Malloc0 00:08:46.525 04:03:55 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:46.784 Malloc1 00:08:46.784 04:03:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:46.784 04:03:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:46.784 04:03:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # 
bdev_list=('Malloc0' 'Malloc1') 00:08:46.784 04:03:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:46.784 04:03:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:46.784 04:03:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:46.784 04:03:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:46.784 04:03:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:46.784 04:03:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:46.784 04:03:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:46.784 04:03:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:46.784 04:03:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:46.784 04:03:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:46.784 04:03:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:46.784 04:03:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:46.784 04:03:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:47.043 /dev/nbd0 00:08:47.043 04:03:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:47.043 04:03:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:47.043 04:03:55 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:47.043 04:03:55 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:08:47.043 04:03:55 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:47.043 04:03:55 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:47.043 04:03:55 event.app_repeat -- common/autotest_common.sh@870 -- # 
grep -q -w nbd0 /proc/partitions 00:08:47.043 04:03:55 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:08:47.043 04:03:55 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:47.043 04:03:55 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:47.043 04:03:55 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:47.043 1+0 records in 00:08:47.043 1+0 records out 00:08:47.043 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000232854 s, 17.6 MB/s 00:08:47.043 04:03:55 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:08:47.043 04:03:55 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:08:47.043 04:03:55 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:08:47.043 04:03:55 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:47.043 04:03:55 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:08:47.043 04:03:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:47.043 04:03:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:47.043 04:03:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:47.302 /dev/nbd1 00:08:47.302 04:03:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:47.302 04:03:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:47.302 04:03:55 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:47.302 04:03:55 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:08:47.302 04:03:55 event.app_repeat -- common/autotest_common.sh@869 -- 
# (( i = 1 )) 00:08:47.302 04:03:55 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:47.302 04:03:55 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:47.302 04:03:55 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:08:47.302 04:03:55 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:47.302 04:03:55 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:47.302 04:03:55 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:47.302 1+0 records in 00:08:47.302 1+0 records out 00:08:47.302 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028771 s, 14.2 MB/s 00:08:47.302 04:03:55 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:08:47.302 04:03:55 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:08:47.302 04:03:55 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:08:47.302 04:03:55 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:47.302 04:03:55 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:08:47.302 04:03:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:47.302 04:03:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:47.302 04:03:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:47.302 04:03:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:47.302 04:03:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:47.613 04:03:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:08:47.613 { 00:08:47.613 "nbd_device": "/dev/nbd0", 00:08:47.613 "bdev_name": "Malloc0" 00:08:47.613 }, 00:08:47.613 { 00:08:47.613 "nbd_device": "/dev/nbd1", 00:08:47.613 "bdev_name": "Malloc1" 00:08:47.613 } 00:08:47.613 ]' 00:08:47.613 04:03:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:47.613 { 00:08:47.613 "nbd_device": "/dev/nbd0", 00:08:47.613 "bdev_name": "Malloc0" 00:08:47.613 }, 00:08:47.613 { 00:08:47.613 "nbd_device": "/dev/nbd1", 00:08:47.613 "bdev_name": "Malloc1" 00:08:47.613 } 00:08:47.613 ]' 00:08:47.613 04:03:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:47.613 04:03:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:47.613 /dev/nbd1' 00:08:47.613 04:03:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:47.613 /dev/nbd1' 00:08:47.613 04:03:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:47.613 04:03:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:47.613 04:03:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:47.613 04:03:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:47.613 04:03:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:47.613 04:03:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:47.613 04:03:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:47.613 04:03:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:47.613 04:03:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:47.613 04:03:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:08:47.613 04:03:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:47.613 04:03:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd 
if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:47.613 256+0 records in 00:08:47.613 256+0 records out 00:08:47.613 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00878045 s, 119 MB/s 00:08:47.613 04:03:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:47.613 04:03:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:47.613 256+0 records in 00:08:47.613 256+0 records out 00:08:47.613 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0208855 s, 50.2 MB/s 00:08:47.613 04:03:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:47.613 04:03:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:47.613 256+0 records in 00:08:47.613 256+0 records out 00:08:47.613 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241897 s, 43.3 MB/s 00:08:47.613 04:03:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:47.613 04:03:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:47.613 04:03:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:47.613 04:03:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:47.613 04:03:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:08:47.613 04:03:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:47.613 04:03:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:47.613 04:03:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:47.613 04:03:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b 
-n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:47.613 04:03:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:47.613 04:03:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:47.613 04:03:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:08:47.613 04:03:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:47.613 04:03:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:47.613 04:03:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:47.613 04:03:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:47.613 04:03:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:47.613 04:03:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:47.613 04:03:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:47.871 04:03:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:47.871 04:03:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:47.871 04:03:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:47.871 04:03:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:47.871 04:03:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:47.871 04:03:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:47.871 04:03:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:47.871 04:03:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:47.871 04:03:56 event.app_repeat -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:47.871 04:03:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:48.129 04:03:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:48.129 04:03:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:48.129 04:03:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:48.129 04:03:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:48.129 04:03:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:48.129 04:03:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:48.129 04:03:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:48.129 04:03:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:48.129 04:03:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:48.129 04:03:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:48.129 04:03:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:48.388 04:03:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:48.388 04:03:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:48.388 04:03:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:48.388 04:03:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:48.388 04:03:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:48.388 04:03:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:48.388 04:03:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:48.388 04:03:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:48.388 04:03:57 
event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:48.388 04:03:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:48.388 04:03:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:48.388 04:03:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:48.388 04:03:57 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:48.956 04:03:57 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:50.857 [2024-07-23 04:03:59.609869] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:51.115 [2024-07-23 04:03:59.896101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.115 [2024-07-23 04:03:59.896102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.682 [2024-07-23 04:04:00.188914] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:51.682 [2024-07-23 04:04:00.188974] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:51.941 04:04:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:51.941 04:04:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:51.941 spdk_app_start Round 2 00:08:51.941 04:04:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2553954 /var/tmp/spdk-nbd.sock 00:08:51.941 04:04:00 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2553954 ']' 00:08:51.941 04:04:00 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:51.941 04:04:00 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:51.941 04:04:00 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
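The nbd_dd_data_verify cycle traced above writes a 1 MiB random reference file (`nbdrandtest`) onto each /dev/nbdX with dd, then reads each device back with `cmp -b -n 1M`. A standalone sketch of that write-then-verify pattern, with temp files standing in for the real nbd devices (assumed unavailable outside the test rig, so the `iflag=direct`/`oflag=direct` flags are dropped):

```shell
#!/bin/sh
# Sketch of the nbd_dd_data_verify pattern: temp files stand in
# for /dev/nbd0 and /dev/nbd1.
set -e
tmpdir=$(mktemp -d)
ref="$tmpdir/nbdrandtest"

# write phase: 256 x 4 KiB random blocks onto each "device"
dd if=/dev/urandom of="$ref" bs=4096 count=256 2>/dev/null
for dev in "$tmpdir/nbd0" "$tmpdir/nbd1"; do
    dd if="$ref" of="$dev" bs=4096 count=256 2>/dev/null
done

# verify phase: byte-compare the first 1 MiB of each "device"
for dev in "$tmpdir/nbd0" "$tmpdir/nbd1"; do
    cmp -b -n 1M "$ref" "$dev"
done
result=verified
echo "$result"
rm -rf "$tmpdir"
```

Because `set -e` is active, any `cmp` mismatch aborts the script before `verified` is printed, which is the same pass/fail signal the trace relies on.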
00:08:51.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:51.941 04:04:00 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:51.941 04:04:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:52.199 04:04:00 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:52.199 04:04:00 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:08:52.199 04:04:00 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:52.458 Malloc0 00:08:52.458 04:04:01 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:52.716 Malloc1 00:08:52.716 04:04:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:52.716 04:04:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:52.716 04:04:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:52.716 04:04:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:52.717 04:04:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:52.717 04:04:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:52.717 04:04:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:52.717 04:04:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:52.717 04:04:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:52.717 04:04:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:52.717 04:04:01 event.app_repeat -- bdev/nbd_common.sh@11 -- 
# nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:52.717 04:04:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:52.717 04:04:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:52.717 04:04:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:52.717 04:04:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:52.717 04:04:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:52.975 /dev/nbd0 00:08:52.975 04:04:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:52.975 04:04:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:52.975 04:04:01 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:52.975 04:04:01 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:08:52.975 04:04:01 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:52.975 04:04:01 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:52.975 04:04:01 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:52.975 04:04:01 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:08:52.975 04:04:01 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:52.975 04:04:01 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:52.975 04:04:01 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:52.975 1+0 records in 00:08:52.975 1+0 records out 00:08:52.975 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252249 s, 16.2 MB/s 00:08:52.975 04:04:01 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:08:52.975 04:04:01 event.app_repeat 
-- common/autotest_common.sh@884 -- # size=4096 00:08:52.975 04:04:01 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:08:52.975 04:04:01 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:52.975 04:04:01 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:08:52.975 04:04:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:52.975 04:04:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:52.975 04:04:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:53.233 /dev/nbd1 00:08:53.233 04:04:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:53.234 04:04:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:53.234 04:04:01 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:53.234 04:04:01 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:08:53.234 04:04:01 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:53.234 04:04:01 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:53.234 04:04:01 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:53.234 04:04:01 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:08:53.234 04:04:01 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:53.234 04:04:01 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:53.234 04:04:01 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:53.234 1+0 records in 00:08:53.234 1+0 records out 00:08:53.234 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255667 s, 16.0 MB/s 00:08:53.234 
04:04:01 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:08:53.234 04:04:01 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:08:53.234 04:04:01 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:08:53.234 04:04:01 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:53.234 04:04:01 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:08:53.234 04:04:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:53.234 04:04:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:53.234 04:04:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:53.234 04:04:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:53.234 04:04:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:53.492 04:04:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:53.492 { 00:08:53.492 "nbd_device": "/dev/nbd0", 00:08:53.492 "bdev_name": "Malloc0" 00:08:53.492 }, 00:08:53.492 { 00:08:53.492 "nbd_device": "/dev/nbd1", 00:08:53.492 "bdev_name": "Malloc1" 00:08:53.492 } 00:08:53.492 ]' 00:08:53.492 04:04:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:53.492 { 00:08:53.492 "nbd_device": "/dev/nbd0", 00:08:53.492 "bdev_name": "Malloc0" 00:08:53.492 }, 00:08:53.492 { 00:08:53.492 "nbd_device": "/dev/nbd1", 00:08:53.492 "bdev_name": "Malloc1" 00:08:53.492 } 00:08:53.492 ]' 00:08:53.492 04:04:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:53.492 04:04:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:53.492 /dev/nbd1' 00:08:53.492 04:04:02 event.app_repeat -- 
bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:53.492 /dev/nbd1' 00:08:53.492 04:04:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:53.492 04:04:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:53.492 04:04:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:53.492 04:04:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:53.492 04:04:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:53.492 04:04:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:53.492 04:04:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:53.492 04:04:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:53.492 04:04:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:53.492 04:04:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:08:53.492 04:04:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:53.492 04:04:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:53.492 256+0 records in 00:08:53.492 256+0 records out 00:08:53.492 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107144 s, 97.9 MB/s 00:08:53.492 04:04:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:53.492 04:04:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:53.750 256+0 records in 00:08:53.750 256+0 records out 00:08:53.750 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0206481 s, 50.8 MB/s 00:08:53.750 04:04:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:53.750 04:04:02 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:53.750 256+0 records in 00:08:53.750 256+0 records out 00:08:53.750 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0253846 s, 41.3 MB/s 00:08:53.750 04:04:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:53.750 04:04:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:53.750 04:04:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:53.750 04:04:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:53.751 04:04:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:08:53.751 04:04:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:53.751 04:04:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:53.751 04:04:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:53.751 04:04:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:53.751 04:04:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:53.751 04:04:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:53.751 04:04:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:08:53.751 04:04:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:53.751 04:04:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:53.751 04:04:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:53.751 04:04:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:53.751 04:04:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:53.751 04:04:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:53.751 04:04:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:54.009 04:04:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:54.009 04:04:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:54.009 04:04:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:54.009 04:04:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:54.009 04:04:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:54.009 04:04:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:54.009 04:04:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:54.009 04:04:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:54.009 04:04:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:54.009 04:04:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:54.267 04:04:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:54.267 04:04:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:54.267 04:04:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:54.267 04:04:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:54.267 04:04:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:54.267 04:04:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:54.267 04:04:02 
event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:54.267 04:04:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:54.267 04:04:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:54.267 04:04:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:54.267 04:04:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:54.267 04:04:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:54.267 04:04:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:54.268 04:04:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:54.526 04:04:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:54.526 04:04:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:54.526 04:04:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:54.526 04:04:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:54.526 04:04:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:54.526 04:04:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:54.526 04:04:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:54.526 04:04:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:54.526 04:04:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:54.526 04:04:03 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:54.785 04:04:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:57.318 [2024-07-23 04:04:05.517995] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:57.318 [2024-07-23 04:04:05.798637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.318 [2024-07-23 
04:04:05.798639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.576 [2024-07-23 04:04:06.114517] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:57.576 [2024-07-23 04:04:06.114575] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:57.835 04:04:06 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2553954 /var/tmp/spdk-nbd.sock 00:08:57.835 04:04:06 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2553954 ']' 00:08:57.835 04:04:06 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:57.835 04:04:06 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:57.835 04:04:06 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:57.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
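The nbd_get_count helper seen throughout the trace never parses numbers out of the RPC reply directly: it extracts `.nbd_device` with jq, then counts `/dev/nbd` matches with `grep -c`, falling through `true` when the list is empty (grep exits non-zero on zero matches, as in the `count=0` path above). A self-contained sketch of the same pipeline, with static JSON in place of the `rpc.py ... nbd_get_disks` call:

```shell
#!/bin/sh
# Sketch of the nbd_get_count pipeline; static JSON stands in for
# the rpc.py nbd_get_disks reply.
count_nbd() {
    json=$1
    names=$(echo "$json" | jq -r '.[] | .nbd_device')
    # grep -c prints 0 but exits 1 on no match, hence the `|| true`
    echo "$names" | grep -c /dev/nbd || true
}

two='[{"nbd_device": "/dev/nbd0", "bdev_name": "Malloc0"},
      {"nbd_device": "/dev/nbd1", "bdev_name": "Malloc1"}]'
count=$(count_nbd "$two")   # 2, as after nbd_start_disk in the trace
empty=$(count_nbd '[]')     # 0, as after nbd_stop_disk
echo "count=$count empty=$empty"
```

The `|| true` guard is what lets the helper distinguish "zero disks attached" from a genuine failure: the count is still emitted on stdout either way.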
00:08:57.835 04:04:06 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:57.835 04:04:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:58.094 04:04:06 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:58.094 04:04:06 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:08:58.094 04:04:06 event.app_repeat -- event/event.sh@39 -- # killprocess 2553954 00:08:58.094 04:04:06 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 2553954 ']' 00:08:58.094 04:04:06 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 2553954 00:08:58.094 04:04:06 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:08:58.094 04:04:06 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:58.094 04:04:06 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2553954 00:08:58.094 04:04:06 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:58.094 04:04:06 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:58.094 04:04:06 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2553954' 00:08:58.094 killing process with pid 2553954 00:08:58.094 04:04:06 event.app_repeat -- common/autotest_common.sh@967 -- # kill 2553954 00:08:58.094 04:04:06 event.app_repeat -- common/autotest_common.sh@972 -- # wait 2553954 00:08:59.998 spdk_app_start is called in Round 0. 00:08:59.998 Shutdown signal received, stop current app iteration 00:08:59.999 Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 reinitialization... 00:08:59.999 spdk_app_start is called in Round 1. 00:08:59.999 Shutdown signal received, stop current app iteration 00:08:59.999 Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 reinitialization... 00:08:59.999 spdk_app_start is called in Round 2. 
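killprocess, as traced above, first probes the PID with `kill -0`, then (on Linux) reads the process name via `ps --no-headers -o comm=` and refuses to kill a bare `sudo` wrapper, and finally kills and waits. A sketch of that guard sequence run against a throwaway `sleep` child; the `reactor_0` name seen in the trace is SPDK-specific and only appears here as the value the guard compares against `sudo`:

```shell
#!/bin/sh
# Sketch of the killprocess guard sequence from the trace.
killprocess() {
    pid=$1
    kill -0 "$pid" || return 1              # is the process alive at all?
    if [ "$(uname)" = Linux ]; then
        name=$(ps --no-headers -o comm= "$pid")
        # the trace refuses to kill a bare `sudo` wrapper process
        if [ "$name" = sudo ]; then
            return 1
        fi
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                 # reap it; exit 143 is expected
    return 0
}

sleep 60 &
victim=$!
killprocess "$victim"
status=$?
echo "status=$status"
```

The explicit `wait` matters: without it the child lingers as a zombie and a later `kill -0` on the same PID would still succeed, defeating the liveness probe.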
00:08:59.999 Shutdown signal received, stop current app iteration 00:08:59.999 Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 reinitialization... 00:08:59.999 spdk_app_start is called in Round 3. 00:08:59.999 Shutdown signal received, stop current app iteration 00:08:59.999 04:04:08 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:59.999 04:04:08 event.app_repeat -- event/event.sh@42 -- # return 0 00:08:59.999 00:08:59.999 real 0m20.494s 00:08:59.999 user 0m40.453s 00:08:59.999 sys 0m3.664s 00:08:59.999 04:04:08 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:59.999 04:04:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:59.999 ************************************ 00:08:59.999 END TEST app_repeat 00:08:59.999 ************************************ 00:08:59.999 04:04:08 event -- common/autotest_common.sh@1142 -- # return 0 00:08:59.999 04:04:08 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:59.999 00:08:59.999 real 0m33.137s 00:08:59.999 user 0m58.884s 00:08:59.999 sys 0m5.362s 00:08:59.999 04:04:08 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:59.999 04:04:08 event -- common/autotest_common.sh@10 -- # set +x 00:08:59.999 ************************************ 00:08:59.999 END TEST event 00:08:59.999 ************************************ 00:08:59.999 04:04:08 -- common/autotest_common.sh@1142 -- # return 0 00:08:59.999 04:04:08 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/thread.sh 00:08:59.999 04:04:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:59.999 04:04:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:59.999 04:04:08 -- common/autotest_common.sh@10 -- # set +x 00:08:59.999 ************************************ 00:08:59.999 START TEST thread 00:08:59.999 ************************************ 00:08:59.999 04:04:08 thread -- common/autotest_common.sh@1123 
-- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/thread.sh 00:08:59.999 * Looking for test storage... 00:08:59.999 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread 00:08:59.999 04:04:08 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:59.999 04:04:08 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:08:59.999 04:04:08 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:59.999 04:04:08 thread -- common/autotest_common.sh@10 -- # set +x 00:08:59.999 ************************************ 00:08:59.999 START TEST thread_poller_perf 00:08:59.999 ************************************ 00:08:59.999 04:04:08 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:59.999 [2024-07-23 04:04:08.734682] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:08:59.999 [2024-07-23 04:04:08.734787] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2557657 ]
00:09:00.257 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:00.257 EAL: Requested device 0000:3d:01.0 cannot be used
00:09:00.257 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:00.257 EAL: Requested device 0000:3d:01.1 cannot be used
00:09:00.257 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:00.257 EAL: Requested device 0000:3d:01.2 cannot be used
00:09:00.257 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:00.257 EAL: Requested device 0000:3d:01.3 cannot be used
00:09:00.257 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:00.257 EAL: Requested device 0000:3d:01.4 cannot be used
00:09:00.257 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:00.257 EAL: Requested device 0000:3d:01.5 cannot be used
00:09:00.257 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:00.257 EAL: Requested device 0000:3d:01.6 cannot be used
00:09:00.257 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:00.257 EAL: Requested device 0000:3d:01.7 cannot be used
00:09:00.257 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:00.257 EAL: Requested device 0000:3d:02.0 cannot be used
00:09:00.257 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:00.257 EAL: Requested device 0000:3d:02.1 cannot be used
00:09:00.257 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:00.257 EAL: Requested device 0000:3d:02.2 cannot be used
00:09:00.257 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:00.257 EAL: Requested device 0000:3d:02.3 cannot be used
00:09:00.257 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:00.257 EAL: Requested device 0000:3d:02.4 cannot be used
00:09:00.257 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:00.257 EAL: Requested device 0000:3d:02.5 cannot be used
00:09:00.257 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:00.257 EAL: Requested device 0000:3d:02.6 cannot be used
00:09:00.257 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:00.257 EAL: Requested device 0000:3d:02.7 cannot be used
00:09:00.257 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:00.257 EAL: Requested device 0000:3f:01.0 cannot be used
00:09:00.257 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:00.257 EAL: Requested device 0000:3f:01.1 cannot be used
00:09:00.257 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:00.257 EAL: Requested device 0000:3f:01.2 cannot be used
00:09:00.257 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:00.257 EAL: Requested device 0000:3f:01.3 cannot be used
00:09:00.257 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:00.257 EAL: Requested device 0000:3f:01.4 cannot be used
00:09:00.257 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:00.257 EAL: Requested device 0000:3f:01.5 cannot be used
00:09:00.257 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:00.257 EAL: Requested device 0000:3f:01.6 cannot be used
00:09:00.257 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:00.257 EAL: Requested device 0000:3f:01.7 cannot be used
00:09:00.257 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:00.257 EAL: Requested device 0000:3f:02.0 cannot be used
00:09:00.257 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:00.257 EAL: Requested device 0000:3f:02.1 cannot be used
00:09:00.257 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:00.257 EAL: Requested device 0000:3f:02.2 cannot be used
00:09:00.257 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:00.257 EAL: Requested device 0000:3f:02.3 cannot be used
00:09:00.257 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:00.257 EAL: Requested device 0000:3f:02.4 cannot be used
00:09:00.257 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:00.257 EAL: Requested device 0000:3f:02.5 cannot be used
00:09:00.257 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:00.257 EAL: Requested device 0000:3f:02.6 cannot be used
00:09:00.257 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:00.257 EAL: Requested device 0000:3f:02.7 cannot be used
00:09:00.257 [2024-07-23 04:04:08.983367] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:00.895 [2024-07-23 04:04:09.306141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:00.895 Running 1000 pollers for 1 seconds with 1 microseconds period.
00:09:02.272 ======================================
00:09:02.272 busy:2517641256 (cyc)
00:09:02.272 total_run_count: 280000
00:09:02.272 tsc_hz: 2500000000 (cyc)
00:09:02.272 ======================================
00:09:02.272 poller_cost: 8991 (cyc), 3596 (nsec)
00:09:02.272
00:09:02.272 real 0m2.174s
00:09:02.272 user 0m1.920s
00:09:02.272 sys 0m0.243s
00:09:02.272 04:04:10 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:02.272 04:04:10 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:09:02.272 ************************************
00:09:02.272 END TEST thread_poller_perf
00:09:02.272 ************************************
00:09:02.272 04:04:10 thread -- common/autotest_common.sh@1142 -- # return 0
00:09:02.272 04:04:10 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:09:02.272 04:04:10 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']'
00:09:02.272 04:04:10 thread -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:02.272 04:04:10 thread -- common/autotest_common.sh@10 -- # set +x
00:09:02.272 ************************************
00:09:02.272 START TEST thread_poller_perf
00:09:02.272 ************************************
00:09:02.272 04:04:10 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:09:02.272 [2024-07-23 04:04:11.000341] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
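[Editor's note] The poller_cost figure in the first summary block above follows directly from the other counters in that block: busy cycles divided by total_run_count gives cycles per poll, and dividing by tsc_hz converts cycles to nanoseconds. A minimal sketch of that arithmetic, using only numbers from the log (the floor-division behaviour is an assumption that happens to match the reported values):

```python
def poller_cost(busy_cycles: int, run_count: int, tsc_hz: int = 2_500_000_000):
    """Reproduce poller_perf's summary arithmetic: per-poll cost in cycles and ns."""
    cyc = busy_cycles // run_count          # average TSC cycles per poller invocation
    nsec = cyc * 1_000_000_000 // tsc_hz    # convert cycles to nanoseconds at the TSC rate
    return cyc, nsec

# Counters from the timed-poller run above (1 us period):
print(poller_cost(2517641256, 280000))  # -> (8991, 3596), as reported in the log
```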
00:09:02.272 [2024-07-23 04:04:11.000455] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2558173 ]
00:09:02.531 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:02.531 EAL: Requested device 0000:3d:01.0 cannot be used
00:09:02.531 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:02.531 EAL: Requested device 0000:3d:01.1 cannot be used
00:09:02.531 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:02.531 EAL: Requested device 0000:3d:01.2 cannot be used
00:09:02.531 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:02.531 EAL: Requested device 0000:3d:01.3 cannot be used
00:09:02.531 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:02.531 EAL: Requested device 0000:3d:01.4 cannot be used
00:09:02.531 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:02.531 EAL: Requested device 0000:3d:01.5 cannot be used
00:09:02.531 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:02.531 EAL: Requested device 0000:3d:01.6 cannot be used
00:09:02.531 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:02.531 EAL: Requested device 0000:3d:01.7 cannot be used
00:09:02.531 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:02.531 EAL: Requested device 0000:3d:02.0 cannot be used
00:09:02.531 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:02.531 EAL: Requested device 0000:3d:02.1 cannot be used
00:09:02.531 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:02.531 EAL: Requested device 0000:3d:02.2 cannot be used
00:09:02.531 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:02.531 EAL: Requested device 0000:3d:02.3 cannot be used
00:09:02.531 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:02.531 EAL: Requested device 0000:3d:02.4 cannot be used
00:09:02.531 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:02.531 EAL: Requested device 0000:3d:02.5 cannot be used
00:09:02.531 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:02.531 EAL: Requested device 0000:3d:02.6 cannot be used
00:09:02.531 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:02.531 EAL: Requested device 0000:3d:02.7 cannot be used
00:09:02.531 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:02.531 EAL: Requested device 0000:3f:01.0 cannot be used
00:09:02.531 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:02.531 EAL: Requested device 0000:3f:01.1 cannot be used
00:09:02.531 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:02.531 EAL: Requested device 0000:3f:01.2 cannot be used
00:09:02.531 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:02.531 EAL: Requested device 0000:3f:01.3 cannot be used
00:09:02.531 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:02.531 EAL: Requested device 0000:3f:01.4 cannot be used
00:09:02.531 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:02.531 EAL: Requested device 0000:3f:01.5 cannot be used
00:09:02.531 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:02.531 EAL: Requested device 0000:3f:01.6 cannot be used
00:09:02.531 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:02.531 EAL: Requested device 0000:3f:01.7 cannot be used
00:09:02.531 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:02.531 EAL: Requested device 0000:3f:02.0 cannot be used
00:09:02.531 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:02.531 EAL: Requested device 0000:3f:02.1 cannot be used
00:09:02.531 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:02.531 EAL: Requested device 0000:3f:02.2 cannot be used
00:09:02.531 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:02.531 EAL: Requested device 0000:3f:02.3 cannot be used
00:09:02.531 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:02.531 EAL: Requested device 0000:3f:02.4 cannot be used
00:09:02.531 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:02.531 EAL: Requested device 0000:3f:02.5 cannot be used
00:09:02.532 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:02.532 EAL: Requested device 0000:3f:02.6 cannot be used
00:09:02.532 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:02.532 EAL: Requested device 0000:3f:02.7 cannot be used
00:09:02.532 [2024-07-23 04:04:11.226513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:02.791 [2024-07-23 04:04:11.511496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:02.791 Running 1000 pollers for 1 seconds with 0 microseconds period.
00:09:04.693 ======================================
00:09:04.693 busy:2504145200 (cyc)
00:09:04.693 total_run_count: 3656000
00:09:04.693 tsc_hz: 2500000000 (cyc)
00:09:04.693 ======================================
00:09:04.693 poller_cost: 684 (cyc), 273 (nsec)
00:09:04.693
00:09:04.693 real 0m2.110s
00:09:04.693 user 0m1.856s
00:09:04.693 sys 0m0.244s
00:09:04.693 04:04:13 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:04.693 04:04:13 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:09:04.693 ************************************
00:09:04.693 END TEST thread_poller_perf
00:09:04.693 ************************************
00:09:04.693 04:04:13 thread -- common/autotest_common.sh@1142 -- # return 0
00:09:04.693 04:04:13 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:09:04.693
00:09:04.693 real 0m4.559s
00:09:04.693 user 0m3.874s
00:09:04.693 sys 0m0.687s
00:09:04.693 04:04:13 thread -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:04.693 04:04:13 thread -- common/autotest_common.sh@10 -- # set +x
00:09:04.693 ************************************
00:09:04.693 END TEST thread
00:09:04.693 ************************************
00:09:04.693 04:04:13 -- common/autotest_common.sh@1142 -- # return 0
00:09:04.693 04:04:13 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel.sh
00:09:04.693 04:04:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:09:04.693 04:04:13 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:04.693 04:04:13 -- common/autotest_common.sh@10 -- # set +x
00:09:04.693 ************************************
00:09:04.693 START TEST accel
00:09:04.693 ************************************
00:09:04.693 04:04:13 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel.sh
00:09:04.693 * Looking for test storage...
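[Editor's note] The two thread_poller_perf runs above differ only in the poller period (-l 1 vs -l 0 microseconds), so comparing their per-poll costs shows roughly how much of the timed run's cost is timer bookkeeping rather than the poll itself. A rough comparison using only the counters printed in the two summary blocks (the busy-cycles/run-count division is inferred from those blocks):

```python
# Per-poll cost in TSC cycles: busy cycles divided by the number of polls executed.
timed = 2517641256 // 280000    # -l 1 run (1 us period, timed pollers)
busy  = 2504145200 // 3656000   # -l 0 run (0 us period, busy-loop pollers)
print(timed, busy, round(timed / busy, 1))  # timed pollers cost ~13x more per poll here
```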
00:09:04.693 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel
00:09:04.693 04:04:13 accel -- accel/accel.sh@81 -- # declare -A expected_opcs
00:09:04.693 04:04:13 accel -- accel/accel.sh@82 -- # get_expected_opcs
00:09:04.693 04:04:13 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:09:04.693 04:04:13 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2558516
00:09:04.693 04:04:13 accel -- accel/accel.sh@63 -- # waitforlisten 2558516
00:09:04.693 04:04:13 accel -- common/autotest_common.sh@829 -- # '[' -z 2558516 ']'
00:09:04.693 04:04:13 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:04.693 04:04:13 accel -- common/autotest_common.sh@834 -- # local max_retries=100
00:09:04.693 04:04:13 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63
00:09:04.693 04:04:13 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:04.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:04.693 04:04:13 accel -- accel/accel.sh@61 -- # build_accel_config
00:09:04.693 04:04:13 accel -- common/autotest_common.sh@838 -- # xtrace_disable
00:09:04.693 04:04:13 accel -- common/autotest_common.sh@10 -- # set +x
00:09:04.693 04:04:13 accel -- accel/accel.sh@31 -- # accel_json_cfg=()
00:09:04.693 04:04:13 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:09:04.693 04:04:13 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:09:04.693 04:04:13 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:09:04.693 04:04:13 accel -- accel/accel.sh@36 -- # [[ -n '' ]]
00:09:04.693 04:04:13 accel -- accel/accel.sh@40 -- # local IFS=,
00:09:04.693 04:04:13 accel -- accel/accel.sh@41 -- # jq -r .
00:09:04.693 [2024-07-23 04:04:13.410089] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
00:09:04.693 [2024-07-23 04:04:13.410217] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2558516 ] 00:09:04.952 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.952 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:04.952 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.952 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:04.952 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.952 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:04.952 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.952 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:04.952 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.952 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:04.952 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.952 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:04.952 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.952 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:04.952 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.952 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:04.952 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.952 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:04.952 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.952 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:04.952 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.952 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:04.952 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.952 EAL: Requested device 0000:3d:02.3 cannot be used 
00:09:04.952 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.952 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:04.952 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.952 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:04.952 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.952 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:04.952 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.952 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:04.952 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.952 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:04.952 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.952 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:04.952 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.952 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:04.952 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.952 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:04.952 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.952 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:04.952 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.952 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:04.952 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.952 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:04.952 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.952 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:04.952 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.952 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:04.952 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.952 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:04.952 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.952 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:04.952 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.952 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:04.952 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.952 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:04.952 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.952 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:04.952 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.952 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:04.952 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.952 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:04.952 [2024-07-23 04:04:13.660927] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.211 [2024-07-23 04:04:13.977336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.589 04:04:15 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:06.589 04:04:15 accel -- common/autotest_common.sh@862 -- # return 0 00:09:06.589 04:04:15 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:09:06.589 04:04:15 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:09:06.589 04:04:15 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:09:06.589 04:04:15 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:09:06.589 04:04:15 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:09:06.590 04:04:15 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:09:06.590 04:04:15 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.590 04:04:15 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:09:06.590 04:04:15 accel -- common/autotest_common.sh@10 -- # set +x 00:09:06.590 04:04:15 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.590 04:04:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:06.590 04:04:15 accel -- accel/accel.sh@72 -- # IFS== 00:09:06.590 04:04:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:06.590 04:04:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:06.590 04:04:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:06.590 04:04:15 accel -- accel/accel.sh@72 -- # IFS== 00:09:06.590 04:04:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:06.590 04:04:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:06.590 04:04:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:06.590 04:04:15 accel -- accel/accel.sh@72 -- # IFS== 00:09:06.590 04:04:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:06.590 04:04:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:06.590 04:04:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:06.590 04:04:15 accel -- accel/accel.sh@72 -- # IFS== 00:09:06.590 04:04:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:06.590 04:04:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:06.590 04:04:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:06.590 04:04:15 accel -- accel/accel.sh@72 -- # IFS== 00:09:06.590 04:04:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:06.590 04:04:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:06.590 04:04:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:06.590 04:04:15 accel -- accel/accel.sh@72 -- # IFS== 00:09:06.590 04:04:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:06.590 04:04:15 accel -- accel/accel.sh@73 -- # 
expected_opcs["$opc"]=software 00:09:06.590 04:04:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:06.590 04:04:15 accel -- accel/accel.sh@72 -- # IFS== 00:09:06.590 04:04:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:06.590 04:04:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:06.590 04:04:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:06.590 04:04:15 accel -- accel/accel.sh@72 -- # IFS== 00:09:06.590 04:04:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:06.590 04:04:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:06.590 04:04:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:06.590 04:04:15 accel -- accel/accel.sh@72 -- # IFS== 00:09:06.590 04:04:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:06.590 04:04:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:06.590 04:04:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:06.590 04:04:15 accel -- accel/accel.sh@72 -- # IFS== 00:09:06.590 04:04:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:06.590 04:04:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:06.590 04:04:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:06.590 04:04:15 accel -- accel/accel.sh@72 -- # IFS== 00:09:06.590 04:04:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:06.590 04:04:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:06.590 04:04:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:06.590 04:04:15 accel -- accel/accel.sh@72 -- # IFS== 00:09:06.590 04:04:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:06.590 04:04:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:06.590 04:04:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:06.590 04:04:15 accel -- accel/accel.sh@72 -- # 
IFS== 00:09:06.590 04:04:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:06.590 04:04:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:06.590 04:04:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:06.590 04:04:15 accel -- accel/accel.sh@72 -- # IFS== 00:09:06.590 04:04:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:06.590 04:04:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:06.590 04:04:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:06.590 04:04:15 accel -- accel/accel.sh@72 -- # IFS== 00:09:06.590 04:04:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:06.590 04:04:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:06.590 04:04:15 accel -- accel/accel.sh@75 -- # killprocess 2558516 00:09:06.590 04:04:15 accel -- common/autotest_common.sh@948 -- # '[' -z 2558516 ']' 00:09:06.590 04:04:15 accel -- common/autotest_common.sh@952 -- # kill -0 2558516 00:09:06.590 04:04:15 accel -- common/autotest_common.sh@953 -- # uname 00:09:06.590 04:04:15 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:06.590 04:04:15 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2558516 00:09:06.590 04:04:15 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:06.590 04:04:15 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:06.590 04:04:15 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2558516' 00:09:06.590 killing process with pid 2558516 00:09:06.590 04:04:15 accel -- common/autotest_common.sh@967 -- # kill 2558516 00:09:06.590 04:04:15 accel -- common/autotest_common.sh@972 -- # wait 2558516 00:09:09.878 04:04:18 accel -- accel/accel.sh@76 -- # trap - ERR 00:09:09.878 04:04:18 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:09:09.878 04:04:18 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 
00:09:09.878 04:04:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:09.878 04:04:18 accel -- common/autotest_common.sh@10 -- # set +x
00:09:09.878 04:04:18 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h
00:09:09.878 04:04:18 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h
00:09:09.878 04:04:18 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config
00:09:09.878 04:04:18 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=()
00:09:09.878 04:04:18 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:09:09.878 04:04:18 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:09:09.878 04:04:18 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:09:09.878 04:04:18 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]]
00:09:09.878 04:04:18 accel.accel_help -- accel/accel.sh@40 -- # local IFS=,
00:09:09.878 04:04:18 accel.accel_help -- accel/accel.sh@41 -- # jq -r .
00:09:10.135 04:04:18 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:10.135 04:04:18 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:09:10.135 04:04:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:10.135 04:04:18 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:09:10.135 04:04:18 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:09:10.135 04:04:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:10.135 04:04:18 accel -- common/autotest_common.sh@10 -- # set +x 00:09:10.135 ************************************ 00:09:10.135 START TEST accel_missing_filename 00:09:10.135 ************************************ 00:09:10.135 04:04:18 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:09:10.135 04:04:18 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:09:10.135 04:04:18 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:09:10.135 04:04:18 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:09:10.135 04:04:18 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:10.135 04:04:18 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:09:10.135 04:04:18 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:10.135 04:04:18 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:09:10.135 04:04:18 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:09:10.135 04:04:18 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:09:10.135 04:04:18 
accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:10.135 04:04:18 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:10.135 04:04:18 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:10.135 04:04:18 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:10.135 04:04:18 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:10.135 04:04:18 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:09:10.135 04:04:18 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:09:10.135 [2024-07-23 04:04:18.819271] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:09:10.135 [2024-07-23 04:04:18.819383] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2559584 ] 00:09:10.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.393 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:10.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.393 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:10.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.393 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:10.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.393 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:10.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.393 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:10.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.393 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:10.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.393 EAL: Requested device 
0000:3d:01.6 cannot be used 00:09:10.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.393 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:10.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.393 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:10.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.393 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:10.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.393 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:10.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.393 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:10.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.393 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:10.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.393 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:10.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.393 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:10.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.393 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:10.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.393 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:10.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.393 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:10.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.393 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:10.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.393 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:10.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.393 EAL: Requested device 0000:3f:01.4 cannot be 
used 00:09:10.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.393 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:10.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.393 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:10.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.393 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:10.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.393 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:10.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.393 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:10.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.393 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:10.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.393 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:10.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.393 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:10.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.393 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:10.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.393 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:10.394 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.394 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:10.394 [2024-07-23 04:04:19.033206] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.652 [2024-07-23 04:04:19.293222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.910 [2024-07-23 04:04:19.628190] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:11.845 [2024-07-23 04:04:20.395904] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:09:12.412 A filename 
is required. 00:09:12.412 04:04:20 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:09:12.412 04:04:20 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:12.412 04:04:20 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:09:12.412 04:04:20 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:09:12.412 04:04:20 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:09:12.412 04:04:20 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:12.412 00:09:12.412 real 0m2.202s 00:09:12.412 user 0m1.938s 00:09:12.412 sys 0m0.286s 00:09:12.412 04:04:20 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:12.412 04:04:20 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:09:12.412 ************************************ 00:09:12.412 END TEST accel_missing_filename 00:09:12.412 ************************************ 00:09:12.412 04:04:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:12.412 04:04:20 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:09:12.412 04:04:20 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:09:12.412 04:04:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:12.412 04:04:20 accel -- common/autotest_common.sh@10 -- # set +x 00:09:12.412 ************************************ 00:09:12.412 START TEST accel_compress_verify 00:09:12.412 ************************************ 00:09:12.412 04:04:21 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:09:12.412 04:04:21 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:09:12.412 
04:04:21 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:09:12.412 04:04:21 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:09:12.412 04:04:21 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:12.412 04:04:21 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:09:12.412 04:04:21 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:12.412 04:04:21 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:09:12.412 04:04:21 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:09:12.412 04:04:21 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:09:12.412 04:04:21 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:12.412 04:04:21 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:12.412 04:04:21 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:12.412 04:04:21 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:12.412 04:04:21 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:12.412 04:04:21 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:09:12.412 04:04:21 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:09:12.412 [2024-07-23 04:04:21.107789] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:09:12.412 [2024-07-23 04:04:21.107895] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2559895 ] 00:09:12.671 [2024-07-23 04:04:21.336451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.929 [2024-07-23 04:04:21.608654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.188 [2024-07-23 04:04:21.947835] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:14.123 [2024-07-23 04:04:22.716001] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:09:14.691 00:09:14.691 Compression does not support the verify option, aborting. 
00:09:14.691 04:04:23 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:09:14.691 04:04:23 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:14.691 04:04:23 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:09:14.691 04:04:23 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:09:14.691 04:04:23 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:09:14.691 04:04:23 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:14.691 00:09:14.691 real 0m2.258s 00:09:14.691 user 0m1.963s 00:09:14.691 sys 0m0.318s 00:09:14.691 04:04:23 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:14.691 04:04:23 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:09:14.691 ************************************ 00:09:14.691 END TEST accel_compress_verify 00:09:14.691 ************************************ 00:09:14.691 04:04:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:14.691 04:04:23 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:09:14.691 04:04:23 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:09:14.692 04:04:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:14.692 04:04:23 accel -- common/autotest_common.sh@10 -- # set +x 00:09:14.692 ************************************ 00:09:14.692 START TEST accel_wrong_workload 00:09:14.692 ************************************ 00:09:14.692 04:04:23 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:09:14.692 04:04:23 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:09:14.692 04:04:23 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:09:14.692 04:04:23 accel.accel_wrong_workload -- 
common/autotest_common.sh@636 -- # local arg=accel_perf 00:09:14.692 04:04:23 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:14.692 04:04:23 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:09:14.692 04:04:23 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:14.692 04:04:23 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:09:14.692 04:04:23 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:09:14.692 04:04:23 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:09:14.692 04:04:23 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:14.692 04:04:23 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:14.692 04:04:23 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:14.692 04:04:23 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:14.692 04:04:23 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:14.692 04:04:23 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:09:14.692 04:04:23 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:09:14.692 Unsupported workload type: foobar 00:09:14.692 [2024-07-23 04:04:23.440236] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:09:14.692 accel_perf options: 00:09:14.692 [-h help message] 00:09:14.692 [-q queue depth per core] 00:09:14.692 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:09:14.692 [-T number of threads per core 00:09:14.692 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:09:14.692 [-t time in seconds] 00:09:14.692 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:09:14.692 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:09:14.692 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:09:14.692 [-l for compress/decompress workloads, name of uncompressed input file 00:09:14.692 [-S for crc32c workload, use this seed value (default 0) 00:09:14.692 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:09:14.692 [-f for fill workload, use this BYTE value (default 255) 00:09:14.692 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:09:14.692 [-y verify result if this switch is on] 00:09:14.692 [-a tasks to allocate per core (default: same value as -q)] 00:09:14.692 Can be used to spread operations across a wider range of memory. 
00:09:14.692 04:04:23 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:09:14.692 04:04:23 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:14.692 04:04:23 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:14.692 04:04:23 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:14.692 00:09:14.692 real 0m0.087s 00:09:14.692 user 0m0.074s 00:09:14.692 sys 0m0.052s 00:09:14.692 04:04:23 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:14.692 04:04:23 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:09:14.692 ************************************ 00:09:14.692 END TEST accel_wrong_workload 00:09:14.692 ************************************ 00:09:14.950 04:04:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:14.950 04:04:23 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:09:14.950 04:04:23 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:09:14.950 04:04:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:14.950 04:04:23 accel -- common/autotest_common.sh@10 -- # set +x 00:09:14.950 ************************************ 00:09:14.950 START TEST accel_negative_buffers 00:09:14.950 ************************************ 00:09:14.950 04:04:23 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:09:14.950 04:04:23 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:09:14.950 04:04:23 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:09:14.950 04:04:23 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:09:14.950 04:04:23 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:09:14.950 04:04:23 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:09:14.950 04:04:23 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:14.950 04:04:23 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:09:14.950 04:04:23 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:09:14.950 04:04:23 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:09:14.950 04:04:23 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:14.950 04:04:23 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:14.950 04:04:23 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:14.950 04:04:23 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:14.950 04:04:23 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:14.950 04:04:23 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:09:14.950 04:04:23 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:09:14.950 -x option must be non-negative. 00:09:14.950 [2024-07-23 04:04:23.615394] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 
00:09:14.950 04:04:23 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:09:14.950 04:04:23 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:14.950 04:04:23 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:14.950 04:04:23 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:14.950 00:09:14.950 real 0m0.087s 00:09:14.950 user 0m0.076s 00:09:14.950 sys 0m0.051s 00:09:14.950 04:04:23 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:14.950 04:04:23 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:09:14.950 ************************************ 00:09:14.950 END TEST accel_negative_buffers 00:09:14.950 ************************************ 00:09:14.950 04:04:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:14.950 04:04:23 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:09:14.950 04:04:23 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:09:14.950 04:04:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:14.951 04:04:23 accel -- common/autotest_common.sh@10 -- # set +x 00:09:14.951 ************************************ 00:09:14.951 START TEST accel_crc32c 00:09:14.951 ************************************ 00:09:14.951 04:04:23 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:09:14.951 04:04:23 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:09:14.951 04:04:23 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:09:14.951 04:04:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:14.951 04:04:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:14.951 04:04:23 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:09:14.951 04:04:23 accel.accel_crc32c -- 
accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:09:14.951 04:04:23 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:09:14.951 04:04:23 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:14.951 04:04:23 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:14.951 04:04:23 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:14.951 04:04:23 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:14.951 04:04:23 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:14.951 04:04:23 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:09:14.951 04:04:23 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:09:15.210 [2024-07-23 04:04:23.774675] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:09:15.210 [2024-07-23 04:04:23.774777] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2560461 ] 00:09:15.469 [2024-07-23 04:04:24.000275] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.728 [2024-07-23 04:04:24.267414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.986 04:04:24 accel.accel_crc32c -- accel/accel.sh@20 -- # 
val= 00:09:15.986 04:04:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:15.986 04:04:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:15.986 04:04:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:15.986 04:04:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:15.986 04:04:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:15.986 04:04:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:15.986 04:04:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:15.986 04:04:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:09:15.986 04:04:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:15.986 04:04:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:15.986 04:04:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:15.986 04:04:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:15.986 04:04:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:15.986 04:04:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:15.987 04:04:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:15.987 04:04:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:15.987 04:04:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:15.987 04:04:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:15.987 04:04:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:15.987 04:04:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:09:15.987 04:04:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:15.987 04:04:24 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:09:15.987 04:04:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:15.987 04:04:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:15.987 04:04:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:09:15.987 04:04:24 
accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:15.987 04:04:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:15.987 04:04:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:15.987 04:04:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:15.987 04:04:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:15.987 04:04:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:15.987 04:04:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:15.987 04:04:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:15.987 04:04:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:15.987 04:04:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:15.987 04:04:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:15.987 04:04:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:09:15.987 04:04:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:15.987 04:04:24 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:09:15.987 04:04:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:15.987 04:04:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:15.987 04:04:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:09:15.987 04:04:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:15.987 04:04:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:15.987 04:04:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:15.987 04:04:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:09:15.987 04:04:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:15.987 04:04:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:15.987 04:04:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:15.987 04:04:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:09:15.987 04:04:24 accel.accel_crc32c -- 
accel/accel.sh@21 -- # case "$var" in 00:09:15.987 04:04:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:15.987 04:04:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:15.987 04:04:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:09:15.987 04:04:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:15.987 04:04:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:15.987 04:04:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:15.987 04:04:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:09:15.987 04:04:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:15.987 04:04:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:15.987 04:04:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:15.987 04:04:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:15.987 04:04:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:15.987 04:04:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:15.987 04:04:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:15.987 04:04:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:15.987 04:04:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:15.987 04:04:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:15.987 04:04:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:18.521 04:04:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:18.521 04:04:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:18.521 04:04:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:18.521 04:04:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:18.521 04:04:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:18.521 04:04:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:18.521 04:04:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:18.521 
04:04:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:18.521 04:04:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:18.521 04:04:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:18.521 04:04:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:18.521 04:04:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:18.521 04:04:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:18.521 04:04:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:18.521 04:04:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:18.521 04:04:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:18.521 04:04:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:18.521 04:04:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:18.521 04:04:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:18.521 04:04:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:18.521 04:04:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:18.521 04:04:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:18.521 04:04:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:18.521 04:04:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:18.521 04:04:26 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:18.521 04:04:26 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:09:18.521 04:04:26 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:18.521 00:09:18.521 real 0m3.134s 00:09:18.521 user 0m0.007s 00:09:18.521 sys 0m0.005s 00:09:18.521 04:04:26 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:18.521 04:04:26 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:09:18.521 ************************************ 00:09:18.521 END TEST accel_crc32c 00:09:18.522 ************************************ 
00:09:18.522 04:04:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:18.522 04:04:26 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:09:18.522 04:04:26 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:09:18.522 04:04:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:18.522 04:04:26 accel -- common/autotest_common.sh@10 -- # set +x 00:09:18.522 ************************************ 00:09:18.522 START TEST accel_crc32c_C2 00:09:18.522 ************************************ 00:09:18.522 04:04:26 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:09:18.522 04:04:26 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:09:18.522 04:04:26 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:09:18.522 04:04:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:18.522 04:04:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:18.522 04:04:26 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:09:18.522 04:04:26 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:09:18.522 04:04:26 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:09:18.522 04:04:26 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:18.522 04:04:26 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:18.522 04:04:26 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:18.522 04:04:26 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:18.522 04:04:26 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:18.522 04:04:26 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:09:18.522 04:04:26 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 
00:09:18.522 [2024-07-23 04:04:27.000335] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:09:18.522 [2024-07-23 04:04:27.000443] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2561007 ] 00:09:18.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.522 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:18.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.522 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:18.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.522 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:18.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.522 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:18.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.522 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:18.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.522 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:18.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.522 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:18.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.522 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:18.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.522 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:18.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.522 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:18.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.522 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:18.522 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.522 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:18.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.522 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:18.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.522 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:18.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.522 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:18.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.522 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:18.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.522 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:18.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.522 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:18.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.522 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:18.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.522 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:18.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.522 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:18.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.522 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:18.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.522 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:18.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.522 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:18.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.522 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:18.522 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.522 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:18.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.522 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:18.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.522 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:18.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.522 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:18.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.522 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:18.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.522 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:18.522 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.522 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:18.522 [2024-07-23 04:04:27.225458] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.781 [2024-07-23 04:04:27.504466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.349 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:19.349 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.349 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:19.349 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:19.349 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:19.349 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.349 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:19.349 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:19.349 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:09:19.349 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 
00:09:19.349 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:19.349 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:19.349 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:19.349 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.349 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:19.349 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:19.349 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:19.349 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.349 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:19.349 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:19.349 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:09:19.349 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.349 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:09:19.349 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:19.349 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:19.349 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:09:19.349 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.349 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:19.349 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:19.349 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:19.349 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.349 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:19.349 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:19.349 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:19.350 04:04:27 accel.accel_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:09:19.350 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:19.350 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:19.350 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:09:19.350 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.350 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:09:19.350 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:19.350 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:19.350 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:09:19.350 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.350 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:19.350 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:19.350 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:09:19.350 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.350 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:19.350 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:19.350 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:09:19.350 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.350 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:19.350 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:19.350 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:09:19.350 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.350 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:19.350 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:19.350 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # 
val=Yes 00:09:19.350 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.350 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:19.350 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:19.350 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:19.350 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.350 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:19.350 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:19.350 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:19.350 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.350 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:19.350 04:04:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:21.885 04:04:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:21.885 04:04:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:21.885 04:04:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:21.885 04:04:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:21.885 04:04:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:21.885 04:04:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:21.885 04:04:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:21.885 04:04:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:21.885 04:04:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:21.885 04:04:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:21.885 04:04:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:21.885 04:04:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:21.885 04:04:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:21.885 04:04:30 accel.accel_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:09:21.885 04:04:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:21.885 04:04:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:21.885 04:04:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:21.885 04:04:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:21.885 04:04:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:21.885 04:04:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:21.885 04:04:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:21.885 04:04:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:21.885 04:04:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:21.885 04:04:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:21.885 04:04:30 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:21.885 04:04:30 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:09:21.885 04:04:30 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:21.885 00:09:21.885 real 0m3.155s 00:09:21.885 user 0m0.011s 00:09:21.885 sys 0m0.001s 00:09:21.885 04:04:30 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:21.885 04:04:30 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:09:21.885 ************************************ 00:09:21.885 END TEST accel_crc32c_C2 00:09:21.885 ************************************ 00:09:21.885 04:04:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:21.885 04:04:30 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:09:21.885 04:04:30 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:09:21.885 04:04:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:21.885 04:04:30 accel -- common/autotest_common.sh@10 -- # set +x 00:09:21.885 
************************************ 00:09:21.885 START TEST accel_copy 00:09:21.885 ************************************ 00:09:21.885 04:04:30 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:09:21.885 04:04:30 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:09:21.885 04:04:30 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:09:21.885 04:04:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:21.885 04:04:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:21.885 04:04:30 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:09:21.885 04:04:30 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:09:21.885 04:04:30 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:09:21.885 04:04:30 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:21.885 04:04:30 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:21.885 04:04:30 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:21.885 04:04:30 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:21.885 04:04:30 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:21.885 04:04:30 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:09:21.885 04:04:30 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:09:21.885 [2024-07-23 04:04:30.236208] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:09:21.885 [2024-07-23 04:04:30.236311] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2561558 ] 00:09:21.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.885 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:21.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.885 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:21.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.885 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:21.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.885 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:21.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.885 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:21.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.885 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:21.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.885 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:21.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.885 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:21.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.885 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:21.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.885 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:21.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.885 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:21.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.885 EAL: Requested device 0000:3d:02.3 cannot be used 
00:09:21.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.885 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:21.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.885 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:21.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.885 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:21.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.885 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:21.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.885 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:21.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.885 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:21.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.885 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:21.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.885 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:21.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.885 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:21.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.885 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:21.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.885 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:21.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.885 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:21.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.885 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:21.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.886 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:21.886 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.886 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:21.886 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.886 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:21.886 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.886 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:21.886 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.886 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:21.886 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.886 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:21.886 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.886 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:21.886 [2024-07-23 04:04:30.461715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.177 [2024-07-23 04:04:30.747669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:22.443 04:04:31 accel.accel_copy -- 
accel/accel.sh@20 -- # val= 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:22.443 04:04:31 accel.accel_copy 
-- accel/accel.sh@20 -- # val=32 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:22.443 04:04:31 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:09:22.443 04:04:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:24.978 04:04:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:24.978 04:04:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:24.978 04:04:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:24.978 04:04:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:24.978 04:04:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:24.978 04:04:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:24.978 04:04:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:24.978 04:04:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:24.978 04:04:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:24.978 04:04:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:24.978 04:04:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:24.978 04:04:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:24.978 04:04:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:24.978 04:04:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:24.978 04:04:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:24.978 04:04:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:24.978 04:04:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:24.978 04:04:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:24.978 04:04:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:24.978 04:04:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:24.978 04:04:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:24.978 04:04:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:24.978 04:04:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:24.978 04:04:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:24.978 04:04:33 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n 
software ]] 00:09:24.978 04:04:33 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:09:24.978 04:04:33 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:24.978 00:09:24.978 real 0m3.208s 00:09:24.978 user 0m0.010s 00:09:24.978 sys 0m0.002s 00:09:24.978 04:04:33 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:24.978 04:04:33 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:09:24.978 ************************************ 00:09:24.978 END TEST accel_copy 00:09:24.978 ************************************ 00:09:24.978 04:04:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:24.978 04:04:33 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:24.978 04:04:33 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:09:24.978 04:04:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:24.978 04:04:33 accel -- common/autotest_common.sh@10 -- # set +x 00:09:24.978 ************************************ 00:09:24.978 START TEST accel_fill 00:09:24.978 ************************************ 00:09:24.978 04:04:33 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:24.978 04:04:33 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:09:24.978 04:04:33 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:09:24.978 04:04:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:24.978 04:04:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:24.978 04:04:33 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:24.978 04:04:33 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:24.978 04:04:33 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:09:24.978 
04:04:33 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:24.978 04:04:33 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:24.978 04:04:33 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:24.978 04:04:33 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:24.978 04:04:33 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:24.978 04:04:33 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:09:24.978 04:04:33 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:09:24.978 [2024-07-23 04:04:33.511804] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:09:24.978 [2024-07-23 04:04:33.511905] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2562131 ] 00:09:24.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:24.978 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:24.978 [the qat_pci_device_allocate()/EAL warning pair above repeats for each remaining QAT VF address, 0000:3d:01.1 through 0000:3d:02.7 and 0000:3f:01.0 through 0000:3f:02.7] 00:09:24.979 [2024-07-23 04:04:33.735638] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.238 [2024-07-23 04:04:34.014932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.805 04:04:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:25.805 04:04:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:25.805 04:04:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:25.805 04:04:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:25.805
04:04:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:25.805 04:04:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:25.805 04:04:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:25.805 04:04:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:25.805 04:04:34 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:09:25.805 04:04:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:25.805 04:04:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:25.805 04:04:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:25.805 04:04:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:25.805 04:04:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:25.805 04:04:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:25.805 04:04:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:25.805 04:04:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:25.805 04:04:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:25.805 04:04:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:25.805 04:04:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:25.805 04:04:34 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:09:25.805 04:04:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:25.805 04:04:34 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:09:25.805 04:04:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:25.805 04:04:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:25.805 04:04:34 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:09:25.805 04:04:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:25.805 04:04:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:25.805 04:04:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:25.805 04:04:34 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:25.805 04:04:34 
accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:25.805 04:04:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:25.805 04:04:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:25.805 04:04:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:25.805 04:04:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:25.805 04:04:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:25.805 04:04:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:25.805 04:04:34 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:09:25.805 04:04:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:25.806 04:04:34 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:09:25.806 04:04:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:25.806 04:04:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:25.806 04:04:34 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:09:25.806 04:04:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:25.806 04:04:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:25.806 04:04:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:25.806 04:04:34 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:09:25.806 04:04:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:25.806 04:04:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:25.806 04:04:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:25.806 04:04:34 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:09:25.806 04:04:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:25.806 04:04:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:25.806 04:04:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:25.806 04:04:34 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:09:25.806 04:04:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:25.806 
04:04:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:25.806 04:04:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:25.806 04:04:34 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:09:25.806 04:04:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:25.806 04:04:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:25.806 04:04:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:25.806 04:04:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:25.806 04:04:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:25.806 04:04:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:25.806 04:04:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:25.806 04:04:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:25.806 04:04:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:25.806 04:04:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:25.806 04:04:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:28.339 04:04:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:28.339 04:04:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:28.340 04:04:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:28.340 04:04:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:28.340 04:04:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:28.340 04:04:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:28.340 04:04:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:28.340 04:04:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:28.340 04:04:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:28.340 04:04:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:28.340 04:04:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:28.340 04:04:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:28.340 04:04:36 accel.accel_fill -- 
accel/accel.sh@20 -- # val= 00:09:28.340 04:04:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:28.340 04:04:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:28.340 04:04:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:28.340 04:04:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:28.340 04:04:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:28.340 04:04:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:28.340 04:04:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:28.340 04:04:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:28.340 04:04:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:28.340 04:04:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:28.340 04:04:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:28.340 04:04:36 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:28.340 04:04:36 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:09:28.340 04:04:36 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:28.340 00:09:28.340 real 0m3.281s 00:09:28.340 user 0m0.010s 00:09:28.340 sys 0m0.001s 00:09:28.340 04:04:36 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:28.340 04:04:36 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:09:28.340 ************************************ 00:09:28.340 END TEST accel_fill 00:09:28.340 ************************************ 00:09:28.340 04:04:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:28.340 04:04:36 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:09:28.340 04:04:36 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:09:28.340 04:04:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:28.340 04:04:36 accel -- common/autotest_common.sh@10 -- # set +x 00:09:28.340 
************************************ 00:09:28.340 START TEST accel_copy_crc32c 00:09:28.340 ************************************ 00:09:28.340 04:04:36 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:09:28.340 04:04:36 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:09:28.340 04:04:36 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:09:28.340 04:04:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:28.340 04:04:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:28.340 04:04:36 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:09:28.340 04:04:36 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:09:28.340 04:04:36 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:09:28.340 04:04:36 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:28.340 04:04:36 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:28.340 04:04:36 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:28.340 04:04:36 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:28.340 04:04:36 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:28.340 04:04:36 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:09:28.340 04:04:36 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:09:28.340 [2024-07-23 04:04:36.854209] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:09:28.340 [2024-07-23 04:04:36.854316] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2562793 ] 00:09:28.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:28.340 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:28.340 [the qat_pci_device_allocate()/EAL warning pair above repeats for each remaining QAT VF address, 0000:3d:01.1 through 0000:3d:02.7 and 0000:3f:01.0 through 0000:3f:02.7] 00:09:28.340 [2024-07-23 04:04:37.081564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.599 [2024-07-23 04:04:37.363076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:29.167 04:04:37 accel.accel_copy_crc32c --
accel/accel.sh@19 -- # read -r var val 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:29.167 
04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 
00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:29.167 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:29.168 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:29.168 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:09:29.168 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:29.168 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:29.168 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:29.168 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:29.168 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:29.168 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:29.168 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:29.168 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:29.168 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:29.168 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:29.168 04:04:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:31.702 04:04:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:31.702 04:04:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:31.702 04:04:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:31.702 04:04:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:31.702 04:04:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:31.702 04:04:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:31.702 04:04:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:31.702 04:04:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:31.702 04:04:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:31.702 
04:04:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:31.702 04:04:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:31.702 04:04:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:31.702 04:04:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:31.702 04:04:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:31.702 04:04:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:31.702 04:04:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:31.702 04:04:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:31.702 04:04:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:31.702 04:04:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:31.702 04:04:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:31.702 04:04:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:31.702 04:04:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:31.702 04:04:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:31.702 04:04:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:31.702 04:04:39 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:31.702 04:04:39 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:09:31.702 04:04:39 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:31.702 00:09:31.702 real 0m3.185s 00:09:31.702 user 0m0.012s 00:09:31.702 sys 0m0.000s 00:09:31.702 04:04:39 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:31.702 04:04:39 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:09:31.702 ************************************ 00:09:31.702 END TEST accel_copy_crc32c 00:09:31.702 ************************************ 00:09:31.702 04:04:40 accel -- 
common/autotest_common.sh@1142 -- # return 0 00:09:31.702 04:04:40 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:09:31.702 04:04:40 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:09:31.702 04:04:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:31.702 04:04:40 accel -- common/autotest_common.sh@10 -- # set +x 00:09:31.702 ************************************ 00:09:31.702 START TEST accel_copy_crc32c_C2 00:09:31.702 ************************************ 00:09:31.702 04:04:40 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:09:31.702 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:09:31.702 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:09:31.702 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:31.702 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:31.702 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:09:31.702 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:09:31.702 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:09:31.702 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:31.702 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:31.702 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:31.702 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:31.702 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:31.702 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:09:31.702 04:04:40 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:09:31.702 [2024-07-23 04:04:40.118967] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:09:31.702 [2024-07-23 04:04:40.119068] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2563411 ] 00:09:31.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:31.702 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:31.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:31.702 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:31.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:31.702 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:31.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:31.702 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:31.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:31.702 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:31.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:31.702 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:31.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:31.702 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:31.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:31.702 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:31.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:31.702 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:31.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:31.702 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:31.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:31.702 EAL: Requested 
device 0000:3d:02.2 cannot be used 00:09:31.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:31.702 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:31.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:31.702 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:31.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:31.702 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:31.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:31.702 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:31.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:31.702 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:31.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:31.702 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:31.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:31.702 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:31.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:31.702 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:31.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:31.702 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:31.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:31.702 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:31.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:31.702 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:31.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:31.703 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:31.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:31.703 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:31.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:31.703 EAL: Requested device 0000:3f:02.0 
cannot be used 00:09:31.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:31.703 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:31.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:31.703 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:31.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:31.703 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:31.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:31.703 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:31.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:31.703 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:31.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:31.703 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:31.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:31.703 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:31.703 [2024-07-23 04:04:40.341231] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.961 [2024-07-23 04:04:40.599790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:09:32.220 
04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 
00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # 
val=1 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:32.220 04:04:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:34.748 04:04:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:34.748 04:04:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:34.748 04:04:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:34.748 04:04:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:09:34.748 04:04:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:34.748 04:04:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:34.748 04:04:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:34.748 04:04:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:34.748 04:04:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:34.748 04:04:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:34.749 04:04:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:34.749 04:04:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:34.749 04:04:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:34.749 04:04:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:34.749 04:04:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:34.749 04:04:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:34.749 04:04:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:34.749 04:04:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:34.749 04:04:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:34.749 04:04:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:34.749 04:04:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:34.749 04:04:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:34.749 04:04:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:34.749 04:04:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:34.749 04:04:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:34.749 04:04:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:09:34.749 04:04:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:09:34.749 00:09:34.749 real 0m3.174s 00:09:34.749 user 0m0.009s 00:09:34.749 sys 0m0.003s 00:09:34.749 04:04:43 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:34.749 04:04:43 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:09:34.749 ************************************ 00:09:34.749 END TEST accel_copy_crc32c_C2 00:09:34.749 ************************************ 00:09:34.749 04:04:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:34.749 04:04:43 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:09:34.749 04:04:43 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:09:34.749 04:04:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:34.749 04:04:43 accel -- common/autotest_common.sh@10 -- # set +x 00:09:34.749 ************************************ 00:09:34.749 START TEST accel_dualcast 00:09:34.749 ************************************ 00:09:34.749 04:04:43 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:09:34.749 04:04:43 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:09:34.749 04:04:43 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:09:34.749 04:04:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:34.749 04:04:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:34.749 04:04:43 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:09:34.749 04:04:43 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:09:34.749 04:04:43 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:09:34.749 04:04:43 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:34.749 04:04:43 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 
00:09:34.749 04:04:43 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:34.749 04:04:43 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:34.749 04:04:43 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:34.749 04:04:43 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:09:34.749 04:04:43 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:09:34.749 [2024-07-23 04:04:43.392888] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:09:34.749 [2024-07-23 04:04:43.393120] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2563993 ]
00:09:35.007 [2024-07-23 04:04:43.764010] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.265 [2024-07-23 04:04:44.041793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.833 04:04:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:35.833 04:04:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:35.833 04:04:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:35.833 04:04:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:35.833 04:04:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:35.833 04:04:44 accel.accel_dualcast -- 
accel/accel.sh@21 -- # case "$var" in 00:09:35.833 04:04:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:35.833 04:04:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:35.833 04:04:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:09:35.833 04:04:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:35.833 04:04:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:35.833 04:04:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:35.833 04:04:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:35.833 04:04:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:35.833 04:04:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:35.833 04:04:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:35.833 04:04:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:35.833 04:04:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:35.833 04:04:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:35.833 04:04:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:35.833 04:04:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:09:35.833 04:04:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:35.833 04:04:44 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:09:35.833 04:04:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:35.833 04:04:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:35.833 04:04:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:35.833 04:04:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:35.833 04:04:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:35.833 04:04:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:35.833 04:04:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:35.833 04:04:44 
accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:35.833 04:04:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:35.833 04:04:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:35.833 04:04:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:09:35.833 04:04:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:35.833 04:04:44 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:09:35.833 04:04:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:35.833 04:04:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:35.833 04:04:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:09:35.833 04:04:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:35.833 04:04:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:35.833 04:04:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:35.833 04:04:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:09:35.833 04:04:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:35.833 04:04:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:35.833 04:04:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:35.833 04:04:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:09:35.833 04:04:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:35.833 04:04:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:35.833 04:04:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:35.834 04:04:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:09:35.834 04:04:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:35.834 04:04:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:35.834 04:04:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:35.834 04:04:44 accel.accel_dualcast -- accel/accel.sh@20 -- # 
val=Yes 00:09:35.834 04:04:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:35.834 04:04:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:35.834 04:04:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:35.834 04:04:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:35.834 04:04:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:35.834 04:04:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:35.834 04:04:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:35.834 04:04:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:35.834 04:04:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:35.834 04:04:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:35.834 04:04:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:38.369 04:04:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:38.369 04:04:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:38.369 04:04:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:38.369 04:04:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:38.369 04:04:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:38.369 04:04:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:38.369 04:04:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:38.369 04:04:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:38.369 04:04:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:38.369 04:04:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:38.369 04:04:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:38.369 04:04:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:38.369 04:04:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:38.369 04:04:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 
00:09:38.369 04:04:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:38.369 04:04:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:38.369 04:04:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:38.369 04:04:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:38.369 04:04:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:38.369 04:04:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:38.369 04:04:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:38.369 04:04:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:38.369 04:04:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:38.369 04:04:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:38.369 04:04:46 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:38.369 04:04:46 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:09:38.369 04:04:46 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:38.369 00:09:38.369 real 0m3.378s 00:09:38.369 user 0m0.011s 00:09:38.369 sys 0m0.000s 00:09:38.369 04:04:46 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:38.369 04:04:46 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:09:38.369 ************************************ 00:09:38.369 END TEST accel_dualcast 00:09:38.369 ************************************ 00:09:38.369 04:04:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:38.369 04:04:46 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:09:38.369 04:04:46 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:09:38.369 04:04:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:38.369 04:04:46 accel -- common/autotest_common.sh@10 -- # set +x 00:09:38.369 ************************************ 00:09:38.369 START TEST accel_compare 
00:09:38.369 ************************************ 00:09:38.369 04:04:46 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:09:38.369 04:04:46 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:09:38.369 04:04:46 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:09:38.369 04:04:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:38.369 04:04:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:38.369 04:04:46 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:09:38.369 04:04:46 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:09:38.369 04:04:46 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:09:38.369 04:04:46 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:38.369 04:04:46 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:38.369 04:04:46 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:38.369 04:04:46 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:38.369 04:04:46 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:38.369 04:04:46 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:09:38.369 04:04:46 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:09:38.369 [2024-07-23 04:04:46.783317] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:09:38.369 [2024-07-23 04:04:46.783415] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2564545 ] 00:09:38.369 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.369 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:38.369 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.369 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:38.369 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.369 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:38.369 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.369 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:38.369 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.369 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:38.369 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.369 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:38.369 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.369 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:38.370 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.370 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:38.370 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.370 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:38.370 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.370 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:38.370 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.370 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:38.370 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.370 EAL: Requested device 0000:3d:02.3 cannot be used 
00:09:38.370 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.370 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:38.370 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.370 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:38.370 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.370 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:38.370 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.370 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:38.370 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.370 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:38.370 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.370 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:38.370 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.370 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:38.370 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.370 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:38.370 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.370 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:38.370 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.370 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:38.370 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.370 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:38.370 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.370 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:38.370 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.370 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:38.370 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.370 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:38.370 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.370 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:38.370 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.370 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:38.370 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.370 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:38.370 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.370 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:38.370 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.370 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:38.370 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.370 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:38.370 [2024-07-23 04:04:47.006369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.629 [2024-07-23 04:04:47.269904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.887 04:04:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:38.887 04:04:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:38.887 04:04:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 
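The long runs of `qat_pci_device_allocate(): Reached maximum number of QAT devices` / `EAL: Requested device ... cannot be used` pairs above indicate the DPDK QAT driver hit its maximum device count, so every further requested PCI function on buses `3d` and `3f` is skipped. When triaging such a log, the runs can be summarized per PCI bus with a short awk filter; a sketch using inline sample lines in place of a saved log file (the piped-in lines are illustrative, not a real capture):

```shell
# Count "cannot be used" PCI functions per bus from EAL log lines.
# $4 is the PCI address (domain:bus:dev.fn); bucket by the bus field.
printf '%s\n' \
  'EAL: Requested device 0000:3d:01.0 cannot be used' \
  'EAL: Requested device 0000:3d:01.1 cannot be used' \
  'EAL: Requested device 0000:3f:02.7 cannot be used' |
awk '/cannot be used/ { split($4, a, ":"); n[a[2]]++ }
     END { for (b in n) print b, n[b] }'
```

Against a real run, replace the `printf` with the saved log file as awk's input.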
00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@19 -- # 
IFS=: 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 
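The repeated `IFS=:` / `read -r var val` / `case "$var" in` trace lines above show accel.sh consuming colon-separated key/value pairs (opcode, module, queue depth, run time, and so on) for the test under way. A minimal sketch of that parsing-loop shape, with invented input pairs (the exact wire format accel.sh reads is assumed, not reproduced):

```shell
# Sketch of the var/val parsing loop visible in the xtrace output:
# split each "key:value" line on ':' and dispatch on the key.
printf '%s\n' 'opc:compare' 'module:software' |
while IFS=: read -r var val; do
  case "$var" in
    opc)    echo "opcode=$val" ;;
    module) echo "module=$val" ;;
    *)      echo "skipped $var" ;;
  esac
done
```

Note the loop runs in a pipeline subshell here, so any variables it sets do not survive the loop; accel.sh avoids that by reading from a redirection instead.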
00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:38.888 04:04:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:41.421 04:04:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:41.421 04:04:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:41.421 04:04:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:41.421 04:04:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:41.421 04:04:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:41.421 04:04:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:41.421 04:04:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:41.421 04:04:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:41.421 04:04:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:41.421 04:04:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:41.421 04:04:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:41.421 04:04:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:41.421 04:04:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:41.421 04:04:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:41.421 04:04:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:41.421 04:04:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:41.421 04:04:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:41.421 04:04:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:41.421 04:04:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:41.421 04:04:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:41.421 04:04:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:41.421 04:04:49 accel.accel_compare 
-- accel/accel.sh@21 -- # case "$var" in 00:09:41.421 04:04:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:41.421 04:04:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:41.422 04:04:49 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:41.422 04:04:49 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:09:41.422 04:04:49 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:41.422 00:09:41.422 real 0m3.122s 00:09:41.422 user 0m0.009s 00:09:41.422 sys 0m0.001s 00:09:41.422 04:04:49 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:41.422 04:04:49 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:09:41.422 ************************************ 00:09:41.422 END TEST accel_compare 00:09:41.422 ************************************ 00:09:41.422 04:04:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:41.422 04:04:49 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:09:41.422 04:04:49 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:09:41.422 04:04:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:41.422 04:04:49 accel -- common/autotest_common.sh@10 -- # set +x 00:09:41.422 ************************************ 00:09:41.422 START TEST accel_xor 00:09:41.422 ************************************ 00:09:41.422 04:04:49 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:09:41.422 04:04:49 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:09:41.422 04:04:49 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:09:41.422 04:04:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:41.422 04:04:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:41.422 04:04:49 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:09:41.422 04:04:49 accel.accel_xor -- accel/accel.sh@12 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:09:41.422 04:04:49 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:09:41.422 04:04:49 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:41.422 04:04:49 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:41.422 04:04:49 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:41.422 04:04:49 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:41.422 04:04:49 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:41.422 04:04:49 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:09:41.422 04:04:49 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:09:41.422 [2024-07-23 04:04:49.980478] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:09:41.422 [2024-07-23 04:04:49.980580] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2565092 ] 00:09:41.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.422 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:41.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.422 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:41.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.422 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:41.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.422 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:41.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.422 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:41.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.422 EAL: Requested device 0000:3d:01.5 cannot be used 
00:09:41.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.422 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:41.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.422 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:41.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.422 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:41.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.422 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:41.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.422 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:41.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.422 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:41.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.422 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:41.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.422 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:41.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.422 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:41.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.422 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:41.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.422 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:41.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.422 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:41.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.422 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:41.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.422 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:41.422 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.422 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:41.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.422 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:41.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.422 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:41.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.422 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:41.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.422 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:41.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.422 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:41.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.422 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:41.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.422 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:41.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.422 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:41.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.422 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:41.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.422 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:41.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.422 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:41.681 [2024-07-23 04:04:50.205501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.940 [2024-07-23 04:04:50.489696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:42.199 04:04:50 accel.accel_xor -- 
accel/accel.sh@21 -- # case "$var" in 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:42.199 
04:04:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:42.199 04:04:50 
accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:42.199 04:04:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:44.767 04:04:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:44.767 04:04:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:44.767 04:04:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:44.767 04:04:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:44.767 04:04:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:44.767 04:04:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:44.767 04:04:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:44.767 04:04:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:44.767 04:04:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:44.767 04:04:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:44.767 04:04:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 
00:09:44.767 04:04:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:44.767 04:04:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:44.767 04:04:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:44.767 04:04:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:44.767 04:04:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:44.767 04:04:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:44.767 04:04:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:44.767 04:04:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:44.767 04:04:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:44.767 04:04:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:44.767 04:04:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:44.767 04:04:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:44.767 04:04:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:44.767 04:04:53 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:44.767 04:04:53 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:09:44.767 04:04:53 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:44.767 00:09:44.767 real 0m3.288s 00:09:44.767 user 0m0.010s 00:09:44.767 sys 0m0.002s 00:09:44.767 04:04:53 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:44.767 04:04:53 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:09:44.767 ************************************ 00:09:44.767 END TEST accel_xor 00:09:44.767 ************************************ 00:09:44.767 04:04:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:44.767 04:04:53 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:09:44.767 04:04:53 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:09:44.767 04:04:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:44.767 04:04:53 
accel -- common/autotest_common.sh@10 -- # set +x 00:09:44.767 ************************************ 00:09:44.767 START TEST accel_xor 00:09:44.767 ************************************ 00:09:44.767 04:04:53 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:09:44.767 04:04:53 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:09:44.767 04:04:53 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:09:44.767 04:04:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:44.767 04:04:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:44.767 04:04:53 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:09:44.767 04:04:53 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:09:44.767 04:04:53 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:09:44.767 04:04:53 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:44.767 04:04:53 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:44.767 04:04:53 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:44.767 04:04:53 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:44.767 04:04:53 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:44.767 04:04:53 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:09:44.767 04:04:53 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:09:44.767 [2024-07-23 04:04:53.335904] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
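This second accel_xor run adds `-x 3`, asking accel_perf to XOR three source buffers into one destination rather than the default two. Per byte the workload computes `dest = src1 ^ src2 ^ src3`; an illustrative arithmetic check with invented byte values (not taken from the test run):

```shell
# Three illustrative source bytes XORed together, as with "-w xor -x 3".
s1=0x0f; s2=0x33; s3=0x01
printf 'dest byte: 0x%02x\n' $(( s1 ^ s2 ^ s3 ))
```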
00:09:44.767 [2024-07-23 04:04:53.336005] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2565646 ] 00:09:44.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.767 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:44.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.767 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:44.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.767 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:44.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.767 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:44.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.767 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:44.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.767 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:44.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.767 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:44.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.767 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:44.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.767 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:44.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.767 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:44.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.767 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:44.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.767 EAL: Requested device 0000:3d:02.3 cannot be used 
00:09:44.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.767 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:44.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.767 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:44.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.767 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:44.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.767 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:44.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.767 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:44.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.767 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:44.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.767 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:44.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.767 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:44.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.767 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:44.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.767 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:44.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.767 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:44.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.767 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:44.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.767 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:44.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.767 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:44.767 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.767 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:44.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.767 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:44.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.767 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:44.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.767 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:44.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.767 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:44.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.767 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:45.026 [2024-07-23 04:04:53.556308] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.284 [2024-07-23 04:04:53.836983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.542 04:04:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:45.542 04:04:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:45.542 04:04:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:45.542 04:04:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:45.542 04:04:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:45.542 04:04:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:45.542 04:04:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:45.542 04:04:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:45.542 04:04:54 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:09:45.542 04:04:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:45.542 04:04:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:45.542 04:04:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:45.542 04:04:54 accel.accel_xor -- accel/accel.sh@20 
-- # val= 00:09:45.542 04:04:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:45.542 04:04:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:45.542 04:04:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:45.542 04:04:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:45.542 04:04:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:45.542 04:04:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:45.542 04:04:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:45.542 04:04:54 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:09:45.542 04:04:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:45.542 04:04:54 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:09:45.542 04:04:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:45.542 04:04:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:45.542 04:04:54 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:09:45.542 04:04:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:45.542 04:04:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:45.542 04:04:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:45.542 04:04:54 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:45.542 04:04:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:45.542 04:04:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:45.542 04:04:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:45.542 04:04:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:45.542 04:04:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:45.542 04:04:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:45.542 04:04:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:45.542 04:04:54 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:09:45.542 04:04:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:45.542 
04:04:54 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:09:45.542 04:04:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:45.542 04:04:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:45.542 04:04:54 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:09:45.542 04:04:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:45.542 04:04:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:45.542 04:04:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:45.542 04:04:54 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:09:45.542 04:04:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:45.542 04:04:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:45.542 04:04:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:45.543 04:04:54 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:09:45.543 04:04:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:45.543 04:04:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:45.543 04:04:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:45.543 04:04:54 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:09:45.543 04:04:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:45.543 04:04:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:45.543 04:04:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:45.543 04:04:54 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:09:45.543 04:04:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:45.543 04:04:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:45.543 04:04:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:45.543 04:04:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:45.543 04:04:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:45.543 04:04:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:45.543 04:04:54 accel.accel_xor -- 
accel/accel.sh@19 -- # read -r var val 00:09:45.543 04:04:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:45.543 04:04:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:45.543 04:04:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:45.543 04:04:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:48.074 04:04:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:48.074 04:04:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:48.074 04:04:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:48.074 04:04:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:48.074 04:04:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:48.074 04:04:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:48.074 04:04:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:48.074 04:04:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:48.074 04:04:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:48.074 04:04:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:48.074 04:04:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:48.074 04:04:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:48.074 04:04:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:48.075 04:04:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:48.075 04:04:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:48.075 04:04:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:48.075 04:04:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:48.075 04:04:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:48.075 04:04:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:48.075 04:04:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:48.075 04:04:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:48.075 04:04:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:48.075 04:04:56 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:48.075 04:04:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:48.075 04:04:56 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:48.075 04:04:56 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:09:48.075 04:04:56 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:48.075 00:09:48.075 real 0m3.288s 00:09:48.075 user 0m0.010s 00:09:48.075 sys 0m0.001s 00:09:48.075 04:04:56 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:48.075 04:04:56 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:09:48.075 ************************************ 00:09:48.075 END TEST accel_xor 00:09:48.075 ************************************ 00:09:48.075 04:04:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:48.075 04:04:56 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:09:48.075 04:04:56 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:09:48.075 04:04:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:48.075 04:04:56 accel -- common/autotest_common.sh@10 -- # set +x 00:09:48.075 ************************************ 00:09:48.075 START TEST accel_dif_verify 00:09:48.075 ************************************ 00:09:48.075 04:04:56 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:09:48.075 04:04:56 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:09:48.075 04:04:56 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:09:48.075 04:04:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:48.075 04:04:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:48.075 04:04:56 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:09:48.075 04:04:56 accel.accel_dif_verify -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:09:48.075 04:04:56 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:09:48.075 04:04:56 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:48.075 04:04:56 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:48.075 04:04:56 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:48.075 04:04:56 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:48.075 04:04:56 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:48.075 04:04:56 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:09:48.075 04:04:56 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:09:48.075 [2024-07-23 04:04:56.691343] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:09:48.075 [2024-07-23 04:04:56.691442] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2566198 ] 00:09:48.334 [2024-07-23 04:04:56.914430] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.593 [2024-07-23 04:04:57.199082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.852 04:04:57 accel.accel_dif_verify --
accel/accel.sh@20 -- # val= 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
read -r var val 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:09:48.852 04:04:57 accel.accel_dif_verify 
-- accel/accel.sh@19 -- # IFS=: 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:48.852 04:04:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:51.384 04:04:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:51.384 04:04:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:51.384 04:04:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:51.384 04:04:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:51.384 04:04:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:51.384 04:04:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:51.384 04:04:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:51.384 04:04:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:51.384 04:04:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:51.384 04:04:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:51.384 04:04:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:51.384 04:04:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:51.384 04:04:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:51.384 04:04:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:51.384 04:04:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:51.384 04:04:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:51.384 04:04:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:51.384 04:04:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:51.384 04:04:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:51.384 04:04:59 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:51.384 04:04:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:51.384 04:04:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:51.384 04:04:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:51.384 04:04:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:51.384 04:04:59 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:51.384 04:04:59 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:09:51.384 04:04:59 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:51.384 00:09:51.384 real 0m3.210s 00:09:51.384 user 0m0.012s 00:09:51.384 sys 0m0.000s 00:09:51.384 04:04:59 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:51.384 04:04:59 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:09:51.384 ************************************ 00:09:51.384 END TEST accel_dif_verify 00:09:51.384 ************************************ 00:09:51.384 04:04:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:51.384 04:04:59 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:09:51.384 04:04:59 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:09:51.384 04:04:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:51.384 04:04:59 accel -- common/autotest_common.sh@10 -- # set +x 00:09:51.384 ************************************ 00:09:51.384 START TEST accel_dif_generate 00:09:51.384 ************************************ 00:09:51.384 04:04:59 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:09:51.384 04:04:59 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:09:51.384 04:04:59 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:09:51.384 04:04:59 
accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:51.384 04:04:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:51.384 04:04:59 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:09:51.384 04:04:59 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:09:51.384 04:04:59 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:09:51.384 04:04:59 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:51.384 04:04:59 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:51.384 04:04:59 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:51.384 04:04:59 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:51.384 04:04:59 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:51.384 04:04:59 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:09:51.384 04:04:59 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:09:51.384 [2024-07-23 04:04:59.968614] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:09:51.384 [2024-07-23 04:04:59.968714] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2566759 ] 00:09:51.643 [2024-07-23 04:05:00.191942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.901 [2024-07-23 04:05:00.470949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:52.160 04:05:00
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:09:52.160 04:05:00 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@19 
-- # read -r var val 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:52.160 04:05:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:52.161 04:05:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:54.691 04:05:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:54.691 04:05:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:54.691 04:05:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:54.691 04:05:03 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:09:54.691 04:05:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:54.691 04:05:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:54.691 04:05:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:54.691 04:05:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:54.691 04:05:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:54.691 04:05:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:54.691 04:05:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:54.691 04:05:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:54.691 04:05:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:54.691 04:05:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:54.691 04:05:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:54.691 04:05:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:54.691 04:05:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:54.691 04:05:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:54.691 04:05:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:54.691 04:05:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:54.691 04:05:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:54.691 04:05:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:54.691 04:05:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:54.691 04:05:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:54.691 04:05:03 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:54.691 04:05:03 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:09:54.691 04:05:03 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e 
]] 00:09:54.691 00:09:54.691 real 0m3.194s 00:09:54.691 user 0m0.011s 00:09:54.691 sys 0m0.001s 00:09:54.691 04:05:03 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:54.692 04:05:03 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:09:54.692 ************************************ 00:09:54.692 END TEST accel_dif_generate 00:09:54.692 ************************************ 00:09:54.692 04:05:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:54.692 04:05:03 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:09:54.692 04:05:03 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:09:54.692 04:05:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:54.692 04:05:03 accel -- common/autotest_common.sh@10 -- # set +x 00:09:54.692 ************************************ 00:09:54.692 START TEST accel_dif_generate_copy 00:09:54.692 ************************************ 00:09:54.692 04:05:03 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:09:54.692 04:05:03 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:09:54.692 04:05:03 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:09:54.692 04:05:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:54.692 04:05:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:54.692 04:05:03 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:09:54.692 04:05:03 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:09:54.692 04:05:03 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:09:54.692 04:05:03 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # 
accel_json_cfg=() 00:09:54.692 04:05:03 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:54.692 04:05:03 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:54.692 04:05:03 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:54.692 04:05:03 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:54.692 04:05:03 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:09:54.692 04:05:03 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:09:54.692 [2024-07-23 04:05:03.237459] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:09:54.692 [2024-07-23 04:05:03.237564] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2567420 ] 00:09:54.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.692 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:54.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.692 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:54.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.692 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:54.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.692 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:54.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.692 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:54.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.692 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:54.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.692 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:54.692 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.692 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:54.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.692 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:54.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.692 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:54.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.692 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:54.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.692 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:54.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.692 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:54.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.692 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:54.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.692 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:54.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.692 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:54.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.692 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:54.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.692 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:54.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.692 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:54.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.692 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:54.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.692 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:54.692 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.692 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:54.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.692 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:54.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.692 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:54.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.692 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:54.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.692 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:54.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.692 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:54.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.692 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:54.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.692 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:54.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.692 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:54.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.692 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:54.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.692 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:54.692 [2024-07-23 04:05:03.464559] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.259 [2024-07-23 04:05:03.741705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- 
# IFS=: 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 
00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 
00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:55.518 04:05:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:55.518 04:05:04 
accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:58.054 04:05:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:58.054 04:05:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:58.054 04:05:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:58.054 04:05:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:58.054 04:05:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:58.054 04:05:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:58.054 04:05:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:58.054 04:05:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:58.054 04:05:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:58.054 04:05:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:58.054 04:05:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:58.054 04:05:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:58.054 04:05:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:58.054 04:05:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:58.054 04:05:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:58.054 04:05:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:58.054 04:05:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:58.054 04:05:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:58.054 04:05:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:58.054 04:05:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:58.054 04:05:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:58.054 04:05:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 
-- # case "$var" in 00:09:58.054 04:05:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:58.054 04:05:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:58.054 04:05:06 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:58.054 04:05:06 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:09:58.054 04:05:06 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:58.054 00:09:58.054 real 0m3.250s 00:09:58.054 user 0m0.011s 00:09:58.054 sys 0m0.000s 00:09:58.054 04:05:06 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:58.054 04:05:06 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:09:58.054 ************************************ 00:09:58.054 END TEST accel_dif_generate_copy 00:09:58.054 ************************************ 00:09:58.054 04:05:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:58.054 04:05:06 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:09:58.054 04:05:06 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:09:58.054 04:05:06 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:09:58.054 04:05:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:58.054 04:05:06 accel -- common/autotest_common.sh@10 -- # set +x 00:09:58.054 ************************************ 00:09:58.054 START TEST accel_comp 00:09:58.054 ************************************ 00:09:58.054 04:05:06 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:09:58.054 04:05:06 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:09:58.054 04:05:06 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:09:58.054 04:05:06 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:58.054 04:05:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:58.054 04:05:06 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:09:58.054 04:05:06 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:09:58.054 04:05:06 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:09:58.054 04:05:06 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:58.054 04:05:06 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:58.054 04:05:06 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:58.054 04:05:06 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:58.054 04:05:06 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:58.054 04:05:06 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:09:58.054 04:05:06 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:09:58.054 [2024-07-23 04:05:06.560303] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:09:58.054 [2024-07-23 04:05:06.560424] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2568080 ] 00:09:58.054 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.054 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:58.054 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.054 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:58.054 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.054 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:58.054 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.054 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:58.054 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.054 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:58.054 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.054 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:58.054 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.054 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:58.054 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.054 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:58.054 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.054 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:58.055 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.055 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:58.055 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.055 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:58.055 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.055 EAL: Requested device 0000:3d:02.3 cannot be used 
00:09:58.055 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.055 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:58.055 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.055 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:58.055 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.055 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:58.055 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.055 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:58.055 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.055 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:58.055 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.055 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:58.055 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.055 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:58.055 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.055 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:58.055 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.055 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:58.055 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.055 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:58.055 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.055 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:58.055 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.055 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:58.055 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.055 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:58.055 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.055 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:58.055 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.055 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:58.055 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.055 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:58.055 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.055 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:58.055 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.055 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:58.055 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.055 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:58.055 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:58.055 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:58.055 [2024-07-23 04:05:06.786023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.321 [2024-07-23 04:05:07.059480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:58.890 04:05:07 accel.accel_comp -- 
accel/accel.sh@20 -- # val=0x1 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:09:58.890 04:05:07 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@21 -- 
# case "$var" in 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:58.890 04:05:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:01.425 04:05:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:01.425 04:05:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:01.425 04:05:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:01.425 04:05:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:01.425 04:05:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:01.425 04:05:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:01.425 04:05:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:01.425 04:05:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:01.425 04:05:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:01.425 04:05:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:01.425 04:05:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:01.425 04:05:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:01.425 04:05:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:01.425 04:05:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:01.425 04:05:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:01.425 04:05:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:01.425 
04:05:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:01.425 04:05:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:01.425 04:05:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:01.425 04:05:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:01.425 04:05:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:01.425 04:05:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:01.425 04:05:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:01.425 04:05:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:01.425 04:05:09 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:01.425 04:05:09 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:10:01.425 04:05:09 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:01.425 00:10:01.425 real 0m3.263s 00:10:01.425 user 0m0.009s 00:10:01.425 sys 0m0.003s 00:10:01.425 04:05:09 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:01.425 04:05:09 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:10:01.425 ************************************ 00:10:01.425 END TEST accel_comp 00:10:01.425 ************************************ 00:10:01.425 04:05:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:01.425 04:05:09 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:10:01.425 04:05:09 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:10:01.425 04:05:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:01.425 04:05:09 accel -- common/autotest_common.sh@10 -- # set +x 00:10:01.425 ************************************ 00:10:01.425 START TEST accel_decomp 00:10:01.425 ************************************ 00:10:01.425 04:05:09 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:10:01.425 04:05:09 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:10:01.425 04:05:09 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:10:01.425 04:05:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:01.425 04:05:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:01.425 04:05:09 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:10:01.425 04:05:09 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:10:01.425 04:05:09 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:10:01.425 04:05:09 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:01.425 04:05:09 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:01.425 04:05:09 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:01.425 04:05:09 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:01.425 04:05:09 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:01.425 04:05:09 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:10:01.425 04:05:09 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:10:01.425 [2024-07-23 04:05:09.938048] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
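The xtrace above shows accel.sh stepping through `case "$var" in` after `IFS=: read -r var val` for each configuration line, capturing `accel_opc` and `accel_module` along the way. The loop below is a minimal sketch of that parsing pattern; the `opc`/`module` key names and the here-doc input are hypothetical stand-ins, not the actual format accel_test emits:

```shell
# Parse "key:value" lines the way the accel.sh trace shows:
# split each line on ':' into var/val, then dispatch on var.
accel_opc=
accel_module=
while IFS=: read -r var val; do
  case "$var" in
    opc) accel_opc=$val ;;        # e.g. compress / decompress
    module) accel_module=$val ;;  # e.g. software
    *) : ;;                       # ignore anything else
  esac
done <<'EOF'
opc:decompress
module:software
EOF
echo "$accel_opc $accel_module"   # prints: decompress software
```

Because the input is fed via a here-document rather than a pipe, the loop runs in the current shell and the captured variables remain visible afterward, which is what lets accel.sh test `[[ -n software ]]` once the loop ends.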
00:10:01.425 [2024-07-23 04:05:09.938290] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2568626 ] 00:10:01.684 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:01.684 EAL: Requested device 0000:3d:01.0 cannot be used [identical qat_pci_device_allocate()/EAL messages repeated for devices 0000:3d:01.1 through 0000:3f:02.7] 00:10:01.685 [2024-07-23 04:05:10.311114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.944 [2024-07-23 04:05:10.595174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= [case "$var" / IFS=: / read -r var val xtrace repeated for three empty values] 00:10:02.203 04:05:10 
accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:02.203 04:05:10 accel.accel_decomp -- 
accel/accel.sh@20 -- # val=software 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 
00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:02.203 04:05:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:04.738 04:05:13 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:04.738 04:05:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:04.738 04:05:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:04.738 04:05:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:04.738 04:05:13 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:04.738 04:05:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:04.738 04:05:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:04.738 04:05:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:04.738 04:05:13 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:04.738 04:05:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:04.738 04:05:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:04.738 04:05:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:04.738 04:05:13 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:04.738 04:05:13 accel.accel_decomp -- accel/accel.sh@21 -- 
# case "$var" in 00:10:04.738 04:05:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:04.738 04:05:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:04.738 04:05:13 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:04.738 04:05:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:04.738 04:05:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:04.738 04:05:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:04.738 04:05:13 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:04.738 04:05:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:04.738 04:05:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:04.738 04:05:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:04.738 04:05:13 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:04.738 04:05:13 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:04.738 04:05:13 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:04.738 00:10:04.738 real 0m3.400s 00:10:04.738 user 0m0.011s 00:10:04.738 sys 0m0.001s 00:10:04.738 04:05:13 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:04.738 04:05:13 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:10:04.738 ************************************ 00:10:04.738 END TEST accel_decomp 00:10:04.738 ************************************ 00:10:04.738 04:05:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:04.738 04:05:13 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:04.738 04:05:13 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:10:04.738 04:05:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:04.738 04:05:13 accel -- common/autotest_common.sh@10 -- # set +x 00:10:04.738 
************************************ 00:10:04.738 START TEST accel_decomp_full 00:10:04.738 ************************************ 00:10:04.738 04:05:13 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:04.738 04:05:13 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:10:04.738 04:05:13 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:10:04.738 04:05:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:04.738 04:05:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:04.738 04:05:13 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:04.738 04:05:13 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:04.738 04:05:13 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:10:04.738 04:05:13 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:04.738 04:05:13 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:04.738 04:05:13 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:04.738 04:05:13 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:04.739 04:05:13 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:04.739 04:05:13 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:10:04.739 04:05:13 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:10:04.739 [2024-07-23 04:05:13.360013] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
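Each test in this log is launched as `run_test NAME accel_test -t 1 -w … `, which emits the `START TEST` / `END TEST` banners around the timed body. The wrapper below is a hypothetical reconstruction of only the behavior visible here (banners plus exit-status propagation), not SPDK's actual `run_test` from autotest_common.sh:

```shell
# Hypothetical run_test-style wrapper: print START/END banners around a
# command, as seen in the log above, and propagate its exit status.
run_test_sketch() {
  name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  "$@"
  rc=$?
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
  return $rc
}

run_test_sketch demo true && echo "demo passed"
```

Returning the wrapped command's status lets a caller chain tests and abort the suite on the first failure, matching the `return 0` lines that follow each `END TEST` banner in the trace.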
00:10:04.739 [2024-07-23 04:05:13.360116] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2569181 ] 00:10:04.739 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:04.739 EAL: Requested device 0000:3d:01.0 cannot be used [identical qat_pci_device_allocate()/EAL messages repeated for devices 0000:3d:01.1 through 0000:3f:02.7] 00:10:05.053 [2024-07-23 04:05:13.583172] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.316 [2024-07-23 04:05:13.866428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= [case "$var" / IFS=: / read -r var val xtrace repeated for three empty values] 04:05:14 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # read -r var val 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:05.575 04:05:14 
accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:05.575 04:05:14 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:05.575 04:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:05.576 04:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:05.576 04:05:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:05.576 04:05:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:05.576 04:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:05.576 04:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:08.110 04:05:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:08.110 04:05:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:08.110 04:05:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:08.110 04:05:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:08.110 04:05:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:08.110 04:05:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:08.110 04:05:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:08.110 04:05:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:08.110 04:05:16 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:08.110 04:05:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:08.110 04:05:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:08.110 04:05:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:08.110 04:05:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:08.110 04:05:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:08.110 04:05:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:08.110 04:05:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:08.110 04:05:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:08.110 04:05:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:08.110 04:05:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:08.110 04:05:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:08.110 04:05:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:08.110 04:05:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:08.110 04:05:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:08.110 04:05:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:08.110 04:05:16 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:08.110 04:05:16 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:08.110 04:05:16 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:08.110 00:10:08.110 real 0m3.179s 00:10:08.110 user 0m0.011s 00:10:08.110 sys 0m0.001s 00:10:08.110 04:05:16 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:08.110 04:05:16 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:10:08.110 ************************************ 00:10:08.110 END TEST accel_decomp_full 00:10:08.110 
************************************
00:10:08.110 04:05:16 accel -- common/autotest_common.sh@1142 -- # return 0
00:10:08.110 04:05:16 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:10:08.110 04:05:16 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']'
00:10:08.110 04:05:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:10:08.110 04:05:16 accel -- common/autotest_common.sh@10 -- # set +x
00:10:08.110 ************************************
00:10:08.110 START TEST accel_decomp_mcore
00:10:08.110 ************************************
00:10:08.110 04:05:16 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:10:08.110 04:05:16 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc
00:10:08.110 04:05:16 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module
00:10:08.110 04:05:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:10:08.110 04:05:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:10:08.110 04:05:16 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:10:08.110 04:05:16 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:10:08.110 04:05:16 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config
00:10:08.110 04:05:16 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=()
00:10:08.110 04:05:16 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:10:08.110 04:05:16 accel.accel_decomp_mcore -- accel/accel.sh@33 -- 
# [[ 0 -gt 0 ]]
00:10:08.110 04:05:16 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:08.110 04:05:16 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]]
00:10:08.110 04:05:16 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=,
00:10:08.110 04:05:16 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r .
[2024-07-23 04:05:16.590724] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
[2024-07-23 04:05:16.590825] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2569729 ]
00:10:08.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:08.110 EAL: Requested device 0000:3d:01.0 cannot be used
00:10:08.110 EAL: Requested device 0000:3d:01.1 cannot be used
00:10:08.110 EAL: Requested device 0000:3d:01.2 cannot be used
00:10:08.110 EAL: Requested device 0000:3d:01.3 cannot be used
00:10:08.110 EAL: Requested device 0000:3d:01.4 cannot be used
00:10:08.110 EAL: Requested device 0000:3d:01.5 cannot be used
00:10:08.110 EAL: Requested device 0000:3d:01.6 cannot be used
00:10:08.110 EAL: Requested device 0000:3d:01.7 cannot be used
00:10:08.110 EAL: Requested device 0000:3d:02.0 cannot be used
00:10:08.110 EAL: Requested device 0000:3d:02.1 cannot be used
00:10:08.110 EAL: Requested device 0000:3d:02.2 cannot be used
00:10:08.110 EAL: Requested device 0000:3d:02.3 cannot be used
00:10:08.110 EAL: Requested device 0000:3d:02.4 cannot be used
00:10:08.110 EAL: Requested device 0000:3d:02.5 cannot be used
00:10:08.110 EAL: Requested device 0000:3d:02.6 cannot be used
00:10:08.110 EAL: Requested device 0000:3d:02.7 cannot be used
00:10:08.110 EAL: Requested device 0000:3f:01.0 cannot be used
00:10:08.110 EAL: Requested device 0000:3f:01.1 cannot be used
00:10:08.110 EAL: Requested device 0000:3f:01.2 cannot be used
00:10:08.110 EAL: Requested device 0000:3f:01.3 cannot be used
00:10:08.110 EAL: Requested device 0000:3f:01.4 cannot be used
00:10:08.111 EAL: Requested device 0000:3f:01.5 cannot be used
00:10:08.111 EAL: Requested device 0000:3f:01.6 cannot be used
00:10:08.111 EAL: Requested device 0000:3f:01.7 cannot be used
00:10:08.111 EAL: Requested device 0000:3f:02.0 cannot be used
00:10:08.111 EAL: Requested device 0000:3f:02.1 cannot be used
00:10:08.111 EAL: Requested device 0000:3f:02.2 cannot be used
00:10:08.111 EAL: Requested device 0000:3f:02.3 cannot be used
00:10:08.111 EAL: Requested device 0000:3f:02.4 cannot be used
00:10:08.111 EAL: Requested device 0000:3f:02.5 cannot be used
00:10:08.111 EAL: Requested device 0000:3f:02.6 cannot be used
00:10:08.111 EAL: Requested device 0000:3f:02.7 cannot be used
00:10:08.111 [2024-07-23 04:05:16.816666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:08.370 [2024-07-23 04:05:17.117166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:10:08.370 [2024-07-23 04:05:17.117229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:10:08.370 [2024-07-23 04:05:17.117295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:10:08.370 [2024-07-23 04:05:17.117304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:10:08.938 04:05:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=
00:10:08.938 04:05:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:10:08.938 04:05:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:10:08.938 04:05:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:10:08.939 04:05:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf
00:10:08.939 04:05:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress
00:10:08.939 04:05:17 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress
00:10:08.939 04:05:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes'
00:10:08.939 04:05:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software
00:10:08.939 04:05:17 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software
00:10:08.939 04:05:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib
00:10:08.939 04:05:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32
00:10:08.939 04:05:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32
00:10:08.939 04:05:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1
00:10:08.939 04:05:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds'
00:10:08.939 04:05:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes
00:10:11.474 04:05:19 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:10:11.474 04:05:19 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:10:11.474 04:05:19 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:10:11.474 real 0m3.241s
00:10:11.474 user 0m0.026s
00:10:11.474 sys 0m0.005s
00:10:11.474 04:05:19 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable
00:10:11.474 04:05:19 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x
00:10:11.474 ************************************
00:10:11.474 END TEST accel_decomp_mcore
00:10:11.474 ************************************
00:10:11.474 04:05:19 accel -- common/autotest_common.sh@1142 -- # return 0
00:10:11.474 04:05:19 
accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:10:11.474 04:05:19 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:10:11.474 04:05:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:10:11.474 04:05:19 accel -- common/autotest_common.sh@10 -- # set +x
00:10:11.474 ************************************
00:10:11.474 START TEST accel_decomp_full_mcore
00:10:11.474 ************************************
00:10:11.474 04:05:19 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:10:11.474 04:05:19 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc
00:10:11.474 04:05:19 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module
00:10:11.474 04:05:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:10:11.474 04:05:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:10:11.474 04:05:19 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:10:11.474 04:05:19 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:10:11.474 04:05:19 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config
00:10:11.474 04:05:19 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=()
00:10:11.474 04:05:19 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:10:11.474 04:05:19 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:11.474 04:05:19 
accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:11.474 04:05:19 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]]
00:10:11.474 04:05:19 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=,
00:10:11.474 04:05:19 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r .
[2024-07-23 04:05:19.919277] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
[2024-07-23 04:05:19.919383] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2570299 ]
00:10:11.474 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:11.474 EAL: Requested device 0000:3d:01.0 cannot be used
00:10:11.474 EAL: Requested device 0000:3d:01.1 cannot be used
00:10:11.474 EAL: Requested device 0000:3d:01.2 cannot be used
00:10:11.474 EAL: Requested device 0000:3d:01.3 cannot be used
00:10:11.474 EAL: Requested device 0000:3d:01.4 cannot be used
00:10:11.474 EAL: Requested device 0000:3d:01.5 cannot be used
00:10:11.474 EAL: Requested device 0000:3d:01.6 cannot be used
00:10:11.474 EAL: Requested device 0000:3d:01.7 cannot be used
00:10:11.474 EAL: Requested device 0000:3d:02.0 cannot be used
00:10:11.474 EAL: Requested device 0000:3d:02.1 cannot be used
00:10:11.474 EAL: Requested device 0000:3d:02.2 cannot be used
00:10:11.474 EAL: Requested device 0000:3d:02.3 cannot be used
00:10:11.474 EAL: Requested device 0000:3d:02.4 cannot be used
00:10:11.474 EAL: Requested device 0000:3d:02.5 cannot be used
00:10:11.474 EAL: Requested device 0000:3d:02.6 cannot be used
00:10:11.474 EAL: Requested device 0000:3d:02.7 cannot be used
00:10:11.474 EAL: Requested device 0000:3f:01.0 cannot be used
00:10:11.474 EAL: Requested device 0000:3f:01.1 cannot be used
00:10:11.474 EAL: Requested device 0000:3f:01.2 cannot be used
00:10:11.474 EAL: Requested device 0000:3f:01.3 cannot be used
00:10:11.474 EAL: Requested device 0000:3f:01.4 cannot be used
00:10:11.474 EAL: Requested device 0000:3f:01.5 cannot be used
00:10:11.474 EAL: Requested device 0000:3f:01.6 cannot be used
00:10:11.474 EAL: Requested device 0000:3f:01.7 cannot be used
00:10:11.474 EAL: Requested device 0000:3f:02.0 cannot be used
00:10:11.474 EAL: Requested device 0000:3f:02.1 cannot be used
00:10:11.474 EAL: Requested device 0000:3f:02.2 cannot be used
00:10:11.474 EAL: Requested device 0000:3f:02.3 cannot be used
00:10:11.474 EAL: Requested device 0000:3f:02.4 cannot be used
00:10:11.474 EAL: Requested device 0000:3f:02.5 cannot be used
00:10:11.474 EAL: Requested device 0000:3f:02.6 cannot be used
00:10:11.474 EAL: Requested device 0000:3f:02.7 cannot be used
00:10:11.474 [2024-07-23 04:05:20.148365] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:11.733 [2024-07-23 04:05:20.454105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:10:11.733 [2024-07-23 04:05:20.454188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:10:11.733 [2024-07-23 04:05:20.454241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:10:11.733 [2024-07-23 04:05:20.454248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:10:12.301 04:05:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:10:12.301 04:05:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:10:12.301 04:05:20 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:10:12.301 04:05:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:10:12.301 04:05:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf
00:10:12.301 04:05:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress
00:10:12.301 04:05:20 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress
00:10:12.301 04:05:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes'
00:10:12.301 04:05:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software
00:10:12.301 04:05:20 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software
00:10:12.301 04:05:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib
00:10:12.302 04:05:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:10:12.302 04:05:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r
var val 00:10:12.302 04:05:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:10:12.302 04:05:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:12.302 04:05:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:12.302 04:05:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:12.302 04:05:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:10:12.302 04:05:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:12.302 04:05:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:12.302 04:05:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:12.302 04:05:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:10:12.302 04:05:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:12.302 04:05:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:12.302 04:05:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:12.302 04:05:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:10:12.302 04:05:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:12.302 04:05:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:12.302 04:05:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:12.302 04:05:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:10:12.302 04:05:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:12.302 04:05:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:12.302 04:05:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:12.302 04:05:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:12.302 04:05:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:12.302 04:05:20 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:12.302 04:05:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:12.302 04:05:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:12.302 04:05:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:12.302 04:05:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:12.302 04:05:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:14.834 04:05:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:14.834 04:05:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:14.834 04:05:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:14.834 04:05:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:14.834 04:05:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:14.834 04:05:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:14.834 04:05:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:14.834 04:05:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:14.834 04:05:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:14.834 04:05:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:14.834 04:05:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:14.834 04:05:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:14.834 04:05:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:14.834 04:05:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:14.834 04:05:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:14.834 04:05:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:14.834 04:05:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val= 00:10:14.834 04:05:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:14.834 04:05:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:14.834 04:05:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:14.834 04:05:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:14.834 04:05:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:14.834 04:05:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:14.834 04:05:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:14.834 04:05:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:14.834 04:05:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:14.834 04:05:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:14.834 04:05:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:14.834 04:05:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:14.834 04:05:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:14.834 04:05:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:14.834 04:05:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:14.834 04:05:23 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:14.834 04:05:23 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:14.834 04:05:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:14.834 04:05:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:14.834 04:05:23 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:14.834 04:05:23 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:14.834 04:05:23 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:10:14.834 00:10:14.834 real 0m3.338s 00:10:14.834 user 0m9.533s 00:10:14.834 sys 0m0.329s 00:10:14.834 04:05:23 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:14.834 04:05:23 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:10:14.834 ************************************ 00:10:14.834 END TEST accel_decomp_full_mcore 00:10:14.834 ************************************ 00:10:14.834 04:05:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:14.834 04:05:23 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:14.834 04:05:23 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:10:14.834 04:05:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:14.834 04:05:23 accel -- common/autotest_common.sh@10 -- # set +x 00:10:14.834 ************************************ 00:10:14.834 START TEST accel_decomp_mthread 00:10:14.834 ************************************ 00:10:14.834 04:05:23 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:14.834 04:05:23 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:10:14.834 04:05:23 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:10:14.834 04:05:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:14.834 04:05:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:14.834 04:05:23 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:14.834 04:05:23 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 
-t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2
00:10:14.834 04:05:23 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config
00:10:14.834 04:05:23 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=()
00:10:14.834 04:05:23 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:10:14.834 04:05:23 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:14.834 04:05:23 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:14.834 04:05:23 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]]
00:10:14.834 04:05:23 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=,
00:10:14.834 04:05:23 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r .
00:10:14.834 [2024-07-23 04:05:23.339293] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
00:10:14.834 [2024-07-23 04:05:23.339403] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2571018 ]
00:10:14.834 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:14.834 EAL: Requested device 0000:3d:01.0 cannot be used
00:10:14.834 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:14.834 EAL: Requested device 0000:3d:01.1 cannot be used
00:10:14.834 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:14.834 EAL: Requested device 0000:3d:01.2 cannot be used
00:10:14.834 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:14.834 EAL: Requested device 0000:3d:01.3 cannot be used
00:10:14.834 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:14.834 EAL: Requested device 0000:3d:01.4 cannot be used
00:10:14.834 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:14.834 EAL: Requested device 0000:3d:01.5 cannot be used
00:10:14.834 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:14.834 EAL: Requested device 0000:3d:01.6 cannot be used
00:10:14.834 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:14.834 EAL: Requested device 0000:3d:01.7 cannot be used
00:10:14.835 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:14.835 EAL: Requested device 0000:3d:02.0 cannot be used
00:10:14.835 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:14.835 EAL: Requested device 0000:3d:02.1 cannot be used
00:10:14.835 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:14.835 EAL: Requested device 0000:3d:02.2 cannot be used
00:10:14.835 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:14.835 EAL: Requested device 0000:3d:02.3 cannot be used
00:10:14.835 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:14.835 EAL: Requested device 0000:3d:02.4 cannot be used
00:10:14.835 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:14.835 EAL: Requested device 0000:3d:02.5 cannot be used
00:10:14.835 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:14.835 EAL: Requested device 0000:3d:02.6 cannot be used
00:10:14.835 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:14.835 EAL: Requested device 0000:3d:02.7 cannot be used
00:10:14.835 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:14.835 EAL: Requested device 0000:3f:01.0 cannot be used
00:10:14.835 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:14.835 EAL: Requested device 0000:3f:01.1 cannot be used
00:10:14.835 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:14.835 EAL: Requested device 0000:3f:01.2 cannot be used
00:10:14.835 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:14.835 EAL: Requested device 0000:3f:01.3 cannot be used
00:10:14.835 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:14.835 EAL: Requested device 0000:3f:01.4 cannot be used
00:10:14.835 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:14.835 EAL: Requested device 0000:3f:01.5 cannot be used
00:10:14.835 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:14.835 EAL: Requested device 0000:3f:01.6 cannot be used
00:10:14.835 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:14.835 EAL: Requested device 0000:3f:01.7 cannot be used
00:10:14.835 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:14.835 EAL: Requested device 0000:3f:02.0 cannot be used
00:10:14.835 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:14.835 EAL: Requested device 0000:3f:02.1 cannot be used
00:10:14.835 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:14.835 EAL: Requested device 0000:3f:02.2 cannot be used
00:10:14.835 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:14.835 EAL: Requested device 0000:3f:02.3 cannot be used
00:10:14.835 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:14.835 EAL: Requested device 0000:3f:02.4 cannot be used
00:10:14.835 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:14.835 EAL: Requested device 0000:3f:02.5 cannot be used
00:10:14.835 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:14.835 EAL: Requested device 0000:3f:02.6 cannot be used
00:10:14.835 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:14.835 EAL: Requested device 0000:3f:02.7 cannot be used
00:10:14.835 [2024-07-23 04:05:23.566551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:15.093 [2024-07-23 04:05:23.849377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:10:15.660 04:05:24 accel.accel_decomp_mthread --
accel/accel.sh@20 -- # val=
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes'
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds'
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:10:15.660 04:05:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:10:18.192 04:05:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:10:18.192 04:05:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:10:18.192 04:05:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:10:18.192 04:05:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:10:18.192 04:05:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:10:18.192 04:05:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:10:18.192 04:05:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:10:18.192 04:05:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:10:18.192 04:05:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:10:18.192 04:05:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:10:18.192 04:05:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:10:18.192 04:05:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:10:18.192 04:05:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:10:18.192 04:05:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:10:18.192 04:05:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:10:18.192 04:05:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:10:18.192 04:05:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:10:18.192 04:05:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:10:18.192 04:05:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:10:18.192 04:05:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:10:18.192 04:05:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:10:18.192 04:05:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:10:18.192 04:05:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:10:18.192 04:05:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:10:18.192 04:05:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:10:18.192 04:05:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:10:18.192 04:05:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:10:18.192 04:05:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:10:18.192 04:05:26 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
00:10:18.192 04:05:26 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:10:18.192 04:05:26 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:10:18.192
00:10:18.192 real 0m3.274s
00:10:18.192 user 0m2.974s
00:10:18.192 sys 0m0.303s
00:10:18.192 04:05:26 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable
00:10:18.192 04:05:26 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x
00:10:18.192 ************************************
00:10:18.192 END TEST accel_decomp_mthread
00:10:18.192 ************************************
00:10:18.192 04:05:26 accel -- common/autotest_common.sh@1142 -- # return 0
00:10:18.192 04:05:26 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:10:18.192 04:05:26 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:10:18.192 04:05:26 accel -- common/autotest_common.sh@1105 -- #
xtrace_disable
00:10:18.192 04:05:26 accel -- common/autotest_common.sh@10 -- # set +x
00:10:18.192 ************************************
00:10:18.192 START TEST accel_decomp_full_mthread
00:10:18.192 ************************************
00:10:18.192 04:05:26 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:10:18.192 04:05:26 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc
00:10:18.192 04:05:26 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module
00:10:18.192 04:05:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:10:18.192 04:05:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:10:18.192 04:05:26 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:10:18.192 04:05:26 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:10:18.192 04:05:26 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config
00:10:18.192 04:05:26 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=()
00:10:18.192 04:05:26 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:10:18.192 04:05:26 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:18.192 04:05:26 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:18.192 04:05:26 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]]
00:10:18.192 04:05:26 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=,
00:10:18.192 04:05:26 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r .
00:10:18.192 [2024-07-23 04:05:26.690374] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
00:10:18.192 [2024-07-23 04:05:26.690473] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2571628 ]
00:10:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:18.192 EAL: Requested device 0000:3d:01.0 cannot be used
00:10:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:18.192 EAL: Requested device 0000:3d:01.1 cannot be used
00:10:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:18.192 EAL: Requested device 0000:3d:01.2 cannot be used
00:10:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:18.192 EAL: Requested device 0000:3d:01.3 cannot be used
00:10:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:18.192 EAL: Requested device 0000:3d:01.4 cannot be used
00:10:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:18.192 EAL: Requested device 0000:3d:01.5 cannot be used
00:10:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:18.192 EAL: Requested device 0000:3d:01.6 cannot be used
00:10:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:18.192 EAL: Requested device 0000:3d:01.7 cannot be used
00:10:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:18.192 EAL: Requested device 0000:3d:02.0 cannot be used
00:10:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:18.192 EAL: Requested device 0000:3d:02.1 cannot be used
00:10:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:18.192 EAL: Requested device 0000:3d:02.2 cannot be used
00:10:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:18.192 EAL: Requested device 0000:3d:02.3 cannot be used
00:10:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:18.192 EAL: Requested device 0000:3d:02.4 cannot be used
00:10:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:18.192 EAL: Requested device 0000:3d:02.5 cannot be used
00:10:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:18.192 EAL: Requested device 0000:3d:02.6 cannot be used
00:10:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:18.192 EAL: Requested device 0000:3d:02.7 cannot be used
00:10:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:18.192 EAL: Requested device 0000:3f:01.0 cannot be used
00:10:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:18.192 EAL: Requested device 0000:3f:01.1 cannot be used
00:10:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:18.192 EAL: Requested device 0000:3f:01.2 cannot be used
00:10:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:18.192 EAL: Requested device 0000:3f:01.3 cannot be used
00:10:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:18.192 EAL: Requested device 0000:3f:01.4 cannot be used
00:10:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:18.192 EAL: Requested device 0000:3f:01.5 cannot be used
00:10:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:18.192 EAL: Requested device 0000:3f:01.6 cannot be used
00:10:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:18.192 EAL: Requested device 0000:3f:01.7 cannot be used
00:10:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:18.192 EAL: Requested device 0000:3f:02.0 cannot be used
00:10:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:18.192 EAL: Requested device 0000:3f:02.1 cannot be used
00:10:18.193 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:18.193 EAL: Requested device 0000:3f:02.2 cannot be used
00:10:18.193 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:18.193 EAL: Requested device 0000:3f:02.3 cannot be used
00:10:18.193 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:18.193 EAL: Requested device 0000:3f:02.4 cannot be used
00:10:18.193 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:18.193 EAL: Requested device 0000:3f:02.5 cannot be used
00:10:18.193 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:18.193 EAL: Requested device 0000:3f:02.6 cannot be used
00:10:18.193 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:18.193 EAL: Requested device 0000:3f:02.7 cannot be used
00:10:18.193 [2024-07-23 04:05:26.915024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:18.452 [2024-07-23 04:05:27.194983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:10:19.019 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=
00:10:19.019 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:10:19.019 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:10:19.019 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:10:19.019 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=
00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1
00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=
00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=
00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress
00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress
00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes'
00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=
00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software
00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software
00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib
00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32
00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32
00:10:19.020 04:05:27 accel.accel_decomp_full_mthread --
accel/accel.sh@21 -- # case "$var" in 00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:19.020 04:05:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:19.020 04:05:27 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:21.555 04:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:21.555 04:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:21.555 04:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:21.555 04:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:21.555 04:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:21.555 04:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:21.555 04:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:21.555 04:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:21.555 04:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:21.555 04:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:21.555 04:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:21.555 04:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:21.555 04:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:21.555 04:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:21.555 04:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:21.555 04:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:21.555 04:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:21.555 04:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:21.555 04:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:21.555 04:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:21.555 04:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:21.555 04:05:29 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:21.555 04:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:21.555 04:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:21.555 04:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:21.555 04:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:21.555 04:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:21.555 04:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:21.555 04:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:21.555 04:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:21.555 04:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:21.555 00:10:21.555 real 0m3.243s 00:10:21.555 user 0m2.935s 00:10:21.555 sys 0m0.310s 00:10:21.555 04:05:29 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:21.555 04:05:29 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:10:21.555 ************************************ 00:10:21.555 END TEST accel_decomp_full_mthread 00:10:21.555 ************************************ 00:10:21.555 04:05:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:21.555 04:05:29 accel -- accel/accel.sh@124 -- # [[ y == y ]] 00:10:21.555 04:05:29 accel -- accel/accel.sh@125 -- # COMPRESSDEV=1 00:10:21.555 04:05:29 accel -- accel/accel.sh@126 -- # get_expected_opcs 00:10:21.555 04:05:29 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:21.555 04:05:29 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2572171 00:10:21.555 04:05:29 accel -- accel/accel.sh@63 -- # waitforlisten 2572171 00:10:21.555 04:05:29 accel -- common/autotest_common.sh@829 -- # '[' -z 2572171 ']' 
00:10:21.555 04:05:29 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.555 04:05:29 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:21.555 04:05:29 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:10:21.555 04:05:29 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.555 04:05:29 accel -- accel/accel.sh@61 -- # build_accel_config 00:10:21.555 04:05:29 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:21.555 04:05:29 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:21.555 04:05:29 accel -- common/autotest_common.sh@10 -- # set +x 00:10:21.555 04:05:29 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:21.555 04:05:29 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:21.555 04:05:29 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:21.555 04:05:29 accel -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:10:21.555 04:05:29 accel -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:10:21.555 04:05:29 accel -- accel/accel.sh@40 -- # local IFS=, 00:10:21.555 04:05:29 accel -- accel/accel.sh@41 -- # jq -r . 00:10:21.555 [2024-07-23 04:05:30.036332] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:10:21.555 [2024-07-23 04:05:30.036456] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2572171 ] 00:10:21.555 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.555 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:21.555 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.555 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:21.555 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.555 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:21.555 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.555 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:21.555 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.555 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:21.555 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.555 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:21.555 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.555 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:21.555 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.555 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:21.555 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.555 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:21.555 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.556 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:21.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.556 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:21.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.556 EAL: Requested device 0000:3d:02.3 cannot be used 
00:10:21.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.556 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:21.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.556 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:21.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.556 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:21.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.556 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:21.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.556 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:21.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.556 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:21.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.556 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:21.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.556 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:21.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.556 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:21.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.556 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:21.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.556 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:21.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.556 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:21.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.556 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:21.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.556 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:21.556 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.556 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:21.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.556 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:21.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.556 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:21.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.556 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:21.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.556 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:21.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.556 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:21.556 [2024-07-23 04:05:30.260897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.815 [2024-07-23 04:05:30.518686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.197 [2024-07-23 04:05:31.912969] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:10:24.134 04:05:32 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:24.134 04:05:32 accel -- common/autotest_common.sh@862 -- # return 0 00:10:24.134 04:05:32 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:10:24.134 04:05:32 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:10:24.134 04:05:32 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:10:24.134 04:05:32 accel -- accel/accel.sh@68 -- # [[ -n 1 ]] 00:10:24.134 04:05:32 accel -- accel/accel.sh@68 -- # check_save_config compressdev_scan_accel_module 00:10:24.134 04:05:32 accel -- accel/accel.sh@56 -- # rpc_cmd save_config 00:10:24.134 04:05:32 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.134 04:05:32 accel -- accel/accel.sh@56 -- # jq -r '.subsystems[] | select(.subsystem=="accel").config[]' 00:10:24.134 04:05:32 accel -- 
common/autotest_common.sh@10 -- # set +x 00:10:24.134 04:05:32 accel -- accel/accel.sh@56 -- # grep compressdev_scan_accel_module 00:10:24.393 04:05:32 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.393 "method": "compressdev_scan_accel_module", 00:10:24.393 04:05:32 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:10:24.393 04:05:32 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:10:24.393 04:05:32 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:10:24.393 04:05:32 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.393 04:05:32 accel -- common/autotest_common.sh@10 -- # set +x 00:10:24.393 04:05:32 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.393 04:05:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:24.393 04:05:32 accel -- accel/accel.sh@72 -- # IFS== 00:10:24.393 04:05:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:24.393 04:05:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:24.394 04:05:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:24.394 04:05:32 accel -- accel/accel.sh@72 -- # IFS== 00:10:24.394 04:05:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:24.394 04:05:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:24.394 04:05:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:24.394 04:05:32 accel -- accel/accel.sh@72 -- # IFS== 00:10:24.394 04:05:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:24.394 04:05:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:24.394 04:05:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:24.394 04:05:32 accel -- accel/accel.sh@72 -- # IFS== 00:10:24.394 04:05:32 accel -- accel/accel.sh@72 -- # read -r opc module 
00:10:24.394 04:05:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:24.394 04:05:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:24.394 04:05:32 accel -- accel/accel.sh@72 -- # IFS== 00:10:24.394 04:05:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:24.394 04:05:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:24.394 04:05:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:24.394 04:05:32 accel -- accel/accel.sh@72 -- # IFS== 00:10:24.394 04:05:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:24.394 04:05:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:24.394 04:05:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:24.394 04:05:32 accel -- accel/accel.sh@72 -- # IFS== 00:10:24.394 04:05:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:24.394 04:05:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dpdk_compressdev 00:10:24.394 04:05:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:24.394 04:05:32 accel -- accel/accel.sh@72 -- # IFS== 00:10:24.394 04:05:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:24.394 04:05:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dpdk_compressdev 00:10:24.394 04:05:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:24.394 04:05:32 accel -- accel/accel.sh@72 -- # IFS== 00:10:24.394 04:05:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:24.394 04:05:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:24.394 04:05:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:24.394 04:05:32 accel -- accel/accel.sh@72 -- # IFS== 00:10:24.394 04:05:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:24.394 04:05:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:24.394 04:05:32 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:10:24.394 04:05:32 accel -- accel/accel.sh@72 -- # IFS== 00:10:24.394 04:05:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:24.394 04:05:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:24.394 04:05:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:24.394 04:05:32 accel -- accel/accel.sh@72 -- # IFS== 00:10:24.394 04:05:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:24.394 04:05:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:24.394 04:05:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:24.394 04:05:32 accel -- accel/accel.sh@72 -- # IFS== 00:10:24.394 04:05:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:24.394 04:05:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:24.394 04:05:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:24.394 04:05:32 accel -- accel/accel.sh@72 -- # IFS== 00:10:24.394 04:05:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:24.394 04:05:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:24.394 04:05:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:24.394 04:05:32 accel -- accel/accel.sh@72 -- # IFS== 00:10:24.394 04:05:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:24.394 04:05:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:24.394 04:05:32 accel -- accel/accel.sh@75 -- # killprocess 2572171 00:10:24.394 04:05:32 accel -- common/autotest_common.sh@948 -- # '[' -z 2572171 ']' 00:10:24.394 04:05:32 accel -- common/autotest_common.sh@952 -- # kill -0 2572171 00:10:24.394 04:05:32 accel -- common/autotest_common.sh@953 -- # uname 00:10:24.394 04:05:33 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:24.394 04:05:33 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2572171 00:10:24.394 04:05:33 accel -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:24.394 04:05:33 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:24.394 04:05:33 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2572171' 00:10:24.394 killing process with pid 2572171 00:10:24.394 04:05:33 accel -- common/autotest_common.sh@967 -- # kill 2572171 00:10:24.394 04:05:33 accel -- common/autotest_common.sh@972 -- # wait 2572171 00:10:27.720 04:05:35 accel -- accel/accel.sh@76 -- # trap - ERR 00:10:27.720 04:05:35 accel -- accel/accel.sh@127 -- # run_test accel_cdev_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:27.720 04:05:35 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:10:27.720 04:05:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:27.720 04:05:35 accel -- common/autotest_common.sh@10 -- # set +x 00:10:27.720 ************************************ 00:10:27.720 START TEST accel_cdev_comp 00:10:27.720 ************************************ 00:10:27.720 04:05:35 accel.accel_cdev_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:27.720 04:05:35 accel.accel_cdev_comp -- accel/accel.sh@16 -- # local accel_opc 00:10:27.720 04:05:35 accel.accel_cdev_comp -- accel/accel.sh@17 -- # local accel_module 00:10:27.720 04:05:35 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:27.720 04:05:35 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:27.720 04:05:35 accel.accel_cdev_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:27.720 04:05:35 accel.accel_cdev_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 
00:10:27.720 04:05:35 accel.accel_cdev_comp -- accel/accel.sh@12 -- # build_accel_config 00:10:27.720 04:05:35 accel.accel_cdev_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:27.720 04:05:35 accel.accel_cdev_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:27.720 04:05:35 accel.accel_cdev_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:27.720 04:05:35 accel.accel_cdev_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:27.720 04:05:35 accel.accel_cdev_comp -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:10:27.720 04:05:35 accel.accel_cdev_comp -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:10:27.720 04:05:35 accel.accel_cdev_comp -- accel/accel.sh@40 -- # local IFS=, 00:10:27.720 04:05:35 accel.accel_cdev_comp -- accel/accel.sh@41 -- # jq -r . 00:10:27.720 [2024-07-23 04:05:35.952706] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:10:27.720 [2024-07-23 04:05:35.952807] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2573237 ] 00:10:27.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.720 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:27.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.721 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:27.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.721 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:27.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.721 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:27.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.721 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:27.721 qat_pci_device_allocate(): Reached 
maximum number of QAT devices 00:10:27.721 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:27.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.721 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:27.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.721 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:27.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.721 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:27.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.721 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:27.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.721 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:27.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.721 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:27.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.721 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:27.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.721 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:27.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.721 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:27.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.721 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:27.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.721 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:27.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.721 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:27.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.721 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:27.721 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:10:27.721 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:27.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.721 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:27.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.721 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:27.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.721 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:27.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.721 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:27.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.721 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:27.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.721 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:27.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.721 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:27.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.721 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:27.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.721 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:27.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.721 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:27.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.721 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:27.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:27.721 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:27.721 [2024-07-23 04:05:36.180864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.721 [2024-07-23 04:05:36.435475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.099 [2024-07-23 
04:05:37.757706] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:10:29.099 [2024-07-23 04:05:37.760774] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000016140 PMD being used: compress_qat 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=0x1 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:29.099 [2024-07-23 04:05:37.769156] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000016220 PMD being used: compress_qat 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r 
var val 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=compress 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:29.099 
04:05:37 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=32 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=32 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=1 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=No 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:10:29.099 04:05:37 accel.accel_cdev_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:29.099 04:05:37 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:31.004 04:05:39 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:10:31.004 04:05:39 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:31.004 04:05:39 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:31.004 04:05:39 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:31.004 04:05:39 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:10:31.004 04:05:39 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:31.004 04:05:39 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:31.004 04:05:39 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:31.004 04:05:39 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:10:31.004 04:05:39 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:31.004 04:05:39 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:31.004 04:05:39 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:31.004 04:05:39 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:10:31.004 04:05:39 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:31.004 04:05:39 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:31.004 04:05:39 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:31.004 04:05:39 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:10:31.004 04:05:39 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:31.004 04:05:39 
accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:31.004 04:05:39 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:31.004 04:05:39 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:10:31.004 04:05:39 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:31.004 04:05:39 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:10:31.004 04:05:39 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:10:31.004 04:05:39 accel.accel_cdev_comp -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:10:31.004 04:05:39 accel.accel_cdev_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:10:31.004 04:05:39 accel.accel_cdev_comp -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:10:31.004 00:10:31.004 real 0m3.611s 00:10:31.004 user 0m2.967s 00:10:31.004 sys 0m0.645s 00:10:31.004 04:05:39 accel.accel_cdev_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:31.004 04:05:39 accel.accel_cdev_comp -- common/autotest_common.sh@10 -- # set +x 00:10:31.004 ************************************ 00:10:31.004 END TEST accel_cdev_comp 00:10:31.004 ************************************ 00:10:31.004 04:05:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:31.004 04:05:39 accel -- accel/accel.sh@128 -- # run_test accel_cdev_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:10:31.004 04:05:39 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:10:31.004 04:05:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:31.004 04:05:39 accel -- common/autotest_common.sh@10 -- # set +x 00:10:31.004 ************************************ 00:10:31.004 START TEST accel_cdev_decomp 00:10:31.004 ************************************ 00:10:31.004 04:05:39 accel.accel_cdev_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:10:31.004 04:05:39 accel.accel_cdev_decomp -- accel/accel.sh@16 -- # local accel_opc 00:10:31.004 04:05:39 accel.accel_cdev_decomp -- accel/accel.sh@17 -- # local accel_module 00:10:31.004 04:05:39 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:31.004 04:05:39 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:31.004 04:05:39 accel.accel_cdev_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:10:31.004 04:05:39 accel.accel_cdev_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:10:31.004 04:05:39 accel.accel_cdev_decomp -- accel/accel.sh@12 -- # build_accel_config 00:10:31.004 04:05:39 accel.accel_cdev_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:31.004 04:05:39 accel.accel_cdev_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:31.004 04:05:39 accel.accel_cdev_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:31.004 04:05:39 accel.accel_cdev_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:31.004 04:05:39 accel.accel_cdev_decomp -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:10:31.004 04:05:39 accel.accel_cdev_decomp -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:10:31.004 04:05:39 accel.accel_cdev_decomp -- accel/accel.sh@40 -- # local IFS=, 00:10:31.004 04:05:39 accel.accel_cdev_decomp -- accel/accel.sh@41 -- # jq -r . 00:10:31.004 [2024-07-23 04:05:39.644196] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:10:31.004 [2024-07-23 04:05:39.644295] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2573799 ] 00:10:31.004 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.004 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:31.004 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.004 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:31.004 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.004 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:31.004 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.004 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:31.004 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.004 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:31.004 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.004 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:31.004 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.004 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:31.004 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.004 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:31.004 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.004 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:31.004 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.004 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:31.004 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.004 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:31.004 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.004 EAL: Requested device 0000:3d:02.3 cannot be used 
00:10:31.004 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.004 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:31.004 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.004 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:31.004 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.004 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:31.004 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.005 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:31.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.005 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:31.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.005 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:31.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.005 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:31.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.005 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:31.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.005 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:31.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.005 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:31.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.005 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:31.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.005 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:31.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.005 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:31.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.005 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:31.005 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.005 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:31.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.005 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:31.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.005 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:31.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.005 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:31.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.005 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:31.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.005 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:31.264 [2024-07-23 04:05:39.871525] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.522 [2024-07-23 04:05:40.134802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.898 [2024-07-23 04:05:41.479547] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:10:32.898 [2024-07-23 04:05:41.482581] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000016140 PMD being used: compress_qat 00:10:32.898 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:32.898 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:32.898 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:32.898 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:32.898 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:32.898 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:32.898 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:32.898 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:32.898 04:05:41 
accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:32.898 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:32.898 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:32.898 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:32.898 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=0x1 00:10:32.898 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:32.898 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:32.898 [2024-07-23 04:05:41.490431] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000016220 PMD being used: compress_qat 00:10:32.898 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:32.898 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:32.898 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:32.898 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:32.898 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:32.898 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:32.898 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:32.898 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:32.898 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:32.898 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=decompress 00:10:32.898 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:32.898 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:32.898 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:32.898 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:32.898 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:32.898 04:05:41 
accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:32.898 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:32.898 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:32.898 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:32.898 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:32.898 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:32.898 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:32.898 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:10:32.899 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:32.899 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:10:32.899 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:32.899 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:32.899 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:32.899 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:32.899 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:32.899 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:32.899 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=32 00:10:32.899 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:32.899 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:32.899 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:32.899 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=32 00:10:32.899 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:32.899 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:32.899 04:05:41 
accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:32.899 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=1 00:10:32.899 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:32.899 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:32.899 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:32.899 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:10:32.899 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:32.899 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:32.899 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:32.899 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=Yes 00:10:32.899 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:32.899 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:32.899 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:32.899 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:32.899 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:32.899 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:32.899 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:32.899 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:32.899 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:32.899 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:32.899 04:05:41 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:34.802 04:05:43 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:34.802 04:05:43 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:34.802 04:05:43 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:34.802 04:05:43 
accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:34.802 04:05:43 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:34.802 04:05:43 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:34.802 04:05:43 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:34.802 04:05:43 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:34.802 04:05:43 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:34.802 04:05:43 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:34.802 04:05:43 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:34.802 04:05:43 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:34.802 04:05:43 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:34.802 04:05:43 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:34.802 04:05:43 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:34.802 04:05:43 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:34.802 04:05:43 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:34.802 04:05:43 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:34.802 04:05:43 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:34.802 04:05:43 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:34.802 04:05:43 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:34.802 04:05:43 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:34.802 04:05:43 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:34.802 04:05:43 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:34.802 04:05:43 accel.accel_cdev_decomp -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:10:34.802 04:05:43 accel.accel_cdev_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:34.802 04:05:43 accel.accel_cdev_decomp -- accel/accel.sh@27 -- # [[ dpdk_compressdev 
== \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:10:34.802 00:10:34.802 real 0m3.640s 00:10:34.802 user 0m2.971s 00:10:34.802 sys 0m0.666s 00:10:34.802 04:05:43 accel.accel_cdev_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:34.802 04:05:43 accel.accel_cdev_decomp -- common/autotest_common.sh@10 -- # set +x 00:10:34.802 ************************************ 00:10:34.802 END TEST accel_cdev_decomp 00:10:34.802 ************************************ 00:10:34.802 04:05:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:34.802 04:05:43 accel -- accel/accel.sh@129 -- # run_test accel_cdev_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:34.802 04:05:43 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:10:34.802 04:05:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:34.802 04:05:43 accel -- common/autotest_common.sh@10 -- # set +x 00:10:34.802 ************************************ 00:10:34.802 START TEST accel_cdev_decomp_full 00:10:34.802 ************************************ 00:10:34.802 04:05:43 accel.accel_cdev_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:34.802 04:05:43 accel.accel_cdev_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:10:34.802 04:05:43 accel.accel_cdev_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:10:34.802 04:05:43 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:34.802 04:05:43 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:34.802 04:05:43 accel.accel_cdev_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:34.802 04:05:43 accel.accel_cdev_decomp_full -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:34.802 04:05:43 accel.accel_cdev_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:10:34.802 04:05:43 accel.accel_cdev_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:34.802 04:05:43 accel.accel_cdev_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:34.802 04:05:43 accel.accel_cdev_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:34.802 04:05:43 accel.accel_cdev_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:34.802 04:05:43 accel.accel_cdev_decomp_full -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:10:34.802 04:05:43 accel.accel_cdev_decomp_full -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:10:34.802 04:05:43 accel.accel_cdev_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:10:34.802 04:05:43 accel.accel_cdev_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:10:34.802 [2024-07-23 04:05:43.372577] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:10:34.802 [2024-07-23 04:05:43.372686] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2574421 ] 00:10:34.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.802 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:34.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.802 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:34.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.802 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:34.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.802 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:34.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.802 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:34.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.802 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:34.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.802 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:34.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.802 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:34.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.802 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:34.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.802 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:34.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.802 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:34.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.802 EAL: Requested device 0000:3d:02.3 cannot be used 
00:10:34.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.802 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:34.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.802 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:34.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.802 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:34.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.802 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:34.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.802 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:34.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.803 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:34.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.803 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:34.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.803 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:34.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.803 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:34.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.803 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:34.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.803 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:34.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.803 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:34.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.803 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:34.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.803 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:34.803 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.803 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:34.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.803 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:34.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.803 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:34.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.803 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:34.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.803 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:34.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:34.803 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:35.065 [2024-07-23 04:05:43.598239] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.327 [2024-07-23 04:05:43.862254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.703 [2024-07-23 04:05:45.246112] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:10:36.703 [2024-07-23 04:05:45.249212] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000016140 PMD being used: compress_qat 00:10:36.703 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:36.703 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:36.703 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:36.703 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:36.703 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:36.703 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:36.703 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:36.703 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- 
# read -r var val 00:10:36.703 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:36.703 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:36.703 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:36.703 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:36.703 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:10:36.703 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:36.703 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:36.703 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:36.703 [2024-07-23 04:05:45.257242] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000016220 PMD being used: compress_qat 00:10:36.703 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:36.703 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:36.703 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:36.703 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:36.703 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:36.703 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:36.703 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:36.703 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:36.703 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:10:36.703 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:36.703 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:36.703 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:36.704 04:05:45 accel.accel_cdev_decomp_full -- 
accel/accel.sh@19 -- # read -r var val 00:10:36.704 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:10:36.704 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:36.704 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:36.704 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:36.704 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:36.704 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:36.704 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:36.704 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:36.704 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:10:36.704 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:36.704 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:10:36.704 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:36.704 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:36.704 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:36.704 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:36.704 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:36.704 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:36.704 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=32 00:10:36.704 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:36.704 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:36.704 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 
00:10:36.704 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=32 00:10:36.704 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:36.704 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:36.704 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:36.704 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=1 00:10:36.704 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:36.704 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:36.704 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:36.704 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:10:36.704 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:36.704 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:36.704 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:36.704 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:10:36.704 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:36.704 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:36.704 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:36.704 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:36.704 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:36.704 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:36.704 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:36.704 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:36.704 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:36.704 04:05:45 accel.accel_cdev_decomp_full -- 
accel/accel.sh@19 -- # IFS=: 00:10:36.704 04:05:45 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:38.607 04:05:47 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:38.607 04:05:47 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:38.607 04:05:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:38.607 04:05:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:38.607 04:05:47 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:38.607 04:05:47 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:38.607 04:05:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:38.607 04:05:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:38.607 04:05:47 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:38.607 04:05:47 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:38.607 04:05:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:38.607 04:05:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:38.607 04:05:47 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:38.607 04:05:47 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:38.607 04:05:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:38.607 04:05:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:38.607 04:05:47 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:38.607 04:05:47 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:38.607 04:05:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:38.607 04:05:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:38.607 04:05:47 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:38.607 04:05:47 
accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:38.607 04:05:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:38.607 04:05:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:38.607 04:05:47 accel.accel_cdev_decomp_full -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:10:38.607 04:05:47 accel.accel_cdev_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:38.607 04:05:47 accel.accel_cdev_decomp_full -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:10:38.607 00:10:38.607 real 0m3.714s 00:10:38.607 user 0m3.053s 00:10:38.607 sys 0m0.660s 00:10:38.607 04:05:47 accel.accel_cdev_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:38.607 04:05:47 accel.accel_cdev_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:10:38.607 ************************************ 00:10:38.607 END TEST accel_cdev_decomp_full 00:10:38.607 ************************************ 00:10:38.607 04:05:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:38.607 04:05:47 accel -- accel/accel.sh@130 -- # run_test accel_cdev_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:38.607 04:05:47 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:10:38.607 04:05:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:38.607 04:05:47 accel -- common/autotest_common.sh@10 -- # set +x 00:10:38.607 ************************************ 00:10:38.607 START TEST accel_cdev_decomp_mcore 00:10:38.607 ************************************ 00:10:38.607 04:05:47 accel.accel_cdev_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:38.607 04:05:47 accel.accel_cdev_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:10:38.607 04:05:47 
accel.accel_cdev_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:10:38.607 04:05:47 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:38.607 04:05:47 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:38.607 04:05:47 accel.accel_cdev_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:38.607 04:05:47 accel.accel_cdev_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:38.607 04:05:47 accel.accel_cdev_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:10:38.607 04:05:47 accel.accel_cdev_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:38.607 04:05:47 accel.accel_cdev_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:38.607 04:05:47 accel.accel_cdev_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:38.607 04:05:47 accel.accel_cdev_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:38.607 04:05:47 accel.accel_cdev_decomp_mcore -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:10:38.607 04:05:47 accel.accel_cdev_decomp_mcore -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:10:38.607 04:05:47 accel.accel_cdev_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:10:38.607 04:05:47 accel.accel_cdev_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:10:38.607 [2024-07-23 04:05:47.170999] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:10:38.607 [2024-07-23 04:05:47.171105] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2575144 ] 00:10:38.607 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:38.607 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:38.607 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:38.607 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:38.607 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:38.607 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:38.607 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:38.607 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:38.607 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:38.607 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:38.607 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:38.607 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:38.607 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:38.607 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:38.607 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:38.607 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:38.607 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:38.607 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:38.607 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:38.608 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:38.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:38.608 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:38.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:38.608 EAL: Requested device 0000:3d:02.3 cannot be used 
00:10:38.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:38.608 EAL: Requested device 0000:3d:02.4 cannot be used [identical warning pair repeated for each device 0000:3d:02.5 through 0000:3f:02.1] 00:10:38.608 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:38.608 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:38.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:38.608 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:38.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:38.608 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:38.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:38.608 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:38.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:38.608 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:38.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:38.608 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:38.867 [2024-07-23 04:05:47.399085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:39.125 [2024-07-23 04:05:47.673172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:39.125 [2024-07-23 04:05:47.673231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:39.125 [2024-07-23 04:05:47.673293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.125 [2024-07-23 04:05:47.673301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:40.502 [2024-07-23 04:05:49.030746] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:10:40.502 [2024-07-23 04:05:49.033931] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00002d1a0 PMD being used: compress_qat 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 
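The wall of `IFS=:` / `read -r var val` trace lines in this log comes from accel.sh's settings loop: the harness emits `key:value` pairs and a `while read` loop with a `case` dispatch consumes them. A minimal standalone sketch of that shell pattern follows; the key names and echoed output are illustrative only, not accel.sh's actual handling:

```shell
# Minimal sketch of the "IFS=: read -r var val" + case-dispatch pattern
# visible in the trace. Keys ("opc", "qd") and outputs are illustrative.
parse_settings() {
  local var val
  while IFS=: read -r var val; do
    case "$var" in
      opc) echo "operation=$val" ;;
      qd)  echo "queue_depth=$val" ;;
      *)   : ;;   # unknown keys are ignored, as the trace's empty "val=" lines suggest
    esac
  done
}
result=$(printf 'opc:decompress\nqd:32\n' | parse_settings)
echo "$result"
```

Each trace line in the log is one iteration of this loop under `set -x`, which is why the same three statements (`case "$var" in`, `IFS=:`, `read -r var val`) repeat for every setting.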
00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:40.502 [2024-07-23 04:05:49.044109] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000010100 PMD being used: compress_qat 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # 
val=decompress 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:40.502 [2024-07-23 04:05:49.046010] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000017100 PMD being used: compress_qat 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:40.502 04:05:49 
accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:10:40.502 [2024-07-23 04:05:49.050353] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000020160 PMD being used: compress_qat 00:10:40.502 [2024-07-23 04:05:49.050532] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00002d280 PMD being used: compress_qat 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:40.502 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:40.503 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:10:40.503 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:40.503 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:40.503 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:40.503 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:10:40.503 04:05:49 
accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:40.503 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:40.503 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:40.503 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:40.503 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:40.503 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:40.503 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:40.503 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:40.503 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:40.503 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:40.503 04:05:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:42.403 04:05:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:42.403 04:05:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:42.403 04:05:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:42.403 04:05:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:42.403 04:05:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:42.403 04:05:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:42.403 04:05:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:42.403 04:05:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:42.403 04:05:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:42.403 04:05:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:42.403 04:05:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:42.403 04:05:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 
-- # read -r var val 00:10:42.403 04:05:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:42.403 04:05:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:42.403 04:05:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:42.403 04:05:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:42.403 04:05:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:42.403 04:05:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:42.403 04:05:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:42.403 04:05:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:42.403 04:05:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:42.403 04:05:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:42.403 04:05:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:42.403 04:05:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:42.403 04:05:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:42.403 04:05:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:42.403 04:05:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:42.403 04:05:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:42.403 04:05:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:42.403 04:05:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:42.403 04:05:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:42.403 04:05:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:42.403 04:05:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:42.403 04:05:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:42.403 04:05:50 
accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:42.403 04:05:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:42.403 04:05:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:10:42.403 04:05:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:42.403 04:05:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:10:42.403 00:10:42.403 real 0m3.836s 00:10:42.403 user 0m0.028s 00:10:42.403 sys 0m0.004s 00:10:42.403 04:05:50 accel.accel_cdev_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:42.403 04:05:50 accel.accel_cdev_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:10:42.403 ************************************ 00:10:42.403 END TEST accel_cdev_decomp_mcore 00:10:42.403 ************************************ 00:10:42.403 04:05:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:42.403 04:05:50 accel -- accel/accel.sh@131 -- # run_test accel_cdev_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:42.403 04:05:50 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:10:42.403 04:05:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:42.403 04:05:50 accel -- common/autotest_common.sh@10 -- # set +x 00:10:42.403 ************************************ 00:10:42.403 START TEST accel_cdev_decomp_full_mcore 00:10:42.403 ************************************ 00:10:42.403 04:05:51 accel.accel_cdev_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:42.403 04:05:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:10:42.403 04:05:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@17 -- # 
local accel_module 00:10:42.403 04:05:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:42.403 04:05:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:42.403 04:05:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:42.403 04:05:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:42.403 04:05:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:10:42.403 04:05:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:42.403 04:05:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:42.403 04:05:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:42.403 04:05:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:42.403 04:05:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:10:42.403 04:05:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:10:42.403 04:05:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:10:42.403 04:05:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:10:42.403 [2024-07-23 04:05:51.086750] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
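The `build_accel_config` steps traced above assemble the test's accel JSON the same way each time: method entries accumulate in the `accel_json_cfg` array, `local IFS=,` joins them, and the result is piped through `jq -r .`. A reduced sketch of just the array-join step (the `jq` pretty-print stage is omitted so the snippet has no external dependency):

```shell
# Reduced sketch of the accel_json_cfg assembly seen in the trace:
# config fragments accumulate in a bash array and are joined with IFS=,.
accel_json_cfg=()
accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}')
join_cfg() {
  local IFS=,            # same join the script performs before "jq -r ."
  echo "$*"
}
joined=$(join_cfg "${accel_json_cfg[@]}")
echo "$joined"
```

With a single entry the join is a no-op; with several `accel_json_cfg+=` calls the fragments would come out comma-separated, ready to embed in a JSON array.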
00:10:42.403 [2024-07-23 04:05:51.086856] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2575743 ] 00:10:42.662 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.662 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:42.662 [same "qat_pci_device_allocate(): Reached maximum number of QAT devices" / "EAL: Requested device ... cannot be used" pair repeated for the remaining 31 QAT virtual functions, 0000:3d:01.1 through 0000:3f:02.7] 00:10:42.662 [2024-07-23 04:05:51.313050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:42.920 [2024-07-23 04:05:51.590640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:42.920 [2024-07-23 04:05:51.590713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:42.920 [2024-07-23 04:05:51.590776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.920 [2024-07-23 04:05:51.590784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:44.297 [2024-07-23 04:05:52.964885] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:10:44.297 [2024-07-23 04:05:52.968094] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00002d1a0 PMD being used: compress_qat 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read
-r var val 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:10:44.297 
[2024-07-23 04:05:52.978145] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000010100 PMD being used: compress_qat 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:44.297 [2024-07-23 04:05:52.980158] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000017100 PMD being used: compress_qat 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- 
accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:10:44.297 [2024-07-23 04:05:52.984294] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000020160 PMD being used: compress_qat 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:44.297 [2024-07-23 04:05:52.984508] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00002d280 PMD being used: compress_qat 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:44.297 04:05:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:46.202 04:05:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:46.202 04:05:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:46.202 04:05:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:46.202 04:05:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:46.202 04:05:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:46.202 04:05:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:46.202 04:05:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:46.202 04:05:54 
accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:46.202 04:05:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:46.202 04:05:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:46.202 04:05:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:46.202 04:05:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:46.202 04:05:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:46.202 04:05:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:46.202 04:05:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:46.202 04:05:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:46.202 04:05:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:46.203 04:05:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:46.203 04:05:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:46.203 04:05:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:46.203 04:05:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:46.203 04:05:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:46.203 04:05:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:46.203 04:05:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:46.203 04:05:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:46.203 04:05:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:46.203 04:05:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:46.203 04:05:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:46.203 04:05:54 
accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:46.203 04:05:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:46.203 04:05:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:46.203 04:05:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:46.203 04:05:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:46.203 04:05:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:46.203 04:05:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:46.203 04:05:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:46.203 04:05:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:10:46.203 04:05:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:46.203 04:05:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:10:46.203 00:10:46.203 real 0m3.897s 00:10:46.203 user 0m11.495s 00:10:46.203 sys 0m0.684s 00:10:46.203 04:05:54 accel.accel_cdev_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:46.203 04:05:54 accel.accel_cdev_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:10:46.203 ************************************ 00:10:46.203 END TEST accel_cdev_decomp_full_mcore 00:10:46.203 ************************************ 00:10:46.203 04:05:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:46.203 04:05:54 accel -- accel/accel.sh@132 -- # run_test accel_cdev_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:46.203 04:05:54 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:10:46.203 04:05:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:46.203 04:05:54 
accel -- common/autotest_common.sh@10 -- # set +x 00:10:46.462 ************************************ 00:10:46.462 START TEST accel_cdev_decomp_mthread 00:10:46.462 ************************************ 00:10:46.462 04:05:55 accel.accel_cdev_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:46.462 04:05:55 accel.accel_cdev_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:10:46.462 04:05:55 accel.accel_cdev_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:10:46.462 04:05:55 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:46.462 04:05:55 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:46.462 04:05:55 accel.accel_cdev_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:46.462 04:05:55 accel.accel_cdev_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:46.462 04:05:55 accel.accel_cdev_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:10:46.462 04:05:55 accel.accel_cdev_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:46.462 04:05:55 accel.accel_cdev_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:46.462 04:05:55 accel.accel_cdev_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:46.462 04:05:55 accel.accel_cdev_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:46.462 04:05:55 accel.accel_cdev_decomp_mthread -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:10:46.462 04:05:55 accel.accel_cdev_decomp_mthread -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:10:46.462 04:05:55 
accel.accel_cdev_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:10:46.462 04:05:55 accel.accel_cdev_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:10:46.462 [2024-07-23 04:05:55.066005] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:10:46.462 [2024-07-23 04:05:55.066113] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2576507 ] 00:10:46.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:46.462 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:46.462 [same "qat_pci_device_allocate(): Reached maximum number of QAT devices" / "EAL: Requested device ... cannot be used" pair repeated for the remaining 31 QAT virtual functions, 0000:3d:01.1 through 0000:3f:02.7] 00:10:46.722 [2024-07-23 04:05:55.291357] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.010 [2024-07-23 04:05:55.570321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.388 [2024-07-23 04:05:56.957540] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:10:48.388 [2024-07-23 04:05:56.960638] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000016140 PMD being used: compress_qat 00:10:48.388 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:48.388 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:48.388 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:48.388 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:48.388 04:05:56 
accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:48.388 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:48.388 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:48.388 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:48.388 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:48.388 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:48.388 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:48.388 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:48.388 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:10:48.388 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:48.388 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:48.388 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:48.388 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:48.388 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:48.388 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:48.388 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:48.388 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:48.388 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:48.388 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:48.388 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:48.388 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:10:48.388 [2024-07-23 04:05:56.971329] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 
0x60e000016220 PMD being used: compress_qat 00:10:48.388 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:48.388 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:48.388 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:48.388 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:48.388 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:48.388 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:48.388 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:48.388 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:48.388 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:48.388 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:48.388 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:48.388 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:48.388 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:10:48.388 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:48.388 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:10:48.388 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:48.388 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:48.388 [2024-07-23 04:05:56.975616] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000016300 PMD being used: compress_qat 00:10:48.388 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:48.388 04:05:56 accel.accel_cdev_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:10:48.388 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:48.388 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:48.389 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:10:48.389 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:48.389 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:48.389 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:48.389 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:10:48.389 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:48.389 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:48.389 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:48.389 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:10:48.389 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:48.389 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:48.389 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:48.389 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:10:48.389 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:48.389 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:48.389 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:48.389 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:10:48.389 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:48.389 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:48.389 04:05:56 
accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:48.389 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:48.389 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:48.389 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:48.389 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:48.389 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:48.389 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:48.389 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:48.389 04:05:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:50.294 04:05:58 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:50.294 04:05:58 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:50.294 04:05:58 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:50.294 04:05:58 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:50.294 04:05:58 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:50.294 04:05:58 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:50.294 04:05:58 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:50.294 04:05:58 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:50.294 04:05:58 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:50.294 04:05:58 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:50.294 04:05:58 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:50.294 04:05:58 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:50.294 04:05:58 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:50.294 04:05:58 
accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:50.294 04:05:58 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:50.294 04:05:58 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:50.294 04:05:58 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:50.294 04:05:58 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:50.294 04:05:58 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:50.294 04:05:58 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:50.294 04:05:58 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:50.294 04:05:58 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:50.294 04:05:58 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:50.294 04:05:58 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:50.294 04:05:58 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:50.294 04:05:58 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:50.294 04:05:58 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:50.294 04:05:58 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:50.294 04:05:58 accel.accel_cdev_decomp_mthread -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:10:50.294 04:05:58 accel.accel_cdev_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:50.294 04:05:58 accel.accel_cdev_decomp_mthread -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:10:50.294 00:10:50.294 real 0m3.719s 00:10:50.294 user 0m3.043s 00:10:50.294 sys 0m0.676s 00:10:50.294 04:05:58 accel.accel_cdev_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:50.294 04:05:58 accel.accel_cdev_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:10:50.294 
************************************
00:10:50.294 END TEST accel_cdev_decomp_mthread
00:10:50.294 ************************************
00:10:50.294 04:05:58 accel -- common/autotest_common.sh@1142 -- # return 0
00:10:50.294 04:05:58 accel -- accel/accel.sh@133 -- # run_test accel_cdev_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:10:50.294 04:05:58 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:10:50.294 04:05:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:10:50.294 04:05:58 accel -- common/autotest_common.sh@10 -- # set +x
00:10:50.294 ************************************
00:10:50.294 START TEST accel_cdev_decomp_full_mthread
00:10:50.294 ************************************
00:10:50.294 04:05:58 accel.accel_cdev_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:10:50.294 04:05:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc
00:10:50.294 04:05:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module
00:10:50.294 04:05:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:10:50.294 04:05:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:10:50.294 04:05:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:10:50.294 04:05:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:10:50.294 04:05:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config
00:10:50.294
04:05:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=()
00:10:50.294 04:05:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:10:50.294 04:05:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:50.294 04:05:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:50.294 04:05:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n 1 ]]
00:10:50.294 04:05:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}')
00:10:50.294 04:05:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=,
00:10:50.294 04:05:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r .
00:10:50.295 [2024-07-23 04:05:58.870974] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
00:10:50.295 [2024-07-23 04:05:58.871080] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2577076 ]
00:10:50.295 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:50.295 EAL: Requested device 0000:3d:01.0 cannot be used
00:10:50.295 EAL: Requested device 0000:3d:01.1 cannot be used
00:10:50.295 EAL: Requested device 0000:3d:01.2 cannot be used
00:10:50.295 EAL: Requested device 0000:3d:01.3 cannot be used
00:10:50.295 EAL: Requested device 0000:3d:01.4 cannot be used
00:10:50.295 EAL: Requested device 0000:3d:01.5 cannot be used
00:10:50.295 EAL: Requested device 0000:3d:01.6 cannot be used
00:10:50.295 EAL: Requested device 0000:3d:01.7 cannot be used
00:10:50.295 EAL: Requested device 0000:3d:02.0 cannot be used
00:10:50.295 EAL: Requested device 0000:3d:02.1 cannot be used
00:10:50.295 EAL: Requested device 0000:3d:02.2 cannot be used
00:10:50.295 EAL: Requested device 0000:3d:02.3 cannot be used
00:10:50.295 EAL: Requested device 0000:3d:02.4 cannot be used
00:10:50.295 EAL: Requested device 0000:3d:02.5 cannot be used
00:10:50.295 EAL: Requested device 0000:3d:02.6 cannot be used
00:10:50.295 EAL: Requested device 0000:3d:02.7 cannot be used
00:10:50.295 EAL: Requested device 0000:3f:01.0 cannot be used
00:10:50.295 EAL: Requested device 0000:3f:01.1 cannot be used
00:10:50.295 EAL: Requested device 0000:3f:01.2 cannot be used
00:10:50.295 EAL: Requested device 0000:3f:01.3 cannot be used
00:10:50.295 EAL: Requested device 0000:3f:01.4 cannot be used
00:10:50.295 EAL: Requested device 0000:3f:01.5 cannot be used
00:10:50.295 EAL: Requested device 0000:3f:01.6 cannot be used
00:10:50.295 EAL: Requested device 0000:3f:01.7 cannot be used
00:10:50.295 EAL: Requested device 0000:3f:02.0 cannot be used
00:10:50.295 EAL: Requested device 0000:3f:02.1 cannot be used
00:10:50.295 EAL: Requested device 0000:3f:02.2 cannot be used
00:10:50.295 EAL: Requested device 0000:3f:02.3 cannot be used
00:10:50.295 EAL: Requested device 0000:3f:02.4 cannot be used
00:10:50.295 EAL: Requested device 0000:3f:02.5 cannot be used
00:10:50.295 EAL: Requested device 0000:3f:02.6 cannot be used
00:10:50.295 EAL: Requested device 0000:3f:02.7 cannot be used
00:10:50.554 [2024-07-23 04:05:59.094289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:50.813 [2024-07-23 04:05:59.382982] reactor.c: 941:reactor_run:
*NOTICE*: Reactor started on core 0
00:10:52.190 [2024-07-23 04:06:00.788471] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD
00:10:52.190 [2024-07-23 04:06:00.791518] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000016140 PMD being used: compress_qat
00:10:52.190 04:06:00 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=
00:10:52.190 04:06:00 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:10:52.190 04:06:00 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:10:52.190 04:06:00 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:10:52.190 04:06:00 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1
00:10:52.190 04:06:00 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress
00:10:52.190 [2024-07-23 04:06:00.801905] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000016220 PMD being used: compress_qat
00:10:52.190 04:06:00 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress
00:10:52.190 04:06:00 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes'
00:10:52.190 04:06:00 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=dpdk_compressdev
00:10:52.190 04:06:00 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev
00:10:52.190 04:06:00 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib
00:10:52.190 04:06:00 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=32
00:10:52.190 04:06:00 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=32
00:10:52.190 04:06:00 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=2
00:10:52.190 04:06:00 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds'
00:10:52.190 [2024-07-23 04:06:00.811243] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000016300 PMD being used: compress_qat
00:10:52.190 04:06:00 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes
00:10:52.190 04:06:00 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=
00:10:52.190 04:06:00 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=
00:10:54.094 04:06:02 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=
00:10:54.094 04:06:02 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:10:54.094 04:06:02 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:10:54.094 04:06:02 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:10:54.094 04:06:02 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]]
00:10:54.094 04:06:02 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:10:54.094 04:06:02 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]]
00:10:54.094
00:10:54.094 real	0m3.761s
00:10:54.094 user	0m3.085s
00:10:54.094 sys	0m0.675s
00:10:54.094 04:06:02 accel.accel_cdev_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable
00:10:54.094 04:06:02 accel.accel_cdev_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x
00:10:54.094 ************************************
00:10:54.094 END TEST accel_cdev_decomp_full_mthread
00:10:54.094 ************************************
00:10:54.094 04:06:02 accel -- common/autotest_common.sh@1142 -- # return 0
00:10:54.094 04:06:02 accel -- accel/accel.sh@134 -- # unset COMPRESSDEV
00:10:54.094 04:06:02 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62
00:10:54.094 04:06:02 accel -- accel/accel.sh@137 -- # build_accel_config
00:10:54.094 04:06:02 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:10:54.094 04:06:02 accel -- accel/accel.sh@31 -- # accel_json_cfg=()
00:10:54.094 04:06:02 accel -- 
common/autotest_common.sh@1105 -- # xtrace_disable
00:10:54.094 04:06:02 accel -- common/autotest_common.sh@10 -- # set +x
00:10:54.094 04:06:02 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:10:54.094 04:06:02 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:54.094 04:06:02 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:54.094 04:06:02 accel -- accel/accel.sh@36 -- # [[ -n '' ]]
00:10:54.094 04:06:02 accel -- accel/accel.sh@40 -- # local IFS=,
00:10:54.094 04:06:02 accel -- accel/accel.sh@41 -- # jq -r .
00:10:54.094 ************************************
00:10:54.094 START TEST accel_dif_functional_tests
00:10:54.094 ************************************
00:10:54.094 04:06:02 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62
00:10:54.094 [2024-07-23 04:06:02.753737] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
00:10:54.094 [2024-07-23 04:06:02.753843] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2577987 ]
00:10:54.354 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:54.354 EAL: Requested device 0000:3d:01.0 cannot be used
00:10:54.354 [2024-07-23 04:06:02.976328] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:10:54.613 [2024-07-23 04:06:03.267711] reactor.c: 941:reactor_run:
*NOTICE*: Reactor started on core 1
00:10:54.613 [2024-07-23 04:06:03.267776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:10:54.613 [2024-07-23 04:06:03.267781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:10:55.180
00:10:55.180 CUnit - A unit testing framework for C - Version 2.1-3
00:10:55.180 http://cunit.sourceforge.net/
00:10:55.180
00:10:55.180 Suite: accel_dif
00:10:55.180   Test: verify: DIF generated, GUARD check ...passed
00:10:55.180   Test: verify: DIF generated, APPTAG check ...passed
00:10:55.180   Test: verify: DIF generated, REFTAG check ...passed
00:10:55.180   Test: verify: DIF not generated, GUARD check ...[2024-07-23 04:06:03.763164] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:10:55.180 passed
00:10:55.180   Test: verify: DIF not generated, APPTAG check ...[2024-07-23 04:06:03.763269] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:10:55.180 passed
00:10:55.180   Test: verify: DIF not generated, REFTAG check ...[2024-07-23 04:06:03.763322] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:10:55.180 passed
00:10:55.180   Test: verify: APPTAG correct, APPTAG check ...passed
00:10:55.180   Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-23 04:06:03.763430] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14
00:10:55.180 passed
00:10:55.180   Test: verify: APPTAG incorrect, no APPTAG check ...passed
00:10:55.180   Test: verify: REFTAG incorrect, REFTAG ignore ...passed
00:10:55.180   Test: verify: REFTAG_INIT correct, REFTAG check ...passed
00:10:55.180   Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-23 04:06:03.763652] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10
00:10:55.180 passed
00:10:55.180   Test: verify copy: DIF generated, GUARD check ...passed
00:10:55.180   Test: verify copy: DIF generated, APPTAG check ...passed
00:10:55.180   Test: verify copy: DIF generated, REFTAG check ...passed
00:10:55.180   Test: verify copy: DIF not generated, GUARD check ...[2024-07-23 04:06:03.763916] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:10:55.180 passed
00:10:55.180   Test: verify copy: DIF not generated, APPTAG check ...[2024-07-23 04:06:03.763979] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:10:55.180 passed
00:10:55.180   Test: verify copy: DIF not generated, REFTAG check ...[2024-07-23 04:06:03.764046] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:10:55.180 passed
00:10:55.180   Test: generate copy: DIF generated, GUARD check ...passed
00:10:55.180   Test: generate copy: DIF generated, APTTAG check ...passed
00:10:55.180   Test: generate copy: DIF generated, REFTAG check ...passed
00:10:55.180   Test: generate copy: DIF generated, no GUARD check flag set ...passed
00:10:55.180   Test: generate copy: DIF generated, no APPTAG check flag set ...passed
00:10:55.180   Test: generate copy: DIF generated, no REFTAG check flag set ...passed
00:10:55.180   Test: generate copy: iovecs-len validate ...[2024-07-23 04:06:03.764470] dif.c:1225:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
00:10:55.180 passed
00:10:55.180   Test: generate copy: buffer alignment validate ...passed
00:10:55.180
00:10:55.180 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:10:55.180               suites      1      1    n/a      0        0
00:10:55.180                tests     26     26     26      0        0
00:10:55.180              asserts    115    115    115      0      n/a
00:10:55.180
00:10:55.180 Elapsed time =    0.005 seconds
00:10:57.082
00:10:57.082 real	0m2.876s
00:10:57.082 user	0m5.852s
00:10:57.082 sys	0m0.362s
00:10:57.082 04:06:05 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable
00:10:57.082 04:06:05 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x
00:10:57.082 ************************************
00:10:57.082 END TEST accel_dif_functional_tests
00:10:57.082 ************************************
00:10:57.082 04:06:05 accel -- common/autotest_common.sh@1142 -- # return 0
00:10:57.082
00:10:57.082 real	1m52.395s
00:10:57.082 user	2m10.615s
00:10:57.082 sys	0m15.800s
00:10:57.082 04:06:05 accel -- common/autotest_common.sh@1124 -- # xtrace_disable
00:10:57.082 04:06:05 accel -- common/autotest_common.sh@10 -- # set +x
00:10:57.082 ************************************
00:10:57.082 END TEST accel
00:10:57.082 ************************************
00:10:57.082 04:06:05 -- common/autotest_common.sh@1142 -- # return 0
00:10:57.082 04:06:05 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel_rpc.sh
00:10:57.082 04:06:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:10:57.082 04:06:05 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:10:57.082 04:06:05 -- common/autotest_common.sh@10 -- # set +x
00:10:57.082 ************************************
00:10:57.082 START TEST accel_rpc
00:10:57.082 ************************************
00:10:57.083 04:06:05 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel_rpc.sh
00:10:57.083 * Looking for test storage...
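The dif.c compare failures in the CUnit output above all share one message shape (field name, LBA, expected and actual tag values). A hedged helper for tabulating them from a saved log (the message format is copied from this run; other SPDK versions may word it differently):

```shell
#!/usr/bin/env bash
# Extract "field lba expected actual" rows from SPDK dif.c compare errors.
# The sed pattern mirrors the error lines in the log above.
parse_dif_errors() {
  sed -n 's/.*Failed to compare \(.*\): LBA=\([^,]*\), Expected=\([^,]*\), Actual=\(.*\)$/\1 \2 \3 \4/p'
}

parse_dif_errors <<'EOF'
dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
EOF
# prints:
# Guard 10 5a5a 7867
# Ref Tag 10 a 5a5a5a5a
```

Note that these particular errors are expected here: the "DIF not generated" tests inject mismatched Guard/App Tag/Ref Tag fields on purpose and pass when the mismatch is detected.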
00:10:57.083 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel
00:10:57.083 04:06:05 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:10:57.083 04:06:05 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2578968
00:10:57.083 04:06:05 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc
00:10:57.083 04:06:05 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 2578968
00:10:57.083 04:06:05 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 2578968 ']'
00:10:57.083 04:06:05 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:57.083 04:06:05 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:10:57.083 04:06:05 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:57.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:57.083 04:06:05 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:10:57.083 04:06:05 accel_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:57.083 [2024-07-23 04:06:05.840066] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
00:10:57.083 [2024-07-23 04:06:05.840167] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2578968 ]
00:10:57.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:57.342 EAL: Requested device 0000:3d:01.0 cannot be used
00:10:57.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:57.342 EAL: Requested device 0000:3d:01.1 cannot be used
00:10:57.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:57.342 EAL: Requested device 0000:3d:01.2 cannot be used
00:10:57.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:57.342 EAL: Requested device 0000:3d:01.3 cannot be used
00:10:57.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:57.342 EAL: Requested device 0000:3d:01.4 cannot be used
00:10:57.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:57.342 EAL: Requested device 0000:3d:01.5 cannot be used
00:10:57.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:57.342 EAL: Requested device 0000:3d:01.6 cannot be used
00:10:57.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:57.342 EAL: Requested device 0000:3d:01.7 cannot be used
00:10:57.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:57.342 EAL: Requested device 0000:3d:02.0 cannot be used
00:10:57.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:57.342 EAL: Requested device 0000:3d:02.1 cannot be used
00:10:57.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:57.342 EAL: Requested device 0000:3d:02.2 cannot be used
00:10:57.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:57.342 EAL: Requested device 0000:3d:02.3 cannot be used
00:10:57.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:57.342 EAL: Requested device 0000:3d:02.4 cannot be used
00:10:57.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:57.342 EAL: Requested device 0000:3d:02.5 cannot be used
00:10:57.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:57.342 EAL: Requested device 0000:3d:02.6 cannot be used
00:10:57.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:57.342 EAL: Requested device 0000:3d:02.7 cannot be used
00:10:57.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:57.342 EAL: Requested device 0000:3f:01.0 cannot be used
00:10:57.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:57.342 EAL: Requested device 0000:3f:01.1 cannot be used
00:10:57.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:57.342 EAL: Requested device 0000:3f:01.2 cannot be used
00:10:57.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:57.342 EAL: Requested device 0000:3f:01.3 cannot be used
00:10:57.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:57.342 EAL: Requested device 0000:3f:01.4 cannot be used
00:10:57.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:57.342 EAL: Requested device 0000:3f:01.5 cannot be used
00:10:57.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:57.342 EAL: Requested device 0000:3f:01.6 cannot be used
00:10:57.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:57.342 EAL: Requested device 0000:3f:01.7 cannot be used
00:10:57.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:57.342 EAL: Requested device 0000:3f:02.0 cannot be used
00:10:57.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:57.342 EAL: Requested device 0000:3f:02.1 cannot be used
00:10:57.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:57.342 EAL: Requested device 0000:3f:02.2 cannot be used
00:10:57.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:57.342 EAL: Requested device 0000:3f:02.3 cannot be used
00:10:57.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:57.342 EAL: Requested device 0000:3f:02.4 cannot be used
00:10:57.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:57.342 EAL: Requested device 0000:3f:02.5 cannot be used
00:10:57.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:57.342 EAL: Requested device 0000:3f:02.6 cannot be used
00:10:57.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:57.342 EAL: Requested device 0000:3f:02.7 cannot be used
00:10:57.342 [2024-07-23 04:06:06.037528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:57.600 [2024-07-23 04:06:06.313682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:10:58.168 04:06:06 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:10:58.168 04:06:06 accel_rpc -- common/autotest_common.sh@862 -- # return 0
00:10:58.168 04:06:06 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]]
00:10:58.168 04:06:06 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]]
00:10:58.168 04:06:06 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]]
00:10:58.168 04:06:06 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]]
00:10:58.168 04:06:06 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite
00:10:58.168 04:06:06 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:10:58.168 04:06:06 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable
00:10:58.168 04:06:06 accel_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:58.168 ************************************
00:10:58.168 START TEST accel_assign_opcode
************************************
00:10:58.168 04:06:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite
00:10:58.168 04:06:06 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect
00:10:58.168 04:06:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:58.168 04:06:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x
00:10:58.168 [2024-07-23 04:06:06.727583] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect
00:10:58.168 04:06:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:58.168 04:06:06 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software
00:10:58.168 04:06:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:58.168 04:06:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x
00:10:58.168 [2024-07-23 04:06:06.735552] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software
00:10:58.168 04:06:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:58.168 04:06:06 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init
00:10:58.168 04:06:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:58.168 04:06:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x
00:10:59.545 04:06:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:59.545 04:06:07 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments
00:10:59.545 04:06:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:59.545 04:06:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x
00:10:59.545 04:06:07 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy
00:10:59.545 04:06:07 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software
00:10:59.545 04:06:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:59.545 software
00:10:59.545
00:10:59.545 real 0m1.240s
00:10:59.545 user 0m0.039s
00:10:59.545 sys 0m0.014s
00:10:59.545 04:06:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable
00:10:59.545 04:06:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x
00:10:59.545 ************************************
00:10:59.545 END TEST accel_assign_opcode
00:10:59.545 ************************************
00:10:59.545 04:06:08 accel_rpc -- common/autotest_common.sh@1142 -- # return 0
00:10:59.545 04:06:08 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 2578968
00:10:59.545 04:06:08 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 2578968 ']'
00:10:59.545 04:06:08 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 2578968
00:10:59.545 04:06:08 accel_rpc -- common/autotest_common.sh@953 -- # uname
00:10:59.545 04:06:08 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:10:59.545 04:06:08 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2578968
00:10:59.545 04:06:08 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:10:59.545 04:06:08 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:10:59.545 04:06:08 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2578968'
00:10:59.545 killing process with pid 2578968
00:10:59.545 04:06:08 accel_rpc -- common/autotest_common.sh@967 -- # kill 2578968
00:10:59.545 04:06:08 accel_rpc -- common/autotest_common.sh@972 -- # wait 2578968
00:11:02.831
00:11:02.831 real 0m5.709s
00:11:02.831 user 0m5.593s
00:11:02.831 sys 0m0.688s
00:11:02.831 04:06:11 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:11:02.831 04:06:11 accel_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:02.831 ************************************
00:11:02.831 END TEST accel_rpc
00:11:02.831 ************************************
00:11:02.831 04:06:11 -- common/autotest_common.sh@1142 -- # return 0
00:11:02.831 04:06:11 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/cmdline.sh
00:11:02.831 04:06:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:11:02.831 04:06:11 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:02.831 04:06:11 -- common/autotest_common.sh@10 -- # set +x
00:11:02.831 ************************************
00:11:02.831 START TEST app_cmdline
00:11:02.831 ************************************
00:11:02.831 04:06:11 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/cmdline.sh
00:11:02.831 * Looking for test storage...
00:11:02.831 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app
00:11:02.831 04:06:11 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:11:02.831 04:06:11 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2580015
00:11:02.831 04:06:11 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2580015
00:11:02.831 04:06:11 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:11:02.831 04:06:11 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 2580015 ']'
00:11:02.831 04:06:11 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:02.831 04:06:11 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100
00:11:02.831 04:06:11 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:02.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:02.831 04:06:11 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable
00:11:02.831 04:06:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:11:03.090 [2024-07-23 04:06:11.680541] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
00:11:03.090 [2024-07-23 04:06:11.680665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2580015 ]
00:11:03.090 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:03.090 EAL: Requested device 0000:3d:01.0 cannot be used
00:11:03.090 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:03.090 EAL: Requested device 0000:3d:01.1 cannot be used
00:11:03.090 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:03.090 EAL: Requested device 0000:3d:01.2 cannot be used
00:11:03.090 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:03.090 EAL: Requested device 0000:3d:01.3 cannot be used
00:11:03.090 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:03.090 EAL: Requested device 0000:3d:01.4 cannot be used
00:11:03.090 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:03.090 EAL: Requested device 0000:3d:01.5 cannot be used
00:11:03.090 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:03.090 EAL: Requested device 0000:3d:01.6 cannot be used
00:11:03.090 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:03.090 EAL: Requested device 0000:3d:01.7 cannot be used
00:11:03.090 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:03.090 EAL: Requested device 0000:3d:02.0 cannot be used
00:11:03.090 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:03.090 EAL: Requested device 0000:3d:02.1 cannot be used
00:11:03.090 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:03.090 EAL: Requested device 0000:3d:02.2 cannot be used
00:11:03.090 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:03.090 EAL: Requested device 0000:3d:02.3 cannot be used
00:11:03.090 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:03.090 EAL: Requested device 0000:3d:02.4 cannot be used
00:11:03.090 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:03.090 EAL: Requested device 0000:3d:02.5 cannot be used
00:11:03.090 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:03.090 EAL: Requested device 0000:3d:02.6 cannot be used
00:11:03.090 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:03.090 EAL: Requested device 0000:3d:02.7 cannot be used
00:11:03.090 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:03.090 EAL: Requested device 0000:3f:01.0 cannot be used
00:11:03.090 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:03.090 EAL: Requested device 0000:3f:01.1 cannot be used
00:11:03.090 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:03.090 EAL: Requested device 0000:3f:01.2 cannot be used
00:11:03.090 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:03.090 EAL: Requested device 0000:3f:01.3 cannot be used
00:11:03.090 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:03.090 EAL: Requested device 0000:3f:01.4 cannot be used
00:11:03.090 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:03.090 EAL: Requested device 0000:3f:01.5 cannot be used
00:11:03.090 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:03.090 EAL: Requested device 0000:3f:01.6 cannot be used
00:11:03.090 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:03.090 EAL: Requested device 0000:3f:01.7 cannot be used
00:11:03.090 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:03.090 EAL: Requested device 0000:3f:02.0 cannot be used
00:11:03.090 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:03.090 EAL: Requested device 0000:3f:02.1 cannot be used
00:11:03.090 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:03.090 EAL: Requested device 0000:3f:02.2 cannot be used
00:11:03.090 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:03.090 EAL: Requested device 0000:3f:02.3 cannot be used
00:11:03.090 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:03.090 EAL: Requested device 0000:3f:02.4 cannot be used
00:11:03.090 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:03.090 EAL: Requested device 0000:3f:02.5 cannot be used
00:11:03.090 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:03.090 EAL: Requested device 0000:3f:02.6 cannot be used
00:11:03.090 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:03.090 EAL: Requested device 0000:3f:02.7 cannot be used
00:11:03.349 [2024-07-23 04:06:11.907593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:03.608 [2024-07-23 04:06:12.185554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:11:04.984 04:06:13 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:11:04.984 04:06:13 app_cmdline -- common/autotest_common.sh@862 -- # return 0
00:11:04.984 04:06:13 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py spdk_get_version
00:11:04.984 {
00:11:04.984 "version": "SPDK v24.09-pre git sha1 f7b31b2b9",
00:11:04.984 "fields": {
00:11:04.984 "major": 24,
00:11:04.984 "minor": 9,
00:11:04.984 "patch": 0,
00:11:04.984 "suffix": "-pre",
00:11:04.984 "commit": "f7b31b2b9"
00:11:04.984 }
00:11:04.984 }
00:11:04.984 04:06:13 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:11:04.984 04:06:13 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:11:04.984 04:06:13 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:11:04.984 04:06:13 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:11:04.984 04:06:13 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:11:04.984 04:06:13 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
00:11:04.984 04:06:13 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:04.984 04:06:13 app_cmdline -- app/cmdline.sh@26 -- # sort
00:11:04.984 04:06:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:11:04.984 04:06:13 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:04.984 04:06:13 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:11:04.984 04:06:13 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:11:04.984 04:06:13 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:11:04.984 04:06:13 app_cmdline -- common/autotest_common.sh@648 -- # local es=0
00:11:04.984 04:06:13 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:11:04.984 04:06:13 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:11:04.984 04:06:13 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:11:04.984 04:06:13 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:11:04.984 04:06:13 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:11:04.984 04:06:13 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:11:04.984 04:06:13 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:11:04.984 04:06:13 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:11:04.984 04:06:13 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]]
00:11:04.984 04:06:13 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:11:05.245 request:
00:11:05.245 {
00:11:05.245 "method": "env_dpdk_get_mem_stats",
00:11:05.245 "req_id": 1
00:11:05.245 }
00:11:05.245 Got JSON-RPC error response
00:11:05.245 response:
00:11:05.245 {
00:11:05.245 "code": -32601,
00:11:05.245 "message": "Method not found"
00:11:05.245 }
00:11:05.245 04:06:13 app_cmdline -- common/autotest_common.sh@651 -- # es=1
00:11:05.245 04:06:13 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:11:05.245 04:06:13 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:11:05.245 04:06:13 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:11:05.245 04:06:13 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2580015
00:11:05.245 04:06:13 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 2580015 ']'
00:11:05.245 04:06:13 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 2580015
00:11:05.245 04:06:13 app_cmdline -- common/autotest_common.sh@953 -- # uname
00:11:05.245 04:06:13 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:11:05.245 04:06:13 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2580015
00:11:05.245 04:06:13 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:11:05.245 04:06:13 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:11:05.245 04:06:13 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2580015'
00:11:05.245 killing process with pid 2580015
00:11:05.245 04:06:13 app_cmdline -- common/autotest_common.sh@967 -- # kill 2580015
00:11:05.245 04:06:13 app_cmdline -- common/autotest_common.sh@972 -- # wait 2580015
00:11:08.594
00:11:08.594 real 0m5.838s
00:11:08.594 user 0m5.979s
00:11:08.594 sys 0m0.770s
00:11:08.594 04:06:17 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable
00:11:08.594 04:06:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:11:08.594 ************************************
00:11:08.594 END TEST app_cmdline
00:11:08.594 ************************************
00:11:08.594 04:06:17 -- common/autotest_common.sh@1142 -- # return 0
00:11:08.594 04:06:17 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/version.sh
00:11:08.594 04:06:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:11:08.594 04:06:17 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:08.594 04:06:17 -- common/autotest_common.sh@10 -- # set +x
00:11:08.594 ************************************
00:11:08.594 START TEST version
00:11:08.594 ************************************
00:11:08.594 04:06:17 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/version.sh
00:11:08.853 * Looking for test storage...
00:11:08.853 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app
00:11:08.853 04:06:17 version -- app/version.sh@17 -- # get_header_version major
00:11:08.853 04:06:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h
00:11:08.853 04:06:17 version -- app/version.sh@14 -- # cut -f2
00:11:08.853 04:06:17 version -- app/version.sh@14 -- # tr -d '"'
00:11:08.853 04:06:17 version -- app/version.sh@17 -- # major=24
00:11:08.853 04:06:17 version -- app/version.sh@18 -- # get_header_version minor
00:11:08.853 04:06:17 version -- app/version.sh@14 -- # cut -f2
00:11:08.853 04:06:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h
00:11:08.853 04:06:17 version -- app/version.sh@14 -- # tr -d '"'
00:11:08.853 04:06:17 version -- app/version.sh@18 -- # minor=9
00:11:08.853 04:06:17 version -- app/version.sh@19 -- # get_header_version patch
00:11:08.853 04:06:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h
00:11:08.853 04:06:17 version -- app/version.sh@14 -- # cut -f2
00:11:08.853 04:06:17 version -- app/version.sh@14 -- # tr -d '"'
00:11:08.853 04:06:17 version -- app/version.sh@19 -- # patch=0
00:11:08.853 04:06:17 version -- app/version.sh@20 -- # get_header_version suffix
00:11:08.853 04:06:17 version -- app/version.sh@14 -- # cut -f2
00:11:08.853 04:06:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h
00:11:08.853 04:06:17 version -- app/version.sh@14 -- # tr -d '"'
00:11:08.853 04:06:17 version -- app/version.sh@20 -- # suffix=-pre
00:11:08.853 04:06:17 version -- app/version.sh@22 -- # version=24.9
00:11:08.853 04:06:17 version -- app/version.sh@25 -- # (( patch != 0 ))
00:11:08.853 04:06:17 version -- app/version.sh@28 -- # version=24.9rc0
00:11:08.853 04:06:17 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python
00:11:08.853 04:06:17 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)'
00:11:08.853 04:06:17 version -- app/version.sh@30 -- # py_version=24.9rc0
00:11:08.853 04:06:17 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]]
00:11:08.853
00:11:08.853 real 0m0.189s
00:11:08.853 user 0m0.103s
00:11:08.853 sys 0m0.127s
00:11:08.853 04:06:17 version -- common/autotest_common.sh@1124 -- # xtrace_disable
00:11:08.853 04:06:17 version -- common/autotest_common.sh@10 -- # set +x
00:11:08.853 ************************************
00:11:08.853 END TEST version
00:11:08.853 ************************************
00:11:08.853 04:06:17 -- common/autotest_common.sh@1142 -- # return 0
00:11:08.853 04:06:17 -- spdk/autotest.sh@188 -- # '[' 1 -eq 1 ']'
00:11:08.853 04:06:17 -- spdk/autotest.sh@189 -- # run_test blockdev_general /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh
00:11:08.853 04:06:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:11:08.853 04:06:17 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:08.853 04:06:17 -- common/autotest_common.sh@10 -- # set +x
00:11:08.853 ************************************
00:11:08.853 START TEST blockdev_general
00:11:08.853 ************************************
00:11:08.853 04:06:17 blockdev_general -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh
00:11:09.112 * Looking for test storage...
00:11:09.112 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev
00:11:09.112 04:06:17 blockdev_general -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh
00:11:09.112 04:06:17 blockdev_general -- bdev/nbd_common.sh@6 -- # set -e
00:11:09.112 04:06:17 blockdev_general -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd
00:11:09.112 04:06:17 blockdev_general -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json
00:11:09.112 04:06:17 blockdev_general -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json
00:11:09.112 04:06:17 blockdev_general -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json
00:11:09.112 04:06:17 blockdev_general -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30
00:11:09.112 04:06:17 blockdev_general -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30
00:11:09.112 04:06:17 blockdev_general -- bdev/blockdev.sh@20 -- # :
00:11:09.112 04:06:17 blockdev_general -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0
00:11:09.112 04:06:17 blockdev_general -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1
00:11:09.112 04:06:17 blockdev_general -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5
00:11:09.112 04:06:17 blockdev_general -- bdev/blockdev.sh@673 -- # uname -s
00:11:09.112 04:06:17 blockdev_general -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']'
00:11:09.112 04:06:17 blockdev_general -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0
00:11:09.112 04:06:17 blockdev_general -- bdev/blockdev.sh@681 -- # test_type=bdev
00:11:09.112 04:06:17 blockdev_general -- bdev/blockdev.sh@682 -- # crypto_device=
00:11:09.112 04:06:17 blockdev_general -- bdev/blockdev.sh@683 -- # dek=
00:11:09.112 04:06:17 blockdev_general -- bdev/blockdev.sh@684 -- # env_ctx=
00:11:09.112 04:06:17 blockdev_general -- bdev/blockdev.sh@685 -- # wait_for_rpc=
00:11:09.112 04:06:17 blockdev_general -- bdev/blockdev.sh@686 -- # '[' -n '' ']'
00:11:09.112 04:06:17 blockdev_general -- bdev/blockdev.sh@689 -- # [[ bdev == bdev ]]
00:11:09.112 04:06:17 blockdev_general -- bdev/blockdev.sh@690 -- # wait_for_rpc=--wait-for-rpc
00:11:09.112 04:06:17 blockdev_general -- bdev/blockdev.sh@692 -- # start_spdk_tgt
00:11:09.112 04:06:17 blockdev_general -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=2581184
00:11:09.112 04:06:17 blockdev_general -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:11:09.112 04:06:17 blockdev_general -- bdev/blockdev.sh@49 -- # waitforlisten 2581184
00:11:09.112 04:06:17 blockdev_general -- common/autotest_common.sh@829 -- # '[' -z 2581184 ']'
00:11:09.112 04:06:17 blockdev_general -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:09.112 04:06:17 blockdev_general -- common/autotest_common.sh@834 -- # local max_retries=100
00:11:09.112 04:06:17 blockdev_general -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:09.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:09.112 04:06:17 blockdev_general -- common/autotest_common.sh@838 -- # xtrace_disable
00:11:09.112 04:06:17 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:11:09.112 04:06:17 blockdev_general -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc
00:11:09.112 [2024-07-23 04:06:17.852316] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
00:11:09.112 [2024-07-23 04:06:17.852441] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2581184 ]
00:11:09.372 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:09.372 EAL: Requested device 0000:3d:01.0 cannot be used
00:11:09.372 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:09.372 EAL: Requested device 0000:3d:01.1 cannot be used
00:11:09.372 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:09.372 EAL: Requested device 0000:3d:01.2 cannot be used
00:11:09.372 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:09.372 EAL: Requested device 0000:3d:01.3 cannot be used
00:11:09.372 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:09.372 EAL: Requested device 0000:3d:01.4 cannot be used
00:11:09.372 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:09.372 EAL: Requested device 0000:3d:01.5 cannot be used
00:11:09.372 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:09.372 EAL: Requested device 0000:3d:01.6 cannot be used
00:11:09.372 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:09.372 EAL: Requested device 0000:3d:01.7 cannot be used
00:11:09.372 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:09.372 EAL: Requested device 0000:3d:02.0 cannot be used
00:11:09.372 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:09.372 EAL: Requested device 0000:3d:02.1 cannot be used
00:11:09.372 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:09.372 EAL: Requested device 0000:3d:02.2 cannot be used
00:11:09.372 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:09.372 EAL: Requested device 0000:3d:02.3 cannot be used
00:11:09.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:09.372 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:09.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:09.372 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:09.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:09.372 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:09.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:09.372 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:09.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:09.372 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:09.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:09.372 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:09.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:09.372 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:09.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:09.372 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:09.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:09.372 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:09.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:09.372 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:09.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:09.372 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:09.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:09.372 EAL: Requested device 0000:3f:01.7 cannot be used 00:11:09.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:09.372 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:09.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:09.372 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:09.372 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:09.372 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:09.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:09.372 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:09.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:09.372 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:09.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:09.372 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:09.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:09.372 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:09.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:09.372 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:09.372 [2024-07-23 04:06:18.076889] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.631 [2024-07-23 04:06:18.342854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.198 04:06:18 blockdev_general -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:10.198 04:06:18 blockdev_general -- common/autotest_common.sh@862 -- # return 0 00:11:10.198 04:06:18 blockdev_general -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:11:10.198 04:06:18 blockdev_general -- bdev/blockdev.sh@695 -- # setup_bdev_conf 00:11:10.198 04:06:18 blockdev_general -- bdev/blockdev.sh@53 -- # rpc_cmd 00:11:10.198 04:06:18 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.198 04:06:18 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:11.573 [2024-07-23 04:06:20.247789] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:11.573 [2024-07-23 04:06:20.247857] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:11.573 00:11:11.573 [2024-07-23 04:06:20.255763] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: Malloc2 00:11:11.573 [2024-07-23 04:06:20.255804] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:11.573 00:11:11.573 Malloc0 00:11:11.832 Malloc1 00:11:11.832 Malloc2 00:11:11.832 Malloc3 00:11:11.832 Malloc4 00:11:12.091 Malloc5 00:11:12.091 Malloc6 00:11:12.091 Malloc7 00:11:12.091 Malloc8 00:11:12.091 Malloc9 00:11:12.091 [2024-07-23 04:06:20.865291] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:12.091 [2024-07-23 04:06:20.865353] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.091 [2024-07-23 04:06:20.865382] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000045080 00:11:12.091 [2024-07-23 04:06:20.865401] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.091 [2024-07-23 04:06:20.868072] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.091 [2024-07-23 04:06:20.868106] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:12.091 TestPT 00:11:12.350 04:06:20 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.350 04:06:20 blockdev_general -- bdev/blockdev.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile bs=2048 count=5000 00:11:12.350 5000+0 records in 00:11:12.350 5000+0 records out 00:11:12.350 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0363803 s, 281 MB/s 00:11:12.350 04:06:20 blockdev_general -- bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile AIO0 2048 00:11:12.350 04:06:20 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.350 04:06:20 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:12.350 AIO0 00:11:12.350 04:06:20 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.350 
04:06:20 blockdev_general -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:11:12.350 04:06:20 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.350 04:06:20 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:12.350 04:06:20 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.350 04:06:20 blockdev_general -- bdev/blockdev.sh@739 -- # cat 00:11:12.350 04:06:20 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:11:12.350 04:06:20 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.350 04:06:20 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:12.350 04:06:21 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.350 04:06:21 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:11:12.350 04:06:21 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.350 04:06:21 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:12.350 04:06:21 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.350 04:06:21 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:11:12.350 04:06:21 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.350 04:06:21 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:12.350 04:06:21 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.350 04:06:21 blockdev_general -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:11:12.350 04:06:21 blockdev_general -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:11:12.350 04:06:21 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.350 04:06:21 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:12.350 04:06:21 blockdev_general -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 
00:11:12.610 04:06:21 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.610 04:06:21 blockdev_general -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:11:12.610 04:06:21 blockdev_general -- bdev/blockdev.sh@748 -- # jq -r .name 00:11:12.611 04:06:21 blockdev_general -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "a6617569-84d5-45bd-bd97-0f2cbb1fd7df"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "a6617569-84d5-45bd-bd97-0f2cbb1fd7df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "ff8c56ef-be1d-56bf-8d0c-3a3077549f3b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "ff8c56ef-be1d-56bf-8d0c-3a3077549f3b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": 
false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "442dcccd-9e1e-5ef1-86af-82bd1e0b4b0a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "442dcccd-9e1e-5ef1-86af-82bd1e0b4b0a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "a1c40ad0-c278-5297-8775-ffc59fffdfed"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a1c40ad0-c278-5297-8775-ffc59fffdfed",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' 
"get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "6b70bb0b-3eec-5294-9171-03958ab6d4f1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "6b70bb0b-3eec-5294-9171-03958ab6d4f1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "11a618e7-00ba-5905-bb2a-3b8dc27a3d88"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "11a618e7-00ba-5905-bb2a-3b8dc27a3d88",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' 
' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "ead308d9-10cd-565f-95bb-898c5663f021"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ead308d9-10cd-565f-95bb-898c5663f021",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "626b387c-4402-502e-8f0d-1b46fabe8854"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "626b387c-4402-502e-8f0d-1b46fabe8854",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "63ca1192-e801-527d-8cc1-e150f0225923"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "63ca1192-e801-527d-8cc1-e150f0225923",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "2ea1142e-fd93-5f4a-9599-d7c3aa53e89e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2ea1142e-fd93-5f4a-9599-d7c3aa53e89e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' 
"seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "514a7de2-9da9-5fee-8022-70240626f462"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "514a7de2-9da9-5fee-8022-70240626f462",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "b48d8bc4-fb76-56d9-87a9-5f46ba22eaaa"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "b48d8bc4-fb76-56d9-87a9-5f46ba22eaaa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": 
true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "10b9ee6c-517a-4142-a4e6-296369e49abb"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "10b9ee6c-517a-4142-a4e6-296369e49abb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "10b9ee6c-517a-4142-a4e6-296369e49abb",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "616ce074-9819-4024-b9b0-95bb909a8c0e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": 
"827ff825-4234-43ad-860d-1fddf3a09830",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "58e522e2-3ab0-42d9-b191-fc6e6c42a95e"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "58e522e2-3ab0-42d9-b191-fc6e6c42a95e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "58e522e2-3ab0-42d9-b191-fc6e6c42a95e",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "209b6700-155b-4a1d-ae1f-bdced187a5f4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "d12133f4-137b-402a-8ce9-8f557dc43832",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' 
"e886e37d-a8c2-4f10-ac11-ad07863b5133"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "e886e37d-a8c2-4f10-ac11-ad07863b5133",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "e886e37d-a8c2-4f10-ac11-ad07863b5133",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "37309667-c327-49e0-90c4-3373c3d4bc59",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "3adfb512-b8df-4e4a-b6f4-a372fba4fb13",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "47a5aef7-b19f-4cf3-b2d5-754ab9569a47"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "47a5aef7-b19f-4cf3-b2d5-754ab9569a47",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:11:12.611 04:06:21 blockdev_general -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:11:12.611 04:06:21 blockdev_general -- bdev/blockdev.sh@751 -- # hello_world_bdev=Malloc0 00:11:12.611 04:06:21 blockdev_general -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:11:12.611 04:06:21 blockdev_general -- bdev/blockdev.sh@753 -- # killprocess 2581184 00:11:12.611 04:06:21 blockdev_general -- common/autotest_common.sh@948 -- # '[' -z 2581184 ']' 00:11:12.611 04:06:21 blockdev_general -- common/autotest_common.sh@952 -- # kill -0 2581184 00:11:12.611 04:06:21 blockdev_general -- common/autotest_common.sh@953 -- # uname 00:11:12.611 04:06:21 blockdev_general -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:12.611 04:06:21 blockdev_general -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2581184 00:11:12.611 04:06:21 blockdev_general -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:12.611 04:06:21 blockdev_general -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:12.611 04:06:21 blockdev_general -- common/autotest_common.sh@966 -- # 
echo 'killing process with pid 2581184' 00:11:12.611 killing process with pid 2581184 00:11:12.611 04:06:21 blockdev_general -- common/autotest_common.sh@967 -- # kill 2581184 00:11:12.611 04:06:21 blockdev_general -- common/autotest_common.sh@972 -- # wait 2581184 00:11:17.881 04:06:25 blockdev_general -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:17.881 04:06:25 blockdev_general -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b Malloc0 '' 00:11:17.881 04:06:25 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:11:17.881 04:06:25 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:17.881 04:06:25 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:17.881 ************************************ 00:11:17.881 START TEST bdev_hello_world 00:11:17.881 ************************************ 00:11:17.881 04:06:26 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b Malloc0 '' 00:11:17.881 [2024-07-23 04:06:26.114779] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:11:17.881 [2024-07-23 04:06:26.114891] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2582641 ]
00:11:17.881 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:17.881 EAL: Requested device 0000:3d:01.0 cannot be used
00:11:17.881 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:17.881 EAL: Requested device 0000:3d:01.1 cannot be used
00:11:17.881 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:17.881 EAL: Requested device 0000:3d:01.2 cannot be used
00:11:17.881 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:17.881 EAL: Requested device 0000:3d:01.3 cannot be used
00:11:17.881 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:17.881 EAL: Requested device 0000:3d:01.4 cannot be used
00:11:17.881 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:17.881 EAL: Requested device 0000:3d:01.5 cannot be used
00:11:17.881 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:17.881 EAL: Requested device 0000:3d:01.6 cannot be used
00:11:17.881 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:17.881 EAL: Requested device 0000:3d:01.7 cannot be used
00:11:17.881 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:17.881 EAL: Requested device 0000:3d:02.0 cannot be used
00:11:17.881 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:17.881 EAL: Requested device 0000:3d:02.1 cannot be used
00:11:17.881 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:17.881 EAL: Requested device 0000:3d:02.2 cannot be used
00:11:17.881 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:17.881 EAL: Requested device 0000:3d:02.3 cannot be used
00:11:17.881 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:17.881 EAL: Requested device 0000:3d:02.4 cannot be used
00:11:17.881 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:17.881 EAL: Requested device 0000:3d:02.5 cannot be used
00:11:17.881 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:17.881 EAL: Requested device 0000:3d:02.6 cannot be used
00:11:17.881 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:17.881 EAL: Requested device 0000:3d:02.7 cannot be used
00:11:17.881 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:17.881 EAL: Requested device 0000:3f:01.0 cannot be used
00:11:17.881 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:17.881 EAL: Requested device 0000:3f:01.1 cannot be used
00:11:17.881 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:17.881 EAL: Requested device 0000:3f:01.2 cannot be used
00:11:17.881 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:17.881 EAL: Requested device 0000:3f:01.3 cannot be used
00:11:17.881 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:17.881 EAL: Requested device 0000:3f:01.4 cannot be used
00:11:17.881 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:17.881 EAL: Requested device 0000:3f:01.5 cannot be used
00:11:17.881 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:17.881 EAL: Requested device 0000:3f:01.6 cannot be used
00:11:17.881 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:17.881 EAL: Requested device 0000:3f:01.7 cannot be used
00:11:17.881 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:17.881 EAL: Requested device 0000:3f:02.0 cannot be used
00:11:17.881 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:17.881 EAL: Requested device 0000:3f:02.1 cannot be used
00:11:17.881 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:17.881 EAL: Requested device 0000:3f:02.2 cannot be used
00:11:17.881 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:17.881 EAL: Requested device 0000:3f:02.3 cannot be used
00:11:17.881 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:17.881 EAL: Requested device 0000:3f:02.4 cannot be used
00:11:17.881 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:17.881 EAL: Requested device 0000:3f:02.5 cannot be used
00:11:17.881 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:17.881 EAL: Requested device 0000:3f:02.6 cannot be used
00:11:17.881 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:17.881 EAL: Requested device 0000:3f:02.7 cannot be used
00:11:17.881 [2024-07-23 04:06:26.342068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:17.881 [2024-07-23 04:06:26.626553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:11:18.449 [2024-07-23 04:06:27.222635] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:11:18.449 [2024-07-23 04:06:27.222714] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:11:18.449 [2024-07-23 04:06:27.222738] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:11:18.449 [2024-07-23 04:06:27.230612] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:11:18.449 [2024-07-23 04:06:27.230657] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:11:18.708 [2024-07-23 04:06:27.238612] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:11:18.708 [2024-07-23 04:06:27.238656] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:11:18.984 [2024-07-23 04:06:27.495772] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:11:18.984 [2024-07-23 04:06:27.495837] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:18.984 [2024-07-23 04:06:27.495859] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042680
00:11:18.984 [2024-07-23 04:06:27.495875] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:18.984 [2024-07-23 04:06:27.498594] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:18.984 [2024-07-23 04:06:27.498631] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:11:19.243 [2024-07-23 04:06:27.950268] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application
00:11:19.243 [2024-07-23 04:06:27.950356] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0
00:11:19.243 [2024-07-23 04:06:27.950435] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel
00:11:19.243 [2024-07-23 04:06:27.950540] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev
00:11:19.243 [2024-07-23 04:06:27.950651] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully
00:11:19.243 [2024-07-23 04:06:27.950695] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io
00:11:19.243 [2024-07-23 04:06:27.950796] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World!
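The qat_pci_device_allocate()/EAL pairs above repeat once per QAT virtual function that DPDK probed after the driver's device limit was reached. When reading a log in this format, a small helper can condense those pairs into a unique device list. This is a hypothetical convenience sketch, not part of the SPDK autotest harness; the function name, the sample log path, and the two sample lines are assumptions for illustration:

```shell
# Hypothetical helper (not part of the harness): list each PCI function the
# EAL rejected, deduplicated, from a log in the format shown above.
summarize_qat_failures() {
    # $1: path to a captured log file
    grep -o 'Requested device [0-9a-f:.]* cannot be used' "$1" |
        awk '{print $3}' |   # third field is the PCI BDF, e.g. 0000:3d:01.0
        sort -u
}

# Build a tiny two-line sample in the same format and summarize it.
printf '%s\n' \
    '00:11:17.881 EAL: Requested device 0000:3d:01.0 cannot be used' \
    '00:11:17.881 EAL: Requested device 0000:3d:01.1 cannot be used' > /tmp/qat_sample.log
summarize_qat_failures /tmp/qat_sample.log
```

On the sample input this prints the two rejected functions, one per line; against a full log it yields one line per distinct PCI function.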
00:11:19.243 
00:11:19.243 [2024-07-23 04:06:27.950857] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app
00:11:22.531 
00:11:22.531 real 0m4.941s
00:11:22.531 user 0m4.386s
00:11:22.531 sys 0m0.489s
00:11:22.531 04:06:30 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable
00:11:22.531 04:06:30 blockdev_general.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x
00:11:22.531 ************************************
00:11:22.531 END TEST bdev_hello_world
00:11:22.531 ************************************
00:11:22.531 04:06:30 blockdev_general -- common/autotest_common.sh@1142 -- # return 0
00:11:22.531 04:06:30 blockdev_general -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds ''
00:11:22.531 04:06:30 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:11:22.531 04:06:30 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:22.531 04:06:30 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:11:22.531 ************************************
00:11:22.531 START TEST bdev_bounds
00:11:22.531 ************************************
00:11:22.531 04:06:31 blockdev_general.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds ''
00:11:22.531 04:06:31 blockdev_general.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=2583456
00:11:22.531 04:06:31 blockdev_general.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
00:11:22.531 04:06:31 blockdev_general.bdev_bounds -- bdev/blockdev.sh@288 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json ''
00:11:22.531 04:06:31 blockdev_general.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 2583456'
00:11:22.531 Process bdevio pid: 2583456
04:06:31 blockdev_general.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 2583456
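The START TEST/END TEST banners and the real/user/sys timing above are produced by the harness's run_test wrapper in SPDK's common/autotest_common.sh. A simplified sketch of that banner-and-timing pattern follows; it is an assumption-laden approximation (the real helper also manages xtrace state and failure reporting), shown only to make the log structure easier to read:

```shell
# Simplified sketch of the run_test pattern seen in the log above.
# Hypothetical re-implementation for illustration; not SPDK's actual helper.
run_test() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"            # timing lines (real/user/sys) go to stderr
    local rc=$?
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return $rc
}

# Usage: wrap any command the same way the harness wraps test functions.
run_test demo_test echo hello
```

The wrapper's exit status is the wrapped command's status, which is what lets the harness chain run_test calls and abort on the first failure.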
00:11:22.531 04:06:31 blockdev_general.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 2583456 ']'
00:11:22.531 04:06:31 blockdev_general.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:22.531 04:06:31 blockdev_general.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100
00:11:22.531 04:06:31 blockdev_general.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:22.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
04:06:31 blockdev_general.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable
00:11:22.531 04:06:31 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:11:22.531 [2024-07-23 04:06:31.128505] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
00:11:22.531 [2024-07-23 04:06:31.128596] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2583456 ]
00:11:22.531 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:22.531 EAL: Requested device 0000:3d:01.0 cannot be used
00:11:22.531 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:22.531 EAL: Requested device 0000:3d:01.1 cannot be used
00:11:22.531 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:22.531 EAL: Requested device 0000:3d:01.2 cannot be used
00:11:22.531 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:22.531 EAL: Requested device 0000:3d:01.3 cannot be used
00:11:22.531 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:22.531 EAL: Requested device 0000:3d:01.4 cannot be used
00:11:22.531 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:22.531 EAL: Requested device 0000:3d:01.5 cannot be used
00:11:22.531 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:22.531 EAL: Requested device 0000:3d:01.6 cannot be used
00:11:22.531 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:22.531 EAL: Requested device 0000:3d:01.7 cannot be used
00:11:22.531 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:22.531 EAL: Requested device 0000:3d:02.0 cannot be used
00:11:22.531 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:22.531 EAL: Requested device 0000:3d:02.1 cannot be used
00:11:22.531 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:22.531 EAL: Requested device 0000:3d:02.2 cannot be used
00:11:22.531 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:22.531 EAL: Requested device 0000:3d:02.3 cannot be used
00:11:22.531 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:22.531 EAL: Requested device 0000:3d:02.4 cannot be used
00:11:22.531 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:22.531 EAL: Requested device 0000:3d:02.5 cannot be used
00:11:22.532 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:22.532 EAL: Requested device 0000:3d:02.6 cannot be used
00:11:22.532 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:22.532 EAL: Requested device 0000:3d:02.7 cannot be used
00:11:22.532 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:22.532 EAL: Requested device 0000:3f:01.0 cannot be used
00:11:22.532 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:22.532 EAL: Requested device 0000:3f:01.1 cannot be used
00:11:22.532 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:22.532 EAL: Requested device 0000:3f:01.2 cannot be used
00:11:22.532 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:22.532 EAL: Requested device 0000:3f:01.3 cannot be used
00:11:22.532 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:22.532 EAL: Requested device 0000:3f:01.4 cannot be used
00:11:22.532 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:22.532 EAL: Requested device 0000:3f:01.5 cannot be used
00:11:22.532 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:22.532 EAL: Requested device 0000:3f:01.6 cannot be used
00:11:22.532 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:22.532 EAL: Requested device 0000:3f:01.7 cannot be used
00:11:22.532 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:22.532 EAL: Requested device 0000:3f:02.0 cannot be used
00:11:22.532 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:22.532 EAL: Requested device 0000:3f:02.1 cannot be used
00:11:22.532 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:22.532 EAL: Requested device 0000:3f:02.2 cannot be used
00:11:22.532 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:22.532 EAL: Requested device 0000:3f:02.3 cannot be used
00:11:22.532 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:22.532 EAL: Requested device 0000:3f:02.4 cannot be used
00:11:22.532 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:22.532 EAL: Requested device 0000:3f:02.5 cannot be used
00:11:22.532 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:22.532 EAL: Requested device 0000:3f:02.6 cannot be used
00:11:22.532 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:22.532 EAL: Requested device 0000:3f:02.7 cannot be used
00:11:22.791 [2024-07-23 04:06:31.329716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:11:23.050 [2024-07-23 04:06:31.623216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:11:23.050 [2024-07-23 04:06:31.623298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:11:23.050 [2024-07-23 04:06:31.623301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:11:23.617 [2024-07-23 04:06:32.233946] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:11:23.617 [2024-07-23 04:06:32.234016] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:11:23.617 [2024-07-23 04:06:32.234036] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:11:23.617 [2024-07-23 04:06:32.241961] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:11:23.617 [2024-07-23 04:06:32.242005] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:11:23.617 [2024-07-23 04:06:32.249952] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:11:23.618 [2024-07-23 04:06:32.249989] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:11:23.877 [2024-07-23 04:06:32.513964] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:11:23.877 [2024-07-23 04:06:32.514029] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:23.877 [2024-07-23 04:06:32.514051] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042c80
00:11:23.877 [2024-07-23 04:06:32.514067] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:23.877 [2024-07-23 04:06:32.516868] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:23.877 [2024-07-23 04:06:32.516903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:11:24.445 04:06:32 blockdev_general.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:11:24.445 04:06:32 blockdev_general.bdev_bounds -- 
common/autotest_common.sh@862 -- # return 0
00:11:24.445 04:06:32 blockdev_general.bdev_bounds -- bdev/blockdev.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests
I/O targets:
00:11:24.445 Malloc0: 65536 blocks of 512 bytes (32 MiB)
00:11:24.445 Malloc1p0: 32768 blocks of 512 bytes (16 MiB)
00:11:24.445 Malloc1p1: 32768 blocks of 512 bytes (16 MiB)
00:11:24.445 Malloc2p0: 8192 blocks of 512 bytes (4 MiB)
00:11:24.445 Malloc2p1: 8192 blocks of 512 bytes (4 MiB)
00:11:24.445 Malloc2p2: 8192 blocks of 512 bytes (4 MiB)
00:11:24.445 Malloc2p3: 8192 blocks of 512 bytes (4 MiB)
00:11:24.445 Malloc2p4: 8192 blocks of 512 bytes (4 MiB)
00:11:24.445 Malloc2p5: 8192 blocks of 512 bytes (4 MiB)
00:11:24.445 Malloc2p6: 8192 blocks of 512 bytes (4 MiB)
00:11:24.445 Malloc2p7: 8192 blocks of 512 bytes (4 MiB)
00:11:24.445 TestPT: 65536 blocks of 512 bytes (32 MiB)
00:11:24.445 raid0: 131072 blocks of 512 bytes (64 MiB)
00:11:24.445 concat0: 131072 blocks of 512 bytes (64 MiB)
00:11:24.445 raid1: 65536 blocks of 512 bytes (32 MiB)
00:11:24.445 AIO0: 5000 blocks of 2048 bytes (10 MiB)
00:11:24.445 
00:11:24.445 
00:11:24.445 CUnit - A unit testing framework for C - Version 2.1-3
00:11:24.445 http://cunit.sourceforge.net/
00:11:24.445 
00:11:24.445 
00:11:24.445 Suite: bdevio tests on: AIO0
00:11:24.445 Test: blockdev write read block ...passed
00:11:24.445 Test: blockdev write zeroes read block ...passed
00:11:24.445 Test: blockdev write zeroes read no split ...passed
00:11:24.445 Test: blockdev write zeroes read split ...passed
00:11:24.445 Test: blockdev write zeroes read split partial ...passed
00:11:24.445 Test: blockdev reset ...passed
00:11:24.445 Test: blockdev write read 8 blocks ...passed
00:11:24.445 Test: blockdev write read size > 128k ...passed
00:11:24.445 Test: blockdev write read invalid size ...passed
00:11:24.445 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:24.445 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:24.445 Test: blockdev write read max offset ...passed
00:11:24.445 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:24.445 Test: blockdev writev readv 8 blocks ...passed
00:11:24.445 Test: blockdev writev readv 30 x 1block ...passed
00:11:24.445 Test: blockdev writev readv block ...passed
00:11:24.445 Test: blockdev writev readv size > 128k ...passed
00:11:24.445 Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:24.445 Test: blockdev comparev and writev ...passed
00:11:24.445 Test: blockdev nvme passthru rw ...passed
00:11:24.445 Test: blockdev nvme passthru vendor specific ...passed
00:11:24.445 Test: blockdev nvme admin passthru ...passed
00:11:24.445 Test: blockdev copy ...passed
00:11:24.445 Suite: bdevio tests on: raid1
00:11:24.445 Test: blockdev write read block ...passed
00:11:24.445 Test: blockdev write zeroes read block ...passed
00:11:24.445 Test: blockdev write zeroes read no split ...passed
00:11:24.445 Test: blockdev write zeroes read split ...passed
00:11:24.709 Test: blockdev write zeroes read split partial ...passed
00:11:24.709 Test: blockdev reset ...passed
00:11:24.709 Test: blockdev write read 8 blocks ...passed
00:11:24.709 Test: blockdev write read size > 128k ...passed
00:11:24.709 Test: blockdev write read invalid size ...passed
00:11:24.709 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:24.709 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:24.709 Test: blockdev write read max offset ...passed
00:11:24.709 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:24.709 Test: blockdev writev readv 8 blocks ...passed
00:11:24.709 Test: blockdev writev readv 30 x 1block ...passed
00:11:24.709 Test: blockdev writev readv block ...passed
00:11:24.709 Test: blockdev writev readv size > 128k ...passed
00:11:24.709 Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:24.709 Test: blockdev comparev and writev ...passed
00:11:24.709 Test: blockdev nvme passthru rw ...passed
00:11:24.709 Test: blockdev nvme passthru vendor specific ...passed
00:11:24.709 Test: blockdev nvme admin passthru ...passed
00:11:24.709 Test: blockdev copy ...passed
00:11:24.709 Suite: bdevio tests on: concat0
00:11:24.709 Test: blockdev write read block ...passed
00:11:24.709 Test: blockdev write zeroes read block ...passed
00:11:24.709 Test: blockdev write zeroes read no split ...passed
00:11:24.709 Test: blockdev write zeroes read split ...passed
00:11:24.709 Test: blockdev write zeroes read split partial ...passed
00:11:24.709 Test: blockdev reset ...passed
00:11:24.709 Test: blockdev write read 8 blocks ...passed
00:11:24.709 Test: blockdev write read size > 128k ...passed
00:11:24.709 Test: blockdev write read invalid size ...passed
00:11:24.709 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:24.709 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:24.709 Test: blockdev write read max offset ...passed
00:11:24.709 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:24.709 Test: blockdev writev readv 8 blocks ...passed
00:11:24.709 Test: blockdev writev readv 30 x 1block ...passed
00:11:24.709 Test: blockdev writev readv block ...passed
00:11:24.709 Test: blockdev writev readv size > 128k ...passed
00:11:24.709 Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:24.709 Test: blockdev comparev and writev ...passed
00:11:24.709 Test: blockdev nvme passthru rw ...passed
00:11:24.709 Test: blockdev nvme passthru vendor specific ...passed
00:11:24.709 Test: blockdev nvme admin passthru ...passed
00:11:24.709 Test: blockdev copy ...passed
00:11:24.709 Suite: bdevio tests on: raid0
00:11:24.709 Test: blockdev write read block ...passed
00:11:24.709 Test: blockdev write zeroes read block ...passed
00:11:24.709 Test: blockdev write zeroes read no split ...passed
00:11:24.709 Test: blockdev write zeroes read split ...passed
00:11:24.709 Test: blockdev write zeroes read split partial ...passed
00:11:24.709 Test: blockdev reset ...passed
00:11:24.709 Test: blockdev write read 8 blocks ...passed
00:11:24.709 Test: blockdev write read size > 128k ...passed
00:11:24.709 Test: blockdev write read invalid size ...passed
00:11:24.709 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:24.709 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:24.709 Test: blockdev write read max offset ...passed
00:11:24.709 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:24.709 Test: blockdev writev readv 8 blocks ...passed
00:11:24.709 Test: blockdev writev readv 30 x 1block ...passed
00:11:24.709 Test: blockdev writev readv block ...passed
00:11:24.709 Test: blockdev writev readv size > 128k ...passed
00:11:24.709 Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:24.709 Test: blockdev comparev and writev ...passed
00:11:24.709 Test: blockdev nvme passthru rw ...passed
00:11:24.709 Test: blockdev nvme passthru vendor specific ...passed
00:11:24.709 Test: blockdev nvme admin passthru ...passed
00:11:24.709 Test: blockdev copy ...passed
00:11:24.709 Suite: bdevio tests on: TestPT
00:11:24.710 Test: blockdev write read block ...passed
00:11:24.710 Test: blockdev write zeroes read block ...passed
00:11:24.710 Test: blockdev write zeroes read no split ...passed
00:11:24.710 Test: blockdev write zeroes read split ...passed
00:11:24.971 Test: blockdev write zeroes read split partial ...passed
00:11:24.971 Test: blockdev reset ...passed
00:11:24.971 Test: blockdev write read 8 blocks ...passed
00:11:24.971 Test: blockdev write read size > 128k ...passed
00:11:24.971 Test: blockdev write read invalid size ...passed
00:11:24.971 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:24.971 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:24.971 Test: blockdev write read max offset ...passed
00:11:24.971 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:24.971 Test: blockdev writev readv 8 blocks ...passed
00:11:24.971 Test: blockdev writev readv 30 x 1block ...passed
00:11:24.971 Test: blockdev writev readv block ...passed
00:11:24.971 Test: blockdev writev readv size > 128k ...passed
00:11:24.971 Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:24.971 Test: blockdev comparev and writev ...passed
00:11:24.971 Test: blockdev nvme passthru rw ...passed
00:11:24.971 Test: blockdev nvme passthru vendor specific ...passed
00:11:24.971 Test: blockdev nvme admin passthru ...passed
00:11:24.971 Test: blockdev copy ...passed
00:11:24.971 Suite: bdevio tests on: Malloc2p7
00:11:24.971 Test: blockdev write read block ...passed
00:11:24.971 Test: blockdev write zeroes read block ...passed
00:11:24.971 Test: blockdev write zeroes read no split ...passed
00:11:24.971 Test: blockdev write zeroes read split ...passed
00:11:24.971 Test: blockdev write zeroes read split partial ...passed
00:11:24.971 Test: blockdev reset ...passed
00:11:24.972 Test: blockdev write read 8 blocks ...passed
00:11:24.972 Test: blockdev write read size > 128k ...passed
00:11:24.972 Test: blockdev write read invalid size ...passed
00:11:24.972 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:24.972 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:24.972 Test: blockdev write read max offset ...passed
00:11:24.972 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:24.972 Test: blockdev writev readv 8 blocks ...passed
00:11:24.972 Test: blockdev writev readv 30 x 1block ...passed
00:11:24.972 Test: blockdev writev readv block ...passed
00:11:24.972 Test: blockdev writev readv size > 128k ...passed
00:11:24.972 Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:24.972 Test: blockdev comparev and writev ...passed
00:11:24.972 Test: blockdev nvme passthru rw ...passed
00:11:24.972 Test: blockdev nvme passthru vendor specific ...passed
00:11:24.972 Test: blockdev nvme admin passthru ...passed
00:11:24.972 Test: blockdev copy ...passed
00:11:24.972 Suite: bdevio tests on: Malloc2p6
00:11:24.972 Test: blockdev write read block ...passed
00:11:24.972 Test: blockdev write zeroes read block ...passed
00:11:24.972 Test: blockdev write zeroes read no split ...passed
00:11:24.972 Test: blockdev write zeroes read split ...passed
00:11:24.972 Test: blockdev write zeroes read split partial ...passed
00:11:24.972 Test: blockdev reset ...passed
00:11:24.972 Test: blockdev write read 8 blocks ...passed
00:11:24.972 Test: blockdev write read size > 128k ...passed
00:11:24.972 Test: blockdev write read invalid size ...passed
00:11:24.972 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:24.972 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:24.972 Test: blockdev write read max offset ...passed
00:11:24.972 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:24.972 Test: blockdev writev readv 8 blocks ...passed
00:11:24.972 Test: blockdev writev readv 30 x 1block ...passed
00:11:24.972 Test: blockdev writev readv block ...passed
00:11:24.972 Test: blockdev writev readv size > 128k ...passed
00:11:24.972 Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:24.972 Test: blockdev comparev and writev ...passed
00:11:24.972 Test: blockdev nvme passthru rw ...passed
00:11:24.972 Test: blockdev nvme passthru vendor specific ...passed
00:11:24.972 Test: blockdev nvme admin passthru ...passed
00:11:24.972 Test: blockdev copy ...passed
00:11:24.972 Suite: bdevio tests on: Malloc2p5
00:11:24.972 Test: blockdev write read block ...passed
00:11:24.972 Test: blockdev write zeroes read block ...passed
00:11:24.972 Test: blockdev write zeroes read no split ...passed
00:11:24.972 Test: blockdev write zeroes read split ...passed
00:11:24.972 Test: blockdev write zeroes read split partial ...passed
00:11:24.972 Test: blockdev reset ...passed
00:11:24.972 Test: blockdev write read 8 blocks ...passed
00:11:24.972 Test: blockdev write read size > 128k ...passed
00:11:25.231 Test: blockdev write read invalid size ...passed
00:11:25.231 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:25.231 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:25.231 Test: blockdev write read max offset ...passed
00:11:25.231 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:25.231 Test: blockdev writev readv 8 blocks ...passed
00:11:25.231 Test: blockdev writev readv 30 x 1block ...passed
00:11:25.231 Test: blockdev writev readv block ...passed
00:11:25.231 Test: blockdev writev readv size > 128k ...passed
00:11:25.231 Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:25.231 Test: blockdev comparev and writev ...passed
00:11:25.231 Test: blockdev nvme passthru rw ...passed
00:11:25.231 Test: blockdev nvme passthru vendor specific ...passed
00:11:25.231 Test: blockdev nvme admin passthru ...passed
00:11:25.231 Test: blockdev copy ...passed
00:11:25.231 Suite: bdevio tests on: Malloc2p4
00:11:25.231 Test: blockdev write read block ...passed
00:11:25.231 Test: blockdev write zeroes read block ...passed
00:11:25.231 Test: blockdev write zeroes read no split ...passed
00:11:25.231 Test: blockdev write zeroes read split ...passed
00:11:25.231 Test: blockdev write zeroes read split partial ...passed
00:11:25.231 Test: blockdev reset ...passed
00:11:25.231 Test: blockdev write read 8 blocks ...passed
00:11:25.231 Test: blockdev write read size > 128k ...passed
00:11:25.231 Test: blockdev write read invalid size ...passed
00:11:25.231 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:25.231 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:25.231 Test: blockdev write read max offset ...passed
00:11:25.231 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:25.231 Test: blockdev writev readv 8 blocks ...passed
00:11:25.231 Test: blockdev writev readv 30 x 1block ...passed
00:11:25.231 Test: blockdev writev readv block ...passed
00:11:25.231 Test: blockdev writev readv size > 128k ...passed
00:11:25.231 Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:25.231 Test: blockdev comparev and writev ...passed
00:11:25.231 Test: blockdev nvme passthru rw ...passed
00:11:25.231 Test: blockdev nvme passthru vendor specific ...passed
00:11:25.231 Test: blockdev nvme admin passthru ...passed
00:11:25.231 Test: blockdev copy ...passed
00:11:25.231 Suite: bdevio tests on: Malloc2p3
00:11:25.231 Test: blockdev write read block ...passed
00:11:25.231 Test: blockdev write zeroes read block ...passed
00:11:25.231 Test: blockdev write zeroes read no split ...passed
00:11:25.231 Test: blockdev write zeroes read split ...passed
00:11:25.231 Test: blockdev write zeroes read split partial ...passed
00:11:25.231 Test: blockdev reset ...passed
00:11:25.231 Test: blockdev write read 8 blocks ...passed
00:11:25.231 Test: blockdev write read size > 128k ...passed
00:11:25.231 Test: blockdev write read invalid size ...passed
00:11:25.231 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:25.231 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:25.231 Test: blockdev write read max offset ...passed
00:11:25.231 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:25.231 Test: blockdev writev readv 8 blocks ...passed
00:11:25.231 Test: blockdev writev readv 30 x 1block ...passed
00:11:25.231 Test: blockdev writev readv block ...passed
00:11:25.231 Test: blockdev writev readv size > 128k ...passed
00:11:25.231 Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:25.231 Test: blockdev comparev and writev ...passed
00:11:25.231 Test: blockdev nvme passthru rw ...passed
00:11:25.231 Test: blockdev nvme passthru vendor specific ...passed
00:11:25.232 Test: blockdev nvme admin passthru ...passed
00:11:25.232 Test: blockdev copy ...passed
00:11:25.232 Suite: bdevio tests on: Malloc2p2
00:11:25.232 Test: blockdev write read block ...passed
00:11:25.232 Test: blockdev write zeroes read block ...passed
00:11:25.232 Test: blockdev write zeroes read no split ...passed
00:11:25.232 Test: blockdev write zeroes read split ...passed
00:11:25.232 Test: blockdev write zeroes read split partial ...passed
00:11:25.232 Test: blockdev reset ...passed
00:11:25.232 Test: blockdev write read 8 blocks ...passed
00:11:25.232 Test: blockdev write read size > 128k ...passed
00:11:25.232 Test: blockdev write read invalid size ...passed
00:11:25.232 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:25.232 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:25.232 Test: blockdev write read max offset ...passed
00:11:25.232 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:25.232 Test: blockdev writev readv 8 blocks ...passed
00:11:25.232 Test: blockdev writev readv 30 x 1block ...passed
00:11:25.232 Test: blockdev writev readv block ...passed
00:11:25.232 Test: blockdev writev readv size > 128k ...passed
00:11:25.232 Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:25.232 Test: blockdev comparev and writev ...passed
00:11:25.232 Test: blockdev nvme passthru rw ...passed
00:11:25.232 Test: blockdev nvme passthru vendor specific ...passed
00:11:25.232 Test: blockdev nvme admin passthru ...passed
00:11:25.232 Test: blockdev copy ...passed
00:11:25.232 Suite: bdevio tests on: Malloc2p1
00:11:25.232 Test: blockdev write read block ...passed
00:11:25.232 Test: blockdev write zeroes read block ...passed
00:11:25.232 Test: blockdev write zeroes read no split ...passed
00:11:25.491 Test: blockdev write zeroes read split ...passed
00:11:25.491 Test: blockdev write zeroes read split partial ...passed
00:11:25.491 Test: blockdev reset ...passed
00:11:25.491 Test: blockdev write read 8 blocks ...passed
00:11:25.491 Test: blockdev write read size > 128k ...passed
00:11:25.491 Test: blockdev write read invalid size ...passed
00:11:25.491 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:25.491 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:25.491 Test: blockdev write read max offset ...passed
00:11:25.491 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:25.491 Test: blockdev writev readv 8 blocks ...passed
00:11:25.491 Test: blockdev writev readv 30 x 1block ...passed
00:11:25.491 Test: blockdev writev readv block ...passed
00:11:25.491 Test: blockdev writev readv size > 128k ...passed
00:11:25.491 Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:25.491 Test: blockdev comparev and writev ...passed
00:11:25.491 Test: blockdev nvme passthru rw ...passed
00:11:25.491 Test: blockdev nvme passthru vendor specific ...passed
00:11:25.491 Test: blockdev nvme admin passthru ...passed
00:11:25.491 Test: blockdev copy ...passed
00:11:25.491 Suite: bdevio tests on: Malloc2p0
00:11:25.491 Test: blockdev write read block ...passed
00:11:25.491 Test: blockdev write zeroes read block ...passed
00:11:25.491 Test: blockdev write zeroes read no split ...passed
00:11:25.491 Test: blockdev write zeroes read split ...passed
00:11:25.491 Test: blockdev write zeroes read split partial ...passed
00:11:25.491 Test: blockdev reset ...passed
00:11:25.491 Test: blockdev write read 8 blocks ...passed
00:11:25.491 Test: blockdev write read size > 128k ...passed
00:11:25.491 Test: blockdev write read invalid size ...passed
00:11:25.491 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:25.491 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:25.491 Test: blockdev write read max offset ...passed
00:11:25.491 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:25.491 Test: blockdev writev readv 8 blocks ...passed
00:11:25.491 Test: blockdev writev readv 30 x 1block ...passed
00:11:25.491 Test: blockdev writev readv block ...passed
00:11:25.491 Test: blockdev writev readv size > 128k ...passed
00:11:25.491 Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:25.491 Test: blockdev comparev and writev ...passed
00:11:25.491 Test: blockdev nvme passthru rw ...passed
00:11:25.491 Test: blockdev nvme passthru vendor specific ...passed
00:11:25.491 Test: blockdev nvme admin passthru ...passed
00:11:25.491 Test: blockdev copy ...passed
00:11:25.491 Suite: bdevio tests on: Malloc1p1
00:11:25.491 Test: blockdev write read block ...passed
00:11:25.491 Test: blockdev write zeroes read block ...passed
00:11:25.491 Test: blockdev write zeroes read no split ...passed
00:11:25.491 Test: blockdev write zeroes read split ...passed
00:11:25.491 Test: blockdev write zeroes read split partial ...passed
00:11:25.491 Test: blockdev reset ...passed
00:11:25.492 Test: blockdev write read 8 blocks ...passed
00:11:25.492 Test: blockdev write read size > 128k ...passed
00:11:25.492 Test: blockdev write read invalid size ...passed
00:11:25.492 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:25.492 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:25.492 Test: blockdev write read max offset ...passed
00:11:25.492 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:25.492 Test: blockdev writev readv 8 blocks ...passed
00:11:25.492 Test: blockdev writev readv 30 x 1block ...passed
00:11:25.492 Test: blockdev writev readv block ...passed
00:11:25.492 Test: blockdev writev readv size > 128k ...passed
00:11:25.492 Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:25.492 Test: blockdev comparev and writev ...passed
00:11:25.492 Test: blockdev nvme passthru rw ...passed
00:11:25.492 Test: blockdev nvme passthru vendor specific ...passed
00:11:25.492 Test: blockdev nvme admin passthru ...passed
00:11:25.492 Test: blockdev copy ...passed
00:11:25.492 Suite: bdevio tests on: Malloc1p0
00:11:25.492 Test: blockdev write read block ...passed
00:11:25.492 Test: blockdev write zeroes read block ...passed
00:11:25.492 Test: blockdev write zeroes read no split ...passed
00:11:25.751 Test: blockdev write zeroes read split ...passed
00:11:25.751 Test: blockdev write zeroes read split partial ...passed
00:11:25.751 Test: blockdev reset ...passed
00:11:25.751 Test: blockdev write read 8 blocks ...passed
00:11:25.751 Test: blockdev write read size > 128k ...passed
00:11:25.751 Test: blockdev write read invalid size ...passed
00:11:25.751 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:25.751 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:25.751 Test: blockdev write read max offset ...passed
00:11:25.751 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:25.751 Test: blockdev writev readv 8 blocks ...passed
00:11:25.751 Test: blockdev writev readv 30 x 1block ...passed
00:11:25.751 Test: blockdev writev readv block ...passed
00:11:25.751 Test: blockdev writev readv size > 128k ...passed
00:11:25.751 Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:25.751 Test: blockdev comparev and writev ...passed
00:11:25.751 Test: blockdev nvme passthru rw ...passed
00:11:25.751 Test: blockdev nvme passthru vendor specific ...passed
00:11:25.751 Test: blockdev nvme admin passthru ...passed
00:11:25.751 Test: blockdev copy ...passed
00:11:25.751 Suite: bdevio tests on: Malloc0
00:11:25.751 Test: blockdev write read block ...passed
00:11:25.751 Test: blockdev write zeroes
read block ...passed 00:11:25.751 Test: blockdev write zeroes read no split ...passed 00:11:25.751 Test: blockdev write zeroes read split ...passed 00:11:25.751 Test: blockdev write zeroes read split partial ...passed 00:11:25.751 Test: blockdev reset ...passed 00:11:25.751 Test: blockdev write read 8 blocks ...passed 00:11:25.751 Test: blockdev write read size > 128k ...passed 00:11:25.751 Test: blockdev write read invalid size ...passed 00:11:25.751 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:25.751 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:25.751 Test: blockdev write read max offset ...passed 00:11:25.751 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:25.751 Test: blockdev writev readv 8 blocks ...passed 00:11:25.751 Test: blockdev writev readv 30 x 1block ...passed 00:11:25.751 Test: blockdev writev readv block ...passed 00:11:25.751 Test: blockdev writev readv size > 128k ...passed 00:11:25.751 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:25.751 Test: blockdev comparev and writev ...passed 00:11:25.751 Test: blockdev nvme passthru rw ...passed 00:11:25.751 Test: blockdev nvme passthru vendor specific ...passed 00:11:25.751 Test: blockdev nvme admin passthru ...passed 00:11:25.751 Test: blockdev copy ...passed 00:11:25.751 00:11:25.751 Run Summary: Type Total Ran Passed Failed Inactive 00:11:25.751 suites 16 16 n/a 0 0 00:11:25.751 tests 368 368 368 0 0 00:11:25.751 asserts 2224 2224 2224 0 n/a 00:11:25.751 00:11:25.751 Elapsed time = 3.852 seconds 00:11:25.751 0 00:11:25.751 04:06:34 blockdev_general.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 2583456 00:11:25.751 04:06:34 blockdev_general.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 2583456 ']' 00:11:25.751 04:06:34 blockdev_general.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 2583456 00:11:25.751 04:06:34 blockdev_general.bdev_bounds -- 
common/autotest_common.sh@953 -- # uname 00:11:25.751 04:06:34 blockdev_general.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:25.751 04:06:34 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2583456 00:11:25.751 04:06:34 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:25.751 04:06:34 blockdev_general.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:25.751 04:06:34 blockdev_general.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2583456' 00:11:25.751 killing process with pid 2583456 00:11:25.752 04:06:34 blockdev_general.bdev_bounds -- common/autotest_common.sh@967 -- # kill 2583456 00:11:25.752 04:06:34 blockdev_general.bdev_bounds -- common/autotest_common.sh@972 -- # wait 2583456 00:11:29.041 04:06:37 blockdev_general.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:11:29.042 00:11:29.042 real 0m6.116s 00:11:29.042 user 0m15.746s 00:11:29.042 sys 0m0.662s 00:11:29.042 04:06:37 blockdev_general.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:29.042 04:06:37 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:11:29.042 ************************************ 00:11:29.042 END TEST bdev_bounds 00:11:29.042 ************************************ 00:11:29.042 04:06:37 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:11:29.042 04:06:37 blockdev_general -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:11:29.042 04:06:37 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:11:29.042 04:06:37 blockdev_general -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:11:29.042 04:06:37 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:29.042 ************************************ 00:11:29.042 START TEST bdev_nbd 00:11:29.042 ************************************ 00:11:29.042 04:06:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:11:29.042 04:06:37 blockdev_general.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:11:29.042 04:06:37 blockdev_general.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:11:29.042 04:06:37 blockdev_general.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:29.042 04:06:37 blockdev_general.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:11:29.042 04:06:37 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:29.042 04:06:37 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:11:29.042 04:06:37 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=16 00:11:29.042 04:06:37 blockdev_general.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:11:29.042 04:06:37 blockdev_general.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:29.042 04:06:37 blockdev_general.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:11:29.042 04:06:37 blockdev_general.bdev_nbd -- 
bdev/blockdev.sh@311 -- # bdev_num=16 00:11:29.042 04:06:37 blockdev_general.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:29.042 04:06:37 blockdev_general.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:11:29.042 04:06:37 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:29.042 04:06:37 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:11:29.042 04:06:37 blockdev_general.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=2584540 00:11:29.042 04:06:37 blockdev_general.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:11:29.042 04:06:37 blockdev_general.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 2584540 /var/tmp/spdk-nbd.sock 00:11:29.042 04:06:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 2584540 ']' 00:11:29.042 04:06:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:29.042 04:06:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:29.042 04:06:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:29.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:11:29.042 04:06:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:29.042 04:06:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:11:29.042 04:06:37 blockdev_general.bdev_nbd -- bdev/blockdev.sh@316 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:11:29.042 [2024-07-23 04:06:37.339828] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:11:29.042 [2024-07-23 04:06:37.339941] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:29.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.042 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:29.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.042 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:29.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.042 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:29.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.042 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:29.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.042 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:29.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.042 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:29.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.042 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:29.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.042 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:29.042 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.042 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:29.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.042 EAL: Requested device 0000:3d:02.1 cannot be used 00:11:29.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.042 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:29.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.042 EAL: Requested device 0000:3d:02.3 cannot be used 00:11:29.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.042 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:29.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.042 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:29.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.042 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:29.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.042 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:29.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.042 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:29.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.042 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:29.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.042 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:29.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.042 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:29.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.042 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:29.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.042 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:29.042 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.042 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:29.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.042 EAL: Requested device 0000:3f:01.7 cannot be used 00:11:29.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.042 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:29.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.042 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:29.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.042 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:29.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.042 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:29.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.042 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:29.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.042 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:29.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.042 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:29.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:29.042 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:29.042 [2024-07-23 04:06:37.566469] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.302 [2024-07-23 04:06:37.861156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.955 [2024-07-23 04:06:38.444007] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:29.955 [2024-07-23 04:06:38.444067] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:29.955 [2024-07-23 04:06:38.444086] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev 
arrival 00:11:29.955 [2024-07-23 04:06:38.451973] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:29.955 [2024-07-23 04:06:38.452015] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:29.955 [2024-07-23 04:06:38.459982] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:29.955 [2024-07-23 04:06:38.460020] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:29.955 [2024-07-23 04:06:38.703267] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:29.955 [2024-07-23 04:06:38.703328] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:29.955 [2024-07-23 04:06:38.703350] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042980 00:11:29.955 [2024-07-23 04:06:38.703365] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:29.955 [2024-07-23 04:06:38.706106] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:29.955 [2024-07-23 04:06:38.706148] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:30.523 04:06:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:30.523 04:06:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:11:30.523 04:06:39 blockdev_general.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:11:30.523 04:06:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:30.523 04:06:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 
'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:30.523 04:06:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:11:30.523 04:06:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:11:30.523 04:06:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:30.523 04:06:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:30.523 04:06:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:11:30.523 04:06:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:11:30.523 04:06:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:11:30.523 04:06:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:11:30.523 04:06:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:30.523 04:06:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:11:30.782 04:06:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:11:30.782 04:06:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:11:30.782 04:06:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:11:30.782 04:06:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:11:30.782 04:06:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:30.782 04:06:39 
blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:30.782 04:06:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:30.782 04:06:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:11:30.782 04:06:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:30.782 04:06:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:30.782 04:06:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:30.782 04:06:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:30.782 1+0 records in 00:11:30.782 1+0 records out 00:11:30.782 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264483 s, 15.5 MB/s 00:11:30.782 04:06:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:30.782 04:06:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:30.782 04:06:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:30.782 04:06:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:30.782 04:06:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:30.782 04:06:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:30.782 04:06:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:30.782 04:06:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 00:11:31.041 04:06:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # 
nbd_device=/dev/nbd1 00:11:31.041 04:06:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:11:31.041 04:06:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:11:31.041 04:06:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:11:31.041 04:06:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:31.041 04:06:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:31.041 04:06:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:31.041 04:06:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:11:31.041 04:06:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:31.041 04:06:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:31.041 04:06:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:31.041 04:06:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:31.041 1+0 records in 00:11:31.041 1+0 records out 00:11:31.041 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00030463 s, 13.4 MB/s 00:11:31.041 04:06:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:31.041 04:06:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:31.041 04:06:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:31.041 04:06:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:31.041 04:06:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:31.041 04:06:39 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:31.041 04:06:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:31.041 04:06:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:11:31.300 04:06:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:11:31.300 04:06:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:11:31.300 04:06:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:11:31.300 04:06:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:11:31.300 04:06:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:31.300 04:06:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:31.300 04:06:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:31.300 04:06:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:11:31.300 04:06:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:31.300 04:06:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:31.300 04:06:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:31.300 04:06:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:31.300 1+0 records in 00:11:31.300 1+0 records out 00:11:31.300 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260472 s, 15.7 MB/s 00:11:31.300 04:06:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:31.300 04:06:39 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@884 -- # size=4096 00:11:31.300 04:06:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:31.300 04:06:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:31.300 04:06:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:31.300 04:06:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:31.300 04:06:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:31.300 04:06:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:11:31.559 04:06:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:11:31.559 04:06:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:11:31.559 04:06:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:11:31.559 04:06:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:11:31.559 04:06:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:31.559 04:06:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:31.559 04:06:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:31.560 04:06:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:11:31.560 04:06:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:31.560 04:06:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:31.560 04:06:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:31.560 04:06:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 
of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:31.560 1+0 records in 00:11:31.560 1+0 records out 00:11:31.560 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00042118 s, 9.7 MB/s 00:11:31.560 04:06:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:31.560 04:06:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:31.560 04:06:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:31.560 04:06:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:31.560 04:06:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:31.560 04:06:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:31.560 04:06:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:31.560 04:06:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:11:31.819 04:06:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:11:31.819 04:06:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:11:31.819 04:06:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:11:31.819 04:06:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:11:31.819 04:06:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:31.819 04:06:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:31.819 04:06:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:31.819 04:06:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q 
-w nbd4 /proc/partitions 00:11:31.819 04:06:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:31.819 04:06:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:31.819 04:06:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:31.819 04:06:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:31.819 1+0 records in 00:11:31.819 1+0 records out 00:11:31.819 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328618 s, 12.5 MB/s 00:11:31.819 04:06:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:31.819 04:06:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:31.819 04:06:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:31.819 04:06:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:31.819 04:06:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:31.819 04:06:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:31.819 04:06:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:31.819 04:06:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 00:11:32.078 04:06:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:11:32.078 04:06:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:11:32.078 04:06:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:11:32.078 04:06:40 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:11:32.078 04:06:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:32.078 04:06:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:32.078 04:06:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:32.078 04:06:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:11:32.078 04:06:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:32.078 04:06:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:32.078 04:06:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:32.078 04:06:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:32.078 1+0 records in 00:11:32.078 1+0 records out 00:11:32.078 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000338242 s, 12.1 MB/s 00:11:32.078 04:06:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:32.078 04:06:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:32.078 04:06:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:32.078 04:06:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:32.078 04:06:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:32.079 04:06:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:32.079 04:06:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:32.079 04:06:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:11:32.338 04:06:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:11:32.338 04:06:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:11:32.338 04:06:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:11:32.338 04:06:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:11:32.338 04:06:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:32.338 04:06:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:32.338 04:06:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:32.338 04:06:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:11:32.338 04:06:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:32.338 04:06:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:32.338 04:06:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:32.338 04:06:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:32.338 1+0 records in 00:11:32.338 1+0 records out 00:11:32.338 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000433138 s, 9.5 MB/s 00:11:32.338 04:06:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:32.338 04:06:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:32.338 04:06:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:32.338 04:06:41 
blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:32.338 04:06:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:32.338 04:06:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:32.338 04:06:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:32.338 04:06:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:11:32.597 04:06:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:11:32.597 04:06:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:11:32.597 04:06:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:11:32.597 04:06:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd7 00:11:32.597 04:06:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:32.597 04:06:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:32.597 04:06:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:32.597 04:06:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd7 /proc/partitions 00:11:32.597 04:06:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:32.597 04:06:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:32.597 04:06:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:32.597 04:06:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd7 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:32.597 1+0 records in 00:11:32.597 1+0 records out 00:11:32.597 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000518481 s, 7.9 MB/s 00:11:32.597 04:06:41 
blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:32.597 04:06:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:32.597 04:06:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:32.597 04:06:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:32.597 04:06:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:32.597 04:06:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:32.597 04:06:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:32.597 04:06:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:11:32.856 04:06:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:11:32.856 04:06:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:11:32.856 04:06:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:11:32.856 04:06:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd8 00:11:32.856 04:06:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:32.856 04:06:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:32.856 04:06:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:32.856 04:06:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd8 /proc/partitions 00:11:32.856 04:06:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:32.856 04:06:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:32.856 04:06:41 
blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:32.856 04:06:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd8 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:32.856 1+0 records in 00:11:32.856 1+0 records out 00:11:32.856 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000594043 s, 6.9 MB/s 00:11:32.856 04:06:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:32.856 04:06:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:32.856 04:06:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:32.856 04:06:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:32.856 04:06:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:32.856 04:06:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:32.856 04:06:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:32.856 04:06:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:11:33.115 04:06:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:11:33.115 04:06:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:11:33.115 04:06:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:11:33.115 04:06:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd9 00:11:33.115 04:06:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:33.115 04:06:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:33.115 
04:06:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:33.115 04:06:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd9 /proc/partitions 00:11:33.115 04:06:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:33.115 04:06:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:33.115 04:06:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:33.115 04:06:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd9 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:33.115 1+0 records in 00:11:33.115 1+0 records out 00:11:33.115 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000434744 s, 9.4 MB/s 00:11:33.115 04:06:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:33.115 04:06:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:33.115 04:06:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:33.115 04:06:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:33.115 04:06:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:33.115 04:06:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:33.115 04:06:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:33.115 04:06:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:11:33.374 04:06:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:11:33.374 04:06:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
basename /dev/nbd10 00:11:33.374 04:06:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:11:33.374 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:11:33.374 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:33.374 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:33.374 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:33.374 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:11:33.374 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:33.374 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:33.374 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:33.374 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:33.374 1+0 records in 00:11:33.374 1+0 records out 00:11:33.374 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000584192 s, 7.0 MB/s 00:11:33.374 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:33.374 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:33.374 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:33.374 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:33.374 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:33.374 04:06:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:33.374 04:06:42 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:33.374 04:06:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:11:33.633 04:06:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:11:33.633 04:06:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:11:33.633 04:06:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:11:33.633 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:11:33.633 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:33.633 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:33.633 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:33.633 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:11:33.633 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:33.633 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:33.633 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:33.633 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:33.633 1+0 records in 00:11:33.633 1+0 records out 00:11:33.633 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000508984 s, 8.0 MB/s 00:11:33.633 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:33.633 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:33.633 04:06:42 blockdev_general.bdev_nbd 
-- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:33.633 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:33.633 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:33.633 04:06:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:33.633 04:06:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:33.633 04:06:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:11:33.892 04:06:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:11:33.892 04:06:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:11:33.892 04:06:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:11:33.892 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:11:33.892 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:33.892 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:33.892 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:33.892 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:11:33.892 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:33.892 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:33.892 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:33.892 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:33.892 1+0 
records in 00:11:33.892 1+0 records out 00:11:33.892 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000456364 s, 9.0 MB/s 00:11:33.892 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:33.892 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:33.892 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:33.892 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:33.892 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:33.892 04:06:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:33.892 04:06:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:33.892 04:06:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:11:34.151 04:06:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:11:34.151 04:06:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:11:34.151 04:06:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:11:34.151 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:11:34.151 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:34.151 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:34.151 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:34.151 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:11:34.151 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # 
break 00:11:34.151 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:34.151 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:34.151 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:34.151 1+0 records in 00:11:34.151 1+0 records out 00:11:34.151 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00055186 s, 7.4 MB/s 00:11:34.151 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:34.151 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:34.151 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:34.151 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:34.151 04:06:42 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:34.151 04:06:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:34.151 04:06:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:34.151 04:06:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:11:34.411 04:06:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:11:34.411 04:06:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:11:34.411 04:06:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:11:34.411 04:06:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:11:34.411 04:06:43 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@867 -- # local i 00:11:34.411 04:06:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:34.411 04:06:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:34.411 04:06:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:11:34.411 04:06:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:34.411 04:06:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:34.411 04:06:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:34.411 04:06:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:34.411 1+0 records in 00:11:34.411 1+0 records out 00:11:34.411 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000805895 s, 5.1 MB/s 00:11:34.411 04:06:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:34.411 04:06:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:34.411 04:06:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:34.411 04:06:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:34.411 04:06:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:34.411 04:06:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:34.411 04:06:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:34.411 04:06:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:11:34.679 04:06:43 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:11:34.679 04:06:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:11:34.679 04:06:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:11:34.679 04:06:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd15 00:11:34.679 04:06:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:34.679 04:06:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:34.679 04:06:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:34.679 04:06:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd15 /proc/partitions 00:11:34.679 04:06:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:34.679 04:06:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:34.679 04:06:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:34.679 04:06:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd15 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:34.679 1+0 records in 00:11:34.679 1+0 records out 00:11:34.679 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000736578 s, 5.6 MB/s 00:11:34.679 04:06:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:34.679 04:06:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:34.679 04:06:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:34.679 04:06:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:34.679 04:06:43 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@887 -- # return 0 00:11:34.679 04:06:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:34.679 04:06:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:34.679 04:06:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:34.938 04:06:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:11:34.938 { 00:11:34.938 "nbd_device": "/dev/nbd0", 00:11:34.938 "bdev_name": "Malloc0" 00:11:34.938 }, 00:11:34.938 { 00:11:34.938 "nbd_device": "/dev/nbd1", 00:11:34.938 "bdev_name": "Malloc1p0" 00:11:34.938 }, 00:11:34.938 { 00:11:34.938 "nbd_device": "/dev/nbd2", 00:11:34.938 "bdev_name": "Malloc1p1" 00:11:34.938 }, 00:11:34.938 { 00:11:34.938 "nbd_device": "/dev/nbd3", 00:11:34.938 "bdev_name": "Malloc2p0" 00:11:34.938 }, 00:11:34.938 { 00:11:34.938 "nbd_device": "/dev/nbd4", 00:11:34.938 "bdev_name": "Malloc2p1" 00:11:34.938 }, 00:11:34.938 { 00:11:34.938 "nbd_device": "/dev/nbd5", 00:11:34.938 "bdev_name": "Malloc2p2" 00:11:34.938 }, 00:11:34.938 { 00:11:34.938 "nbd_device": "/dev/nbd6", 00:11:34.938 "bdev_name": "Malloc2p3" 00:11:34.938 }, 00:11:34.938 { 00:11:34.938 "nbd_device": "/dev/nbd7", 00:11:34.938 "bdev_name": "Malloc2p4" 00:11:34.938 }, 00:11:34.938 { 00:11:34.938 "nbd_device": "/dev/nbd8", 00:11:34.938 "bdev_name": "Malloc2p5" 00:11:34.938 }, 00:11:34.938 { 00:11:34.938 "nbd_device": "/dev/nbd9", 00:11:34.938 "bdev_name": "Malloc2p6" 00:11:34.938 }, 00:11:34.938 { 00:11:34.938 "nbd_device": "/dev/nbd10", 00:11:34.938 "bdev_name": "Malloc2p7" 00:11:34.938 }, 00:11:34.938 { 00:11:34.938 "nbd_device": "/dev/nbd11", 00:11:34.938 "bdev_name": "TestPT" 00:11:34.938 }, 00:11:34.938 { 00:11:34.938 "nbd_device": "/dev/nbd12", 00:11:34.938 "bdev_name": "raid0" 00:11:34.938 }, 00:11:34.938 { 00:11:34.938 "nbd_device": "/dev/nbd13", 00:11:34.938 "bdev_name": 
"concat0" 00:11:34.938 }, 00:11:34.938 { 00:11:34.938 "nbd_device": "/dev/nbd14", 00:11:34.938 "bdev_name": "raid1" 00:11:34.938 }, 00:11:34.938 { 00:11:34.938 "nbd_device": "/dev/nbd15", 00:11:34.938 "bdev_name": "AIO0" 00:11:34.938 } 00:11:34.938 ]' 00:11:34.938 04:06:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:11:34.938 04:06:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:11:34.938 { 00:11:34.938 "nbd_device": "/dev/nbd0", 00:11:34.938 "bdev_name": "Malloc0" 00:11:34.938 }, 00:11:34.938 { 00:11:34.938 "nbd_device": "/dev/nbd1", 00:11:34.938 "bdev_name": "Malloc1p0" 00:11:34.938 }, 00:11:34.938 { 00:11:34.938 "nbd_device": "/dev/nbd2", 00:11:34.938 "bdev_name": "Malloc1p1" 00:11:34.938 }, 00:11:34.938 { 00:11:34.938 "nbd_device": "/dev/nbd3", 00:11:34.938 "bdev_name": "Malloc2p0" 00:11:34.938 }, 00:11:34.938 { 00:11:34.938 "nbd_device": "/dev/nbd4", 00:11:34.938 "bdev_name": "Malloc2p1" 00:11:34.938 }, 00:11:34.938 { 00:11:34.938 "nbd_device": "/dev/nbd5", 00:11:34.938 "bdev_name": "Malloc2p2" 00:11:34.938 }, 00:11:34.938 { 00:11:34.938 "nbd_device": "/dev/nbd6", 00:11:34.938 "bdev_name": "Malloc2p3" 00:11:34.938 }, 00:11:34.938 { 00:11:34.938 "nbd_device": "/dev/nbd7", 00:11:34.938 "bdev_name": "Malloc2p4" 00:11:34.938 }, 00:11:34.938 { 00:11:34.938 "nbd_device": "/dev/nbd8", 00:11:34.938 "bdev_name": "Malloc2p5" 00:11:34.938 }, 00:11:34.938 { 00:11:34.938 "nbd_device": "/dev/nbd9", 00:11:34.938 "bdev_name": "Malloc2p6" 00:11:34.938 }, 00:11:34.938 { 00:11:34.938 "nbd_device": "/dev/nbd10", 00:11:34.938 "bdev_name": "Malloc2p7" 00:11:34.938 }, 00:11:34.938 { 00:11:34.938 "nbd_device": "/dev/nbd11", 00:11:34.938 "bdev_name": "TestPT" 00:11:34.938 }, 00:11:34.938 { 00:11:34.938 "nbd_device": "/dev/nbd12", 00:11:34.938 "bdev_name": "raid0" 00:11:34.938 }, 00:11:34.938 { 00:11:34.938 "nbd_device": "/dev/nbd13", 00:11:34.938 "bdev_name": "concat0" 
00:11:34.938 }, 00:11:34.938 { 00:11:34.938 "nbd_device": "/dev/nbd14", 00:11:34.938 "bdev_name": "raid1" 00:11:34.938 }, 00:11:34.938 { 00:11:34.938 "nbd_device": "/dev/nbd15", 00:11:34.938 "bdev_name": "AIO0" 00:11:34.938 } 00:11:34.938 ]' 00:11:34.938 04:06:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:11:34.938 04:06:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:11:34.938 04:06:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:34.938 04:06:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15') 00:11:34.938 04:06:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:34.938 04:06:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:34.938 04:06:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:34.938 04:06:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:35.197 04:06:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:35.197 04:06:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:35.197 04:06:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:35.197 04:06:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:35.197 04:06:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:35.197 
04:06:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:35.197 04:06:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:35.197 04:06:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:35.197 04:06:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:35.197 04:06:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:35.455 04:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:35.455 04:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:35.455 04:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:35.455 04:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:35.455 04:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:35.455 04:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:35.455 04:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:35.455 04:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:35.456 04:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:35.456 04:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:11:35.715 04:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:11:35.715 04:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:11:35.715 04:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:11:35.715 04:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 
)) 00:11:35.715 04:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:35.715 04:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:11:35.715 04:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:35.715 04:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:35.715 04:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:35.715 04:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:11:35.974 04:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:11:35.974 04:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:11:35.974 04:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:11:35.974 04:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:35.974 04:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:35.974 04:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:11:35.974 04:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:35.974 04:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:35.974 04:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:35.974 04:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:11:36.232 04:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:11:36.232 04:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:11:36.232 04:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # 
local nbd_name=nbd4 00:11:36.232 04:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:36.232 04:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:36.232 04:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:11:36.232 04:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:36.232 04:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:36.232 04:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:36.232 04:06:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:11:36.491 04:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:11:36.491 04:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:11:36.491 04:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:11:36.491 04:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:36.491 04:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:36.491 04:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:11:36.491 04:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:36.491 04:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:36.491 04:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:36.491 04:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:11:36.749 04:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:11:36.749 04:06:45 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:11:36.749 04:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:11:36.749 04:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:36.749 04:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:36.749 04:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:11:36.749 04:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:36.749 04:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:36.749 04:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:36.749 04:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:11:37.008 04:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:11:37.008 04:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:11:37.008 04:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:11:37.008 04:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:37.008 04:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:37.008 04:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:11:37.008 04:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:37.008 04:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:37.008 04:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:37.008 04:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:11:37.267 04:06:45 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:11:37.267 04:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:11:37.267 04:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:11:37.267 04:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:37.267 04:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:37.267 04:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:11:37.267 04:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:37.267 04:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:37.267 04:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:37.267 04:06:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:11:37.526 04:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:11:37.526 04:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:11:37.526 04:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:11:37.526 04:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:37.526 04:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:37.526 04:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:11:37.527 04:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:37.527 04:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:37.527 04:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:37.527 04:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:11:37.785 04:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:11:37.785 04:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:11:37.785 04:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:11:37.785 04:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:37.785 04:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:37.785 04:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:11:37.785 04:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:37.785 04:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:37.785 04:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:37.785 04:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:11:38.044 04:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:11:38.044 04:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:11:38.044 04:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:11:38.044 04:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:38.044 04:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:38.044 04:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:11:38.044 04:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:38.044 04:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:38.044 04:06:46 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:38.044 04:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:11:38.303 04:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:11:38.303 04:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:11:38.303 04:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:11:38.303 04:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:38.303 04:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:38.303 04:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:11:38.303 04:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:38.303 04:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:38.303 04:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:38.303 04:06:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:11:38.561 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:11:38.561 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:11:38.561 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:11:38.561 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:38.561 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:38.561 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:11:38.561 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:38.561 04:06:47 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:38.561 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:38.562 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:11:38.820 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:11:38.821 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:11:38.821 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:11:38.821 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:38.821 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:38.821 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:11:38.821 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:38.821 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:38.821 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:38.821 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:11:39.080 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:11:39.080 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:11:39.080 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:11:39.080 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:39.080 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:39.080 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 
/proc/partitions 00:11:39.080 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:39.080 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:39.080 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:39.080 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:39.080 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:39.339 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:39.339 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:39.339 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:39.339 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:39.339 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:39.339 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:11:39.339 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:11:39.339 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:11:39.339 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:11:39.339 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:11:39.339 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:11:39.339 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:11:39.339 04:06:47 blockdev_general.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 
/dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:11:39.339 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:39.339 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:39.339 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:39.339 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:39.339 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:39.339 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:11:39.339 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:39.339 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:39.339 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:39.339 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:39.339 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:39.339 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:11:39.339 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:39.339 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:39.339 04:06:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:39.598 /dev/nbd0 00:11:39.598 04:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:39.598 04:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:39.598 04:06:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:11:39.598 04:06:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:39.598 04:06:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:39.598 04:06:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:39.598 04:06:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:11:39.598 04:06:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:39.598 04:06:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:39.598 04:06:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:39.598 04:06:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:39.598 1+0 records in 00:11:39.598 
1+0 records out 00:11:39.598 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026332 s, 15.6 MB/s 00:11:39.598 04:06:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:39.598 04:06:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:39.598 04:06:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:39.598 04:06:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:39.598 04:06:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:39.598 04:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:39.598 04:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:39.598 04:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:11:39.856 /dev/nbd1 00:11:39.856 04:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:39.856 04:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:39.856 04:06:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:11:39.856 04:06:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:39.856 04:06:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:39.856 04:06:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:39.856 04:06:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:11:39.856 04:06:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:39.856 04:06:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # 
(( i = 1 )) 00:11:39.856 04:06:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:39.856 04:06:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:39.856 1+0 records in 00:11:39.856 1+0 records out 00:11:39.856 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291227 s, 14.1 MB/s 00:11:39.856 04:06:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:39.856 04:06:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:39.856 04:06:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:39.856 04:06:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:39.856 04:06:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:39.856 04:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:39.856 04:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:39.856 04:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:11:40.115 /dev/nbd10 00:11:40.115 04:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:11:40.115 04:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:11:40.115 04:06:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:11:40.115 04:06:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:40.115 04:06:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:40.115 04:06:48 
blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:40.115 04:06:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:11:40.115 04:06:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:40.115 04:06:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:40.115 04:06:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:40.115 04:06:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:40.115 1+0 records in 00:11:40.115 1+0 records out 00:11:40.115 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311169 s, 13.2 MB/s 00:11:40.115 04:06:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:40.115 04:06:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:40.115 04:06:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:40.115 04:06:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:40.115 04:06:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:40.115 04:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:40.115 04:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:40.115 04:06:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11 00:11:40.375 /dev/nbd11 00:11:40.375 04:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:11:40.375 04:06:49 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:11:40.375 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:11:40.375 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:40.375 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:40.375 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:40.375 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:11:40.375 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:40.375 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:40.375 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:40.375 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:40.375 1+0 records in 00:11:40.375 1+0 records out 00:11:40.375 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000327203 s, 12.5 MB/s 00:11:40.375 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:40.375 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:40.375 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:40.375 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:40.375 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:40.375 04:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:40.375 04:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:40.375 
04:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:11:40.635 /dev/nbd12 00:11:40.635 04:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:11:40.635 04:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:11:40.635 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:11:40.635 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:40.635 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:40.635 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:40.635 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:11:40.635 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:40.635 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:40.635 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:40.635 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:40.635 1+0 records in 00:11:40.635 1+0 records out 00:11:40.635 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360918 s, 11.3 MB/s 00:11:40.635 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:40.635 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:40.635 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:40.635 04:06:49 
blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:40.635 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:40.635 04:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:40.635 04:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:40.635 04:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:11:40.894 /dev/nbd13 00:11:40.894 04:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:11:40.894 04:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:11:40.894 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:11:40.894 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:40.894 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:40.894 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:40.894 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:11:40.894 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:40.894 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:40.894 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:40.894 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:40.894 1+0 records in 00:11:40.894 1+0 records out 00:11:40.894 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000418951 s, 9.8 MB/s 00:11:40.894 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 
-- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:40.894 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:40.894 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:40.894 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:40.894 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:40.894 04:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:40.894 04:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:40.894 04:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:11:41.153 /dev/nbd14 00:11:41.153 04:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:11:41.153 04:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:11:41.153 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:11:41.153 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:41.153 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:41.153 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:41.153 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:11:41.153 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:41.153 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:41.153 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:41.153 04:06:49 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:41.153 1+0 records in 00:11:41.153 1+0 records out 00:11:41.153 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000477447 s, 8.6 MB/s 00:11:41.153 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:41.153 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:41.153 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:41.411 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:41.411 04:06:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:41.411 04:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:41.411 04:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:41.411 04:06:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:11:41.411 /dev/nbd15 00:11:41.411 04:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:11:41.669 04:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:11:41.669 04:06:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd15 00:11:41.669 04:06:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:41.669 04:06:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:41.669 04:06:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:41.669 04:06:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd15 
/proc/partitions 00:11:41.669 04:06:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:41.669 04:06:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:41.669 04:06:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:41.669 04:06:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd15 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:41.669 1+0 records in 00:11:41.669 1+0 records out 00:11:41.669 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000472361 s, 8.7 MB/s 00:11:41.669 04:06:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:41.669 04:06:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:41.669 04:06:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:41.669 04:06:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:41.669 04:06:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:41.669 04:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:41.669 04:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:41.669 04:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:11:41.669 /dev/nbd2 00:11:41.929 04:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:11:41.929 04:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:11:41.929 04:06:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:11:41.929 04:06:50 
blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:41.929 04:06:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:41.929 04:06:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:41.929 04:06:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:11:41.929 04:06:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:41.929 04:06:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:41.929 04:06:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:41.929 04:06:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:41.929 1+0 records in 00:11:41.929 1+0 records out 00:11:41.929 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000561918 s, 7.3 MB/s 00:11:41.929 04:06:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:41.929 04:06:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:41.929 04:06:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:41.929 04:06:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:41.929 04:06:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:41.929 04:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:41.929 04:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:41.929 04:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 
/dev/nbd3 00:11:42.189 /dev/nbd3 00:11:42.189 04:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:11:42.189 04:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:11:42.189 04:06:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:11:42.189 04:06:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:42.189 04:06:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:42.189 04:06:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:42.189 04:06:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:11:42.189 04:06:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:42.189 04:06:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:42.189 04:06:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:42.189 04:06:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:42.189 1+0 records in 00:11:42.189 1+0 records out 00:11:42.189 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00046646 s, 8.8 MB/s 00:11:42.189 04:06:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:42.189 04:06:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:42.189 04:06:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:42.189 04:06:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:42.189 04:06:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:42.189 
04:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:42.189 04:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:42.189 04:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:11:42.448 /dev/nbd4 00:11:42.448 04:06:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:11:42.448 04:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:11:42.448 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:11:42.448 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:42.448 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:42.448 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:42.448 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:11:42.448 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:42.448 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:42.448 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:42.448 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:42.448 1+0 records in 00:11:42.448 1+0 records out 00:11:42.448 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000593702 s, 6.9 MB/s 00:11:42.448 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:42.448 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:42.448 
04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:42.448 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:42.448 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:42.448 04:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:42.448 04:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:42.448 04:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:11:42.708 /dev/nbd5 00:11:42.708 04:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:11:42.708 04:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:11:42.708 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:11:42.708 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:42.708 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:42.708 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:42.708 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:11:42.708 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:42.708 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:42.708 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:42.708 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:42.708 1+0 records in 00:11:42.708 1+0 records 
out 00:11:42.708 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000452656 s, 9.0 MB/s 00:11:42.708 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:42.708 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:42.708 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:42.708 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:42.708 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:42.708 04:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:42.708 04:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:42.708 04:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:11:42.967 /dev/nbd6 00:11:42.967 04:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:11:42.967 04:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:11:42.967 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:11:42.967 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:42.967 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:42.967 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:42.967 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:11:42.967 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:42.967 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 
00:11:42.967 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:42.967 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:42.967 1+0 records in 00:11:42.967 1+0 records out 00:11:42.967 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000589812 s, 6.9 MB/s 00:11:42.967 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:42.967 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:42.967 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:42.967 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:42.967 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:42.967 04:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:42.967 04:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:42.967 04:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:11:43.234 /dev/nbd7 00:11:43.234 04:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:11:43.234 04:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:11:43.234 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd7 00:11:43.234 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:43.234 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:43.234 04:06:51 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:43.234 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd7 /proc/partitions 00:11:43.234 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:43.234 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:43.234 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:43.234 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd7 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:43.234 1+0 records in 00:11:43.234 1+0 records out 00:11:43.234 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000691717 s, 5.9 MB/s 00:11:43.234 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:43.234 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:43.234 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:43.234 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:43.234 04:06:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:43.234 04:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:43.234 04:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:43.234 04:06:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:11:43.234 /dev/nbd8 00:11:43.494 04:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:11:43.494 04:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 
00:11:43.494 04:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd8 00:11:43.494 04:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:43.494 04:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:43.494 04:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:43.494 04:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd8 /proc/partitions 00:11:43.494 04:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:43.494 04:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:43.494 04:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:43.494 04:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd8 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:43.494 1+0 records in 00:11:43.494 1+0 records out 00:11:43.494 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000740717 s, 5.5 MB/s 00:11:43.494 04:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:43.494 04:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:43.494 04:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:43.494 04:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:43.494 04:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:43.494 04:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:43.494 04:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:43.494 04:06:52 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:11:43.753 /dev/nbd9 00:11:43.753 04:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:11:43.753 04:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:11:43.753 04:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd9 00:11:43.753 04:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:43.753 04:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:43.753 04:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:43.754 04:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd9 /proc/partitions 00:11:43.754 04:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:43.754 04:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:43.754 04:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:43.754 04:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd9 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:43.754 1+0 records in 00:11:43.754 1+0 records out 00:11:43.754 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000649471 s, 6.3 MB/s 00:11:43.754 04:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:43.754 04:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:43.754 04:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:43.754 04:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- 
# '[' 4096 '!=' 0 ']' 00:11:43.754 04:06:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:43.754 04:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:43.754 04:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:43.754 04:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:43.754 04:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:43.754 04:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:44.013 04:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:44.013 { 00:11:44.013 "nbd_device": "/dev/nbd0", 00:11:44.013 "bdev_name": "Malloc0" 00:11:44.013 }, 00:11:44.013 { 00:11:44.013 "nbd_device": "/dev/nbd1", 00:11:44.013 "bdev_name": "Malloc1p0" 00:11:44.013 }, 00:11:44.013 { 00:11:44.013 "nbd_device": "/dev/nbd10", 00:11:44.013 "bdev_name": "Malloc1p1" 00:11:44.013 }, 00:11:44.013 { 00:11:44.013 "nbd_device": "/dev/nbd11", 00:11:44.013 "bdev_name": "Malloc2p0" 00:11:44.013 }, 00:11:44.013 { 00:11:44.013 "nbd_device": "/dev/nbd12", 00:11:44.013 "bdev_name": "Malloc2p1" 00:11:44.013 }, 00:11:44.013 { 00:11:44.013 "nbd_device": "/dev/nbd13", 00:11:44.013 "bdev_name": "Malloc2p2" 00:11:44.013 }, 00:11:44.013 { 00:11:44.013 "nbd_device": "/dev/nbd14", 00:11:44.013 "bdev_name": "Malloc2p3" 00:11:44.013 }, 00:11:44.013 { 00:11:44.013 "nbd_device": "/dev/nbd15", 00:11:44.013 "bdev_name": "Malloc2p4" 00:11:44.013 }, 00:11:44.013 { 00:11:44.013 "nbd_device": "/dev/nbd2", 00:11:44.013 "bdev_name": "Malloc2p5" 00:11:44.013 }, 00:11:44.013 { 00:11:44.013 "nbd_device": "/dev/nbd3", 00:11:44.013 "bdev_name": "Malloc2p6" 00:11:44.013 }, 00:11:44.013 { 00:11:44.013 "nbd_device": "/dev/nbd4", 00:11:44.013 "bdev_name": "Malloc2p7" 
00:11:44.013 }, 00:11:44.013 { 00:11:44.013 "nbd_device": "/dev/nbd5", 00:11:44.013 "bdev_name": "TestPT" 00:11:44.013 }, 00:11:44.013 { 00:11:44.013 "nbd_device": "/dev/nbd6", 00:11:44.013 "bdev_name": "raid0" 00:11:44.013 }, 00:11:44.013 { 00:11:44.013 "nbd_device": "/dev/nbd7", 00:11:44.013 "bdev_name": "concat0" 00:11:44.013 }, 00:11:44.013 { 00:11:44.013 "nbd_device": "/dev/nbd8", 00:11:44.013 "bdev_name": "raid1" 00:11:44.013 }, 00:11:44.013 { 00:11:44.013 "nbd_device": "/dev/nbd9", 00:11:44.013 "bdev_name": "AIO0" 00:11:44.013 } 00:11:44.013 ]' 00:11:44.013 04:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:44.014 { 00:11:44.014 "nbd_device": "/dev/nbd0", 00:11:44.014 "bdev_name": "Malloc0" 00:11:44.014 }, 00:11:44.014 { 00:11:44.014 "nbd_device": "/dev/nbd1", 00:11:44.014 "bdev_name": "Malloc1p0" 00:11:44.014 }, 00:11:44.014 { 00:11:44.014 "nbd_device": "/dev/nbd10", 00:11:44.014 "bdev_name": "Malloc1p1" 00:11:44.014 }, 00:11:44.014 { 00:11:44.014 "nbd_device": "/dev/nbd11", 00:11:44.014 "bdev_name": "Malloc2p0" 00:11:44.014 }, 00:11:44.014 { 00:11:44.014 "nbd_device": "/dev/nbd12", 00:11:44.014 "bdev_name": "Malloc2p1" 00:11:44.014 }, 00:11:44.014 { 00:11:44.014 "nbd_device": "/dev/nbd13", 00:11:44.014 "bdev_name": "Malloc2p2" 00:11:44.014 }, 00:11:44.014 { 00:11:44.014 "nbd_device": "/dev/nbd14", 00:11:44.014 "bdev_name": "Malloc2p3" 00:11:44.014 }, 00:11:44.014 { 00:11:44.014 "nbd_device": "/dev/nbd15", 00:11:44.014 "bdev_name": "Malloc2p4" 00:11:44.014 }, 00:11:44.014 { 00:11:44.014 "nbd_device": "/dev/nbd2", 00:11:44.014 "bdev_name": "Malloc2p5" 00:11:44.014 }, 00:11:44.014 { 00:11:44.014 "nbd_device": "/dev/nbd3", 00:11:44.014 "bdev_name": "Malloc2p6" 00:11:44.014 }, 00:11:44.014 { 00:11:44.014 "nbd_device": "/dev/nbd4", 00:11:44.014 "bdev_name": "Malloc2p7" 00:11:44.014 }, 00:11:44.014 { 00:11:44.014 "nbd_device": "/dev/nbd5", 00:11:44.014 "bdev_name": "TestPT" 00:11:44.014 }, 00:11:44.014 { 00:11:44.014 "nbd_device": 
"/dev/nbd6", 00:11:44.014 "bdev_name": "raid0" 00:11:44.014 }, 00:11:44.014 { 00:11:44.014 "nbd_device": "/dev/nbd7", 00:11:44.014 "bdev_name": "concat0" 00:11:44.014 }, 00:11:44.014 { 00:11:44.014 "nbd_device": "/dev/nbd8", 00:11:44.014 "bdev_name": "raid1" 00:11:44.014 }, 00:11:44.014 { 00:11:44.014 "nbd_device": "/dev/nbd9", 00:11:44.014 "bdev_name": "AIO0" 00:11:44.014 } 00:11:44.014 ]' 00:11:44.014 04:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:44.014 04:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:44.014 /dev/nbd1 00:11:44.014 /dev/nbd10 00:11:44.014 /dev/nbd11 00:11:44.014 /dev/nbd12 00:11:44.014 /dev/nbd13 00:11:44.014 /dev/nbd14 00:11:44.014 /dev/nbd15 00:11:44.014 /dev/nbd2 00:11:44.014 /dev/nbd3 00:11:44.014 /dev/nbd4 00:11:44.014 /dev/nbd5 00:11:44.014 /dev/nbd6 00:11:44.014 /dev/nbd7 00:11:44.014 /dev/nbd8 00:11:44.014 /dev/nbd9' 00:11:44.014 04:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:44.014 04:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:44.014 /dev/nbd1 00:11:44.014 /dev/nbd10 00:11:44.014 /dev/nbd11 00:11:44.014 /dev/nbd12 00:11:44.014 /dev/nbd13 00:11:44.014 /dev/nbd14 00:11:44.014 /dev/nbd15 00:11:44.014 /dev/nbd2 00:11:44.014 /dev/nbd3 00:11:44.014 /dev/nbd4 00:11:44.014 /dev/nbd5 00:11:44.014 /dev/nbd6 00:11:44.014 /dev/nbd7 00:11:44.014 /dev/nbd8 00:11:44.014 /dev/nbd9' 00:11:44.014 04:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=16 00:11:44.014 04:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 16 00:11:44.014 04:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=16 00:11:44.014 04:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:11:44.014 04:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 
/dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:11:44.014 04:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:44.014 04:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:44.014 04:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:44.014 04:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:11:44.014 04:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:44.014 04:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:11:44.014 256+0 records in 00:11:44.014 256+0 records out 00:11:44.014 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113717 s, 92.2 MB/s 00:11:44.014 04:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:44.014 04:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:44.014 256+0 records in 00:11:44.014 256+0 records out 00:11:44.014 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147922 s, 7.1 MB/s 00:11:44.014 04:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:44.014 04:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:44.273 256+0 records in 00:11:44.273 256+0 records out 
00:11:44.273 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.159061 s, 6.6 MB/s 00:11:44.273 04:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:44.273 04:06:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:11:44.612 256+0 records in 00:11:44.612 256+0 records out 00:11:44.612 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.136359 s, 7.7 MB/s 00:11:44.612 04:06:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:44.612 04:06:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:11:44.612 256+0 records in 00:11:44.612 256+0 records out 00:11:44.612 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.1778 s, 5.9 MB/s 00:11:44.612 04:06:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:44.612 04:06:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:11:44.879 256+0 records in 00:11:44.879 256+0 records out 00:11:44.879 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.138941 s, 7.5 MB/s 00:11:44.879 04:06:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:44.879 04:06:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:11:44.879 256+0 records in 00:11:44.879 256+0 records out 00:11:44.879 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.101582 s, 10.3 MB/s 00:11:44.879 04:06:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:44.879 04:06:53 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:11:44.879 256+0 records in 00:11:44.879 256+0 records out 00:11:44.880 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.115488 s, 9.1 MB/s 00:11:44.880 04:06:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:44.880 04:06:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:11:45.139 256+0 records in 00:11:45.139 256+0 records out 00:11:45.139 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.177446 s, 5.9 MB/s 00:11:45.139 04:06:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:45.139 04:06:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:11:45.139 256+0 records in 00:11:45.139 256+0 records out 00:11:45.139 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.108959 s, 9.6 MB/s 00:11:45.139 04:06:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:45.139 04:06:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:11:45.398 256+0 records in 00:11:45.398 256+0 records out 00:11:45.398 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.177468 s, 5.9 MB/s 00:11:45.398 04:06:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:45.398 04:06:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:11:45.657 256+0 records in 00:11:45.657 256+0 records out 00:11:45.657 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.112246 s, 9.3 MB/s 00:11:45.657 04:06:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:45.657 04:06:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:11:45.657 256+0 records in 00:11:45.657 256+0 records out 00:11:45.657 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.177054 s, 5.9 MB/s 00:11:45.657 04:06:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:45.657 04:06:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:11:45.916 256+0 records in 00:11:45.916 256+0 records out 00:11:45.916 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.096087 s, 10.9 MB/s 00:11:45.916 04:06:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:45.916 04:06:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:11:45.916 256+0 records in 00:11:45.916 256+0 records out 00:11:45.916 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.180319 s, 5.8 MB/s 00:11:45.916 04:06:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:45.916 04:06:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:11:46.176 256+0 records in 00:11:46.176 256+0 records out 00:11:46.176 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.10286 s, 10.2 MB/s 00:11:46.176 04:06:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:46.176 04:06:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:11:46.436 256+0 records in 00:11:46.436 256+0 records out 00:11:46.436 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.17651 s, 5.9 MB/s 00:11:46.436 04:06:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:11:46.436 04:06:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:46.436 04:06:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:46.436 04:06:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:46.436 04:06:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:11:46.436 04:06:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:46.436 04:06:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:46.436 04:06:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:46.436 04:06:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:11:46.436 04:06:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:46.436 04:06:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:11:46.436 04:06:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # 
for i in "${nbd_list[@]}" 00:11:46.436 04:06:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd10 00:11:46.436 04:06:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:46.436 04:06:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd11 00:11:46.436 04:06:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:46.436 04:06:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd12 00:11:46.436 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:46.436 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd13 00:11:46.436 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:46.436 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd14 00:11:46.436 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:46.436 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd15 00:11:46.436 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:46.436 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd2 00:11:46.436 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:46.436 04:06:55 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd3 00:11:46.436 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:46.436 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd4 00:11:46.436 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:46.436 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd5 00:11:46.436 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:46.436 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd6 00:11:46.436 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:46.436 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd7 00:11:46.436 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:46.436 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd8 00:11:46.436 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:46.436 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd9 00:11:46.436 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:11:46.436 04:06:55 blockdev_general.bdev_nbd 
-- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:11:46.436 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:46.436 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:46.436 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:46.436 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:46.436 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:46.436 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:46.696 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:46.696 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:46.696 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:46.696 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:46.696 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:46.696 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:46.696 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:46.696 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:46.696 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:46.696 04:06:55 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:46.955 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:46.955 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:46.955 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:46.955 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:46.955 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:46.955 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:46.955 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:46.955 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:46.955 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:46.955 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:11:47.214 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:11:47.214 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:11:47.214 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:11:47.214 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:47.214 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:47.214 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:11:47.214 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:47.214 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:47.214 
04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:47.214 04:06:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:11:47.473 04:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:11:47.473 04:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:11:47.473 04:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:11:47.473 04:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:47.473 04:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:47.473 04:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:11:47.473 04:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:47.473 04:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:47.473 04:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:47.473 04:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:11:47.733 04:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:11:47.733 04:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:11:47.733 04:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:11:47.733 04:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:47.733 04:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:47.733 04:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:11:47.733 04:06:56 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@41 -- # break 00:11:47.733 04:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:47.733 04:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:47.733 04:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:11:47.992 04:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:11:47.992 04:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:11:47.992 04:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:11:47.992 04:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:47.992 04:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:47.992 04:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:11:47.992 04:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:47.992 04:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:47.992 04:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:47.992 04:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:11:48.251 04:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:11:48.251 04:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:11:48.251 04:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:11:48.251 04:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:48.251 04:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:48.251 04:06:56 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:11:48.251 04:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:48.251 04:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:48.251 04:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:48.251 04:06:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:11:48.511 04:06:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:11:48.511 04:06:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:11:48.511 04:06:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:11:48.511 04:06:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:48.511 04:06:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:48.511 04:06:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:11:48.511 04:06:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:48.511 04:06:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:48.511 04:06:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:48.511 04:06:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:11:48.770 04:06:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:11:48.770 04:06:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:11:48.770 04:06:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:11:48.770 04:06:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
00:11:48.770 04:06:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:48.770 04:06:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:11:48.770 04:06:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:48.770 04:06:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:48.770 04:06:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:48.770 04:06:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:11:49.029 04:06:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:11:49.029 04:06:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:11:49.029 04:06:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:11:49.029 04:06:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:49.029 04:06:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:49.029 04:06:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:11:49.029 04:06:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:49.029 04:06:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:49.029 04:06:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:49.029 04:06:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:11:49.289 04:06:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:11:49.289 04:06:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:11:49.289 04:06:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # 
local nbd_name=nbd4 00:11:49.289 04:06:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:49.289 04:06:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:49.289 04:06:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:11:49.289 04:06:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:49.289 04:06:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:49.289 04:06:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:49.289 04:06:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:11:49.548 04:06:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:11:49.548 04:06:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:11:49.548 04:06:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:11:49.548 04:06:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:49.548 04:06:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:49.548 04:06:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:11:49.548 04:06:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:49.548 04:06:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:49.548 04:06:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:49.548 04:06:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:11:49.808 04:06:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:11:49.808 04:06:58 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:11:49.808 04:06:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:11:49.808 04:06:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:49.808 04:06:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:49.808 04:06:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:11:49.808 04:06:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:49.808 04:06:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:49.808 04:06:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:49.808 04:06:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:11:49.808 04:06:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:11:49.808 04:06:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:11:49.808 04:06:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:11:49.808 04:06:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:49.808 04:06:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:49.808 04:06:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:11:50.067 04:06:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:50.067 04:06:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:50.067 04:06:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:50.067 04:06:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:11:50.067 04:06:58 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:11:50.067 04:06:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:11:50.067 04:06:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:11:50.067 04:06:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:50.067 04:06:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:50.067 04:06:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:11:50.067 04:06:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:50.067 04:06:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:50.067 04:06:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:50.067 04:06:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:11:50.327 04:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:11:50.327 04:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:11:50.327 04:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:11:50.327 04:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:50.327 04:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:50.327 04:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:11:50.327 04:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:50.327 04:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:50.327 04:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:50.327 04:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:11:50.327 04:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:50.586 04:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:50.586 04:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:50.586 04:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:50.586 04:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:50.586 04:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:11:50.586 04:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:50.586 04:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:11:50.586 04:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:11:50.586 04:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:11:50.586 04:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:11:50.586 04:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:50.586 04:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:11:50.586 04:06:59 blockdev_general.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:11:50.586 04:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:50.586 04:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 
00:11:50.586 04:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:11:50.586 04:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:11:50.586 04:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:11:50.846 malloc_lvol_verify 00:11:50.846 04:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:11:51.105 8ffa21eb-256e-40b9-beac-f1ec815d84e5 00:11:51.105 04:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:11:51.365 2db5ab25-138f-41d0-b70d-06bd03690504 00:11:51.365 04:06:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:11:51.365 /dev/nbd0 00:11:51.365 04:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:11:51.365 mke2fs 1.46.5 (30-Dec-2021) 00:11:51.365 Discarding device blocks: 0/4096 done 00:11:51.365 Creating filesystem with 4096 1k blocks and 1024 inodes 00:11:51.365 00:11:51.365 Allocating group tables: 0/1 done 00:11:51.365 Writing inode tables: 0/1 done 00:11:51.365 Creating journal (1024 blocks): done 00:11:51.365 Writing superblocks and filesystem accounting information: 0/1 done 00:11:51.365 00:11:51.624 04:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:11:51.624 04:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:11:51.624 04:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:11:51.624 04:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:51.624 04:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:51.624 04:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:51.624 04:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:51.624 04:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:51.624 04:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:51.624 04:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:51.624 04:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:51.624 04:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:51.624 04:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:51.624 04:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:51.624 04:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:51.624 04:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:51.624 04:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:11:51.624 04:07:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:11:51.624 04:07:00 blockdev_general.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 2584540 00:11:51.624 04:07:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 2584540 ']' 00:11:51.624 04:07:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 2584540 00:11:51.624 04:07:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:11:51.624 04:07:00 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:51.624 04:07:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2584540 00:11:51.883 04:07:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:51.883 04:07:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:51.883 04:07:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2584540' 00:11:51.883 killing process with pid 2584540 00:11:51.883 04:07:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@967 -- # kill 2584540 00:11:51.883 04:07:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@972 -- # wait 2584540 00:11:55.174 04:07:03 blockdev_general.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:11:55.174 00:11:55.174 real 0m26.322s 00:11:55.174 user 0m31.783s 00:11:55.174 sys 0m13.104s 00:11:55.174 04:07:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:55.174 04:07:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:11:55.174 ************************************ 00:11:55.174 END TEST bdev_nbd 00:11:55.174 ************************************ 00:11:55.174 04:07:03 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:11:55.174 04:07:03 blockdev_general -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:11:55.174 04:07:03 blockdev_general -- bdev/blockdev.sh@763 -- # '[' bdev = nvme ']' 00:11:55.174 04:07:03 blockdev_general -- bdev/blockdev.sh@763 -- # '[' bdev = gpt ']' 00:11:55.174 04:07:03 blockdev_general -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:11:55.174 04:07:03 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:55.174 04:07:03 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:55.174 04:07:03 blockdev_general -- common/autotest_common.sh@10 -- 
# set +x 00:11:55.174 ************************************ 00:11:55.174 START TEST bdev_fio 00:11:55.174 ************************************ 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:11:55.174 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:11:55.174 04:07:03 
blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc0]' 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc0 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc1p0]' 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc1p0 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc1p1]' 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo 
filename=Malloc1p1 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p0]' 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p0 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p1]' 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p1 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p2]' 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p2 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p3]' 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p3 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p4]' 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p4 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p5]' 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p5 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:55.174 04:07:03 
blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p6]' 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p6 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p7]' 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p7 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_TestPT]' 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=TestPT 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid0]' 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid0 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_concat0]' 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=concat0 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid1]' 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid1 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_AIO0]' 00:11:55.174 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=AIO0 00:11:55.175 
04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:11:55.175 04:07:03 blockdev_general.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:11:55.175 04:07:03 blockdev_general.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:11:55.175 04:07:03 blockdev_general.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:55.175 04:07:03 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:11:55.175 ************************************ 00:11:55.175 START TEST bdev_fio_rw_verify 00:11:55.175 ************************************ 00:11:55.175 04:07:03 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:11:55.175 04:07:03 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:11:55.175 04:07:03 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:11:55.175 04:07:03 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:55.175 04:07:03 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:11:55.175 04:07:03 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:11:55.175 04:07:03 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:11:55.175 04:07:03 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:11:55.175 04:07:03 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:11:55.175 04:07:03 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:11:55.175 04:07:03 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:11:55.175 04:07:03 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:11:55.175 04:07:03 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:55.175 04:07:03 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:55.175 04:07:03 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:11:55.175 04:07:03 
blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:11:55.175 04:07:03 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:11:55.742 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:55.742 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:55.742 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:55.742 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:55.742 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:55.742 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:55.742 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:55.742 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:55.742 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:55.742 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:55.742 job_Malloc2p7: (g=0): rw=randwrite, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:55.742 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:55.742 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:55.742 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:55.742 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:55.742 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:55.742 fio-3.35 00:11:55.742 Starting 16 threads 00:11:55.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.743 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:55.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.743 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:55.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.743 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:55.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.743 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:55.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.743 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:55.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.743 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:55.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.743 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:55.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.743 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:55.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:11:55.743 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:55.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.743 EAL: Requested device 0000:3d:02.1 cannot be used 00:11:55.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.743 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:55.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.743 EAL: Requested device 0000:3d:02.3 cannot be used 00:11:55.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.743 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:55.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.743 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:55.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.743 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:55.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.743 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:55.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.743 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:55.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.743 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:55.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.743 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:55.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.743 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:55.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.743 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:55.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.743 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:55.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.743 EAL: 
Requested device 0000:3f:01.6 cannot be used 00:11:55.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.743 EAL: Requested device 0000:3f:01.7 cannot be used 00:11:55.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.743 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:55.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.743 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:55.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.743 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:55.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.743 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:55.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.743 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:55.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.743 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:55.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.743 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:55.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.743 EAL: Requested device 0000:3f:02.7 cannot be used 00:12:07.955 00:12:07.955 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=2590142: Tue Jul 23 04:07:16 2024 00:12:07.955 read: IOPS=84.2k, BW=329MiB/s (345MB/s)(3289MiB/10001msec) 00:12:07.955 slat (usec): min=3, max=446, avg=39.26, stdev=15.34 00:12:07.955 clat (usec): min=13, max=1280, avg=310.59, stdev=137.45 00:12:07.955 lat (usec): min=28, max=1370, avg=349.85, stdev=144.84 00:12:07.955 clat percentiles (usec): 00:12:07.955 | 50.000th=[ 306], 99.000th=[ 611], 99.900th=[ 775], 99.990th=[ 889], 00:12:07.955 | 99.999th=[ 1139] 00:12:07.955 write: IOPS=130k, BW=509MiB/s (534MB/s)(5029MiB/9880msec); 0 zone resets 00:12:07.955 slat (usec): min=9, max=461, avg=53.99, stdev=17.29 
00:12:07.955 clat (usec): min=15, max=1550, avg=370.26, stdev=167.46 00:12:07.955 lat (usec): min=38, max=1671, avg=424.25, stdev=175.71 00:12:07.955 clat percentiles (usec): 00:12:07.955 | 50.000th=[ 355], 99.000th=[ 857], 99.900th=[ 979], 99.990th=[ 1074], 00:12:07.955 | 99.999th=[ 1336] 00:12:07.955 bw ( KiB/s): min=471672, max=577707, per=98.84%, avg=515169.00, stdev=1903.70, samples=304 00:12:07.955 iops : min=117920, max=144425, avg=128792.16, stdev=475.90, samples=304 00:12:07.955 lat (usec) : 20=0.01%, 50=0.27%, 100=3.11%, 250=28.34%, 500=50.77% 00:12:07.955 lat (usec) : 750=16.09%, 1000=1.38% 00:12:07.955 lat (msec) : 2=0.04% 00:12:07.955 cpu : usr=98.58%, sys=0.74%, ctx=690, majf=0, minf=106504 00:12:07.955 IO depths : 1=12.4%, 2=24.8%, 4=50.2%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:07.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.955 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.955 issued rwts: total=842092,1287441,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:07.955 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:07.955 00:12:07.955 Run status group 0 (all jobs): 00:12:07.955 READ: bw=329MiB/s (345MB/s), 329MiB/s-329MiB/s (345MB/s-345MB/s), io=3289MiB (3449MB), run=10001-10001msec 00:12:07.955 WRITE: bw=509MiB/s (534MB/s), 509MiB/s-509MiB/s (534MB/s-534MB/s), io=5029MiB (5273MB), run=9880-9880msec 00:12:10.492 ----------------------------------------------------- 00:12:10.492 Suppressions used: 00:12:10.492 count bytes template 00:12:10.492 16 140 /usr/src/fio/parse.c 00:12:10.492 11105 1066080 /usr/src/fio/iolog.c 00:12:10.492 1 8 libtcmalloc_minimal.so 00:12:10.492 1 904 libcrypto.so 00:12:10.492 ----------------------------------------------------- 00:12:10.492 00:12:10.812 00:12:10.812 real 0m15.569s 00:12:10.812 user 2m55.845s 00:12:10.812 sys 0m2.835s 00:12:10.812 04:07:19 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:12:10.812 04:07:19 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:12:10.812 ************************************ 00:12:10.812 END TEST bdev_fio_rw_verify 00:12:10.812 ************************************ 00:12:10.812 04:07:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:12:10.812 04:07:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:12:10.812 04:07:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:12:10.812 04:07:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:12:10.812 04:07:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:12:10.812 04:07:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:12:10.812 04:07:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:12:10.812 04:07:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:12:10.812 04:07:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:12:10.812 04:07:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:12:10.812 04:07:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:12:10.812 04:07:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:12:10.812 04:07:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:12:10.812 04:07:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 
00:12:10.812 04:07:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:12:10.812 04:07:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:12:10.812 04:07:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:12:10.812 04:07:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:10.814 04:07:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "a6617569-84d5-45bd-bd97-0f2cbb1fd7df"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "a6617569-84d5-45bd-bd97-0f2cbb1fd7df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "ff8c56ef-be1d-56bf-8d0c-3a3077549f3b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "ff8c56ef-be1d-56bf-8d0c-3a3077549f3b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": 
false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "442dcccd-9e1e-5ef1-86af-82bd1e0b4b0a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "442dcccd-9e1e-5ef1-86af-82bd1e0b4b0a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "a1c40ad0-c278-5297-8775-ffc59fffdfed"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a1c40ad0-c278-5297-8775-ffc59fffdfed",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": 
{' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "6b70bb0b-3eec-5294-9171-03958ab6d4f1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "6b70bb0b-3eec-5294-9171-03958ab6d4f1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "11a618e7-00ba-5905-bb2a-3b8dc27a3d88"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "11a618e7-00ba-5905-bb2a-3b8dc27a3d88",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": 
true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "ead308d9-10cd-565f-95bb-898c5663f021"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ead308d9-10cd-565f-95bb-898c5663f021",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "626b387c-4402-502e-8f0d-1b46fabe8854"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "626b387c-4402-502e-8f0d-1b46fabe8854",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' 
"nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "63ca1192-e801-527d-8cc1-e150f0225923"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "63ca1192-e801-527d-8cc1-e150f0225923",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "2ea1142e-fd93-5f4a-9599-d7c3aa53e89e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2ea1142e-fd93-5f4a-9599-d7c3aa53e89e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' 
"nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "514a7de2-9da9-5fee-8022-70240626f462"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "514a7de2-9da9-5fee-8022-70240626f462",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "b48d8bc4-fb76-56d9-87a9-5f46ba22eaaa"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "b48d8bc4-fb76-56d9-87a9-5f46ba22eaaa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' 
"zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "10b9ee6c-517a-4142-a4e6-296369e49abb"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "10b9ee6c-517a-4142-a4e6-296369e49abb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "10b9ee6c-517a-4142-a4e6-296369e49abb",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' 
"num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "616ce074-9819-4024-b9b0-95bb909a8c0e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "827ff825-4234-43ad-860d-1fddf3a09830",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "58e522e2-3ab0-42d9-b191-fc6e6c42a95e"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "58e522e2-3ab0-42d9-b191-fc6e6c42a95e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "58e522e2-3ab0-42d9-b191-fc6e6c42a95e",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "209b6700-155b-4a1d-ae1f-bdced187a5f4",' ' "is_configured": true,' ' "data_offset": 
0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "d12133f4-137b-402a-8ce9-8f557dc43832",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "e886e37d-a8c2-4f10-ac11-ad07863b5133"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "e886e37d-a8c2-4f10-ac11-ad07863b5133",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "e886e37d-a8c2-4f10-ac11-ad07863b5133",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "37309667-c327-49e0-90c4-3373c3d4bc59",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "3adfb512-b8df-4e4a-b6f4-a372fba4fb13",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' 
}' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "47a5aef7-b19f-4cf3-b2d5-754ab9569a47"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "47a5aef7-b19f-4cf3-b2d5-754ab9569a47",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:12:10.814 04:07:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n Malloc0 00:12:10.814 Malloc1p0 00:12:10.814 Malloc1p1 00:12:10.814 Malloc2p0 00:12:10.814 Malloc2p1 00:12:10.814 Malloc2p2 00:12:10.814 Malloc2p3 00:12:10.814 Malloc2p4 00:12:10.814 Malloc2p5 00:12:10.814 Malloc2p6 00:12:10.814 Malloc2p7 00:12:10.814 TestPT 00:12:10.814 raid0 00:12:10.814 concat0 ]] 00:12:10.814 04:07:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:10.815 04:07:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "a6617569-84d5-45bd-bd97-0f2cbb1fd7df"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "a6617569-84d5-45bd-bd97-0f2cbb1fd7df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' 
"rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "ff8c56ef-be1d-56bf-8d0c-3a3077549f3b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "ff8c56ef-be1d-56bf-8d0c-3a3077549f3b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "442dcccd-9e1e-5ef1-86af-82bd1e0b4b0a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "442dcccd-9e1e-5ef1-86af-82bd1e0b4b0a",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "a1c40ad0-c278-5297-8775-ffc59fffdfed"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a1c40ad0-c278-5297-8775-ffc59fffdfed",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "6b70bb0b-3eec-5294-9171-03958ab6d4f1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "6b70bb0b-3eec-5294-9171-03958ab6d4f1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' 
"rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "11a618e7-00ba-5905-bb2a-3b8dc27a3d88"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "11a618e7-00ba-5905-bb2a-3b8dc27a3d88",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "ead308d9-10cd-565f-95bb-898c5663f021"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ead308d9-10cd-565f-95bb-898c5663f021",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' 
' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "626b387c-4402-502e-8f0d-1b46fabe8854"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "626b387c-4402-502e-8f0d-1b46fabe8854",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "63ca1192-e801-527d-8cc1-e150f0225923"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "63ca1192-e801-527d-8cc1-e150f0225923",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": 
false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "2ea1142e-fd93-5f4a-9599-d7c3aa53e89e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2ea1142e-fd93-5f4a-9599-d7c3aa53e89e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "514a7de2-9da9-5fee-8022-70240626f462"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "514a7de2-9da9-5fee-8022-70240626f462",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "b48d8bc4-fb76-56d9-87a9-5f46ba22eaaa"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "b48d8bc4-fb76-56d9-87a9-5f46ba22eaaa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "10b9ee6c-517a-4142-a4e6-296369e49abb"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "10b9ee6c-517a-4142-a4e6-296369e49abb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' 
"rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "10b9ee6c-517a-4142-a4e6-296369e49abb",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "616ce074-9819-4024-b9b0-95bb909a8c0e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "827ff825-4234-43ad-860d-1fddf3a09830",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "58e522e2-3ab0-42d9-b191-fc6e6c42a95e"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "58e522e2-3ab0-42d9-b191-fc6e6c42a95e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": 
true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "58e522e2-3ab0-42d9-b191-fc6e6c42a95e",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "209b6700-155b-4a1d-ae1f-bdced187a5f4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "d12133f4-137b-402a-8ce9-8f557dc43832",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "e886e37d-a8c2-4f10-ac11-ad07863b5133"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "e886e37d-a8c2-4f10-ac11-ad07863b5133",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' 
"get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "e886e37d-a8c2-4f10-ac11-ad07863b5133",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "37309667-c327-49e0-90c4-3373c3d4bc59",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "3adfb512-b8df-4e4a-b6f4-a372fba4fb13",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "47a5aef7-b19f-4cf3-b2d5-754ab9569a47"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "47a5aef7-b19f-4cf3-b2d5-754ab9569a47",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": 
false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:12:10.815 04:07:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:10.815 04:07:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc0]' 00:12:10.815 04:07:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc0 00:12:10.815 04:07:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:10.815 04:07:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc1p0]' 00:12:10.815 04:07:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc1p0 00:12:10.815 04:07:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:10.815 04:07:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc1p1]' 00:12:10.815 04:07:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc1p1 00:12:10.815 04:07:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:10.815 04:07:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p0]' 00:12:10.815 04:07:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p0 00:12:10.815 04:07:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:10.815 04:07:19 blockdev_general.bdev_fio 
-- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p1]' 00:12:10.815 04:07:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p1 00:12:10.815 04:07:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:10.815 04:07:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p2]' 00:12:10.815 04:07:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p2 00:12:10.815 04:07:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:10.815 04:07:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p3]' 00:12:10.815 04:07:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p3 00:12:10.815 04:07:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:10.815 04:07:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p4]' 00:12:10.815 04:07:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p4 00:12:10.815 04:07:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:10.815 04:07:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p5]' 00:12:10.815 04:07:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p5 00:12:10.815 04:07:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:10.815 04:07:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p6]' 00:12:10.815 04:07:19 blockdev_general.bdev_fio -- 
bdev/blockdev.sh@357 -- # echo filename=Malloc2p6 00:12:10.815 04:07:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:10.815 04:07:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p7]' 00:12:10.815 04:07:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p7 00:12:10.815 04:07:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:10.815 04:07:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_TestPT]' 00:12:10.815 04:07:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=TestPT 00:12:10.815 04:07:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:10.815 04:07:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_raid0]' 00:12:10.815 04:07:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=raid0 00:12:10.815 04:07:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:10.815 04:07:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_concat0]' 00:12:10.815 04:07:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=concat0 00:12:10.815 04:07:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:12:10.815 04:07:19 
blockdev_general.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:12:10.815 04:07:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:10.815 04:07:19 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:12:10.815 ************************************ 00:12:10.815 START TEST bdev_fio_trim 00:12:10.816 ************************************ 00:12:10.816 04:07:19 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:12:10.816 04:07:19 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:12:10.816 04:07:19 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:12:10.816 04:07:19 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:10.816 04:07:19 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:12:10.816 04:07:19 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:12:10.816 04:07:19 blockdev_general.bdev_fio.bdev_fio_trim -- 
common/autotest_common.sh@1341 -- # shift 00:12:10.816 04:07:19 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:12:10.816 04:07:19 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:12:10.816 04:07:19 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:12:10.816 04:07:19 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:12:10.816 04:07:19 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:12:11.100 04:07:19 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:11.100 04:07:19 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:11.100 04:07:19 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1347 -- # break 00:12:11.100 04:07:19 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:12:11.100 04:07:19 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:12:11.363 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:11.363 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:11.363 
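The trace above shows the harness locating the ASan runtime the fio plugin was built against and placing it first in `LD_PRELOAD` before launching fio. A dependency-free sketch of that step, using a hypothetical plugin path in place of the real `build/fio/spdk_bdev` binary (the actual logic lives in common/autotest_common.sh):

```shell
# Sketch of the sanitizer-preload step traced above: ldd resolves which
# libasan the plugin links against, and LD_PRELOAD loads it ahead of the
# plugin. The plugin path here is a hypothetical stand-in.
plugin=/path/to/spdk_bdev

# ldd lines look like "libasan.so.8 => /usr/lib64/libasan.so.8 (...)";
# field 3 is the resolved path. A missing binary yields an empty string.
asan_lib=$(ldd "$plugin" 2>/dev/null | grep libasan | awk '{print $3}')

if [ -n "$asan_lib" ]; then
    preload="$asan_lib $plugin"   # sanitizer runtime must be loaded first
else
    preload="$plugin"
fi
echo "LD_PRELOAD=$preload"
```

The ordering matters: if the ASan runtime is not the first object the dynamic loader maps, instrumented code in the plugin aborts at startup.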
job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:11.363 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:11.363 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:11.363 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:11.363 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:11.363 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:11.363 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:11.363 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:11.363 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:11.363 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:11.363 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:11.363 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:11.363 fio-3.35 00:12:11.363 Starting 14 threads 00:12:11.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.622 EAL: Requested device 0000:3d:01.0 cannot be used 00:12:11.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.622 EAL: Requested device 0000:3d:01.1 cannot be used 00:12:11.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 
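The per-bdev job sections fio lists above were produced by the generation loop traced earlier (bdev/blockdev.sh@355-357), which emits a `[job_<name>]`/`filename=<name>` pair for every bdev whose `supported_io_types.unmap` is true. A dependency-free sketch of that loop, with a fixed name list standing in for the real `jq -r 'select(.supported_io_types.unmap == true) | .name'` filter over the RPC JSON:

```shell
# Sketch only: in the real script the names come from a jq filter over
# bdev_get_bdevs output; this fixed list is a stand-in.
unmap_bdevs="Malloc0 Malloc1p0 Malloc1p1 raid0 concat0"

gen_trim_jobs() {
    for b in $unmap_bdevs; do
        echo "[job_$b]"       # one fio job section per trim-capable bdev
        echo "filename=$b"
    done
}

gen_trim_jobs
```

Appending these sections to the shared header in bdev.fio is what gives fio one thread per bdev, matching the "Starting 14 threads" line above.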
00:12:11.622 EAL: Requested device 0000:3d:01.2 cannot be used 00:12:11.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.622 EAL: Requested device 0000:3d:01.3 cannot be used 00:12:11.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.622 EAL: Requested device 0000:3d:01.4 cannot be used 00:12:11.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.622 EAL: Requested device 0000:3d:01.5 cannot be used 00:12:11.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.622 EAL: Requested device 0000:3d:01.6 cannot be used 00:12:11.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.622 EAL: Requested device 0000:3d:01.7 cannot be used 00:12:11.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.622 EAL: Requested device 0000:3d:02.0 cannot be used 00:12:11.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.622 EAL: Requested device 0000:3d:02.1 cannot be used 00:12:11.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.622 EAL: Requested device 0000:3d:02.2 cannot be used 00:12:11.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.622 EAL: Requested device 0000:3d:02.3 cannot be used 00:12:11.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.622 EAL: Requested device 0000:3d:02.4 cannot be used 00:12:11.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.622 EAL: Requested device 0000:3d:02.5 cannot be used 00:12:11.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.622 EAL: Requested device 0000:3d:02.6 cannot be used 00:12:11.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.622 EAL: Requested device 0000:3d:02.7 cannot be used 00:12:11.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.622 EAL: 
Requested device 0000:3f:01.0 cannot be used 00:12:11.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.622 EAL: Requested device 0000:3f:01.1 cannot be used 00:12:11.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.622 EAL: Requested device 0000:3f:01.2 cannot be used 00:12:11.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.622 EAL: Requested device 0000:3f:01.3 cannot be used 00:12:11.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.622 EAL: Requested device 0000:3f:01.4 cannot be used 00:12:11.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.622 EAL: Requested device 0000:3f:01.5 cannot be used 00:12:11.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.622 EAL: Requested device 0000:3f:01.6 cannot be used 00:12:11.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.622 EAL: Requested device 0000:3f:01.7 cannot be used 00:12:11.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.622 EAL: Requested device 0000:3f:02.0 cannot be used 00:12:11.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.622 EAL: Requested device 0000:3f:02.1 cannot be used 00:12:11.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.622 EAL: Requested device 0000:3f:02.2 cannot be used 00:12:11.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.622 EAL: Requested device 0000:3f:02.3 cannot be used 00:12:11.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.622 EAL: Requested device 0000:3f:02.4 cannot be used 00:12:11.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.622 EAL: Requested device 0000:3f:02.5 cannot be used 00:12:11.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.622 EAL: Requested device 
0000:3f:02.6 cannot be used 00:12:11.622 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.622 EAL: Requested device 0000:3f:02.7 cannot be used 00:12:23.848 00:12:23.848 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=2592936: Tue Jul 23 04:07:32 2024 00:12:23.848 write: IOPS=120k, BW=469MiB/s (492MB/s)(4695MiB/10001msec); 0 zone resets 00:12:23.848 slat (usec): min=9, max=470, avg=41.12, stdev=11.16 00:12:23.848 clat (usec): min=36, max=1198, avg=288.69, stdev=98.61 00:12:23.848 lat (usec): min=46, max=1267, avg=329.82, stdev=102.56 00:12:23.848 clat percentiles (usec): 00:12:23.848 | 50.000th=[ 281], 99.000th=[ 529], 99.900th=[ 652], 99.990th=[ 840], 00:12:23.848 | 99.999th=[ 971] 00:12:23.848 bw ( KiB/s): min=446975, max=547160, per=100.00%, avg=481464.37, stdev=2096.50, samples=266 00:12:23.848 iops : min=111745, max=136789, avg=120365.68, stdev=524.12, samples=266 00:12:23.848 trim: IOPS=120k, BW=469MiB/s (492MB/s)(4695MiB/10001msec); 0 zone resets 00:12:23.848 slat (usec): min=5, max=462, avg=28.72, stdev= 7.98 00:12:23.848 clat (usec): min=27, max=1268, avg=330.00, stdev=102.58 00:12:23.848 lat (usec): min=45, max=1331, avg=358.72, stdev=105.69 00:12:23.848 clat percentiles (usec): 00:12:23.848 | 50.000th=[ 322], 99.000th=[ 586], 99.900th=[ 709], 99.990th=[ 930], 00:12:23.848 | 99.999th=[ 1074] 00:12:23.848 bw ( KiB/s): min=446983, max=547160, per=100.00%, avg=481464.37, stdev=2096.49, samples=266 00:12:23.848 iops : min=111747, max=136789, avg=120365.68, stdev=524.12, samples=266 00:12:23.848 lat (usec) : 50=0.02%, 100=0.56%, 250=31.57%, 500=65.07%, 750=2.76% 00:12:23.848 lat (usec) : 1000=0.03% 00:12:23.848 lat (msec) : 2=0.01% 00:12:23.848 cpu : usr=99.53%, sys=0.03%, ctx=487, majf=0, minf=15657 00:12:23.848 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:23.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.848 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.848 issued rwts: total=0,1201918,1201922,0 short=0,0,0,0 dropped=0,0,0,0 00:12:23.848 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:23.848 00:12:23.848 Run status group 0 (all jobs): 00:12:23.848 WRITE: bw=469MiB/s (492MB/s), 469MiB/s-469MiB/s (492MB/s-492MB/s), io=4695MiB (4923MB), run=10001-10001msec 00:12:23.848 TRIM: bw=469MiB/s (492MB/s), 469MiB/s-469MiB/s (492MB/s-492MB/s), io=4695MiB (4923MB), run=10001-10001msec 00:12:27.138 ----------------------------------------------------- 00:12:27.138 Suppressions used: 00:12:27.138 count bytes template 00:12:27.138 14 129 /usr/src/fio/parse.c 00:12:27.138 1 8 libtcmalloc_minimal.so 00:12:27.138 1 904 libcrypto.so 00:12:27.138 ----------------------------------------------------- 00:12:27.138 00:12:27.138 00:12:27.138 real 0m15.962s 00:12:27.138 user 2m38.348s 00:12:27.138 sys 0m1.365s 00:12:27.138 04:07:35 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:27.138 04:07:35 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:12:27.138 ************************************ 00:12:27.138 END TEST bdev_fio_trim 00:12:27.138 ************************************ 00:12:27.138 04:07:35 blockdev_general.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:12:27.138 04:07:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f 00:12:27.138 04:07:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:12:27.138 04:07:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@369 -- # popd 00:12:27.138 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:12:27.138 04:07:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:12:27.138 00:12:27.138 real 0m31.916s 00:12:27.139 user 5m34.414s 00:12:27.139 sys 0m4.397s 00:12:27.139 04:07:35 blockdev_general.bdev_fio -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:12:27.139 04:07:35 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:12:27.139 ************************************ 00:12:27.139 END TEST bdev_fio 00:12:27.139 ************************************ 00:12:27.139 04:07:35 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:12:27.139 04:07:35 blockdev_general -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:27.139 04:07:35 blockdev_general -- bdev/blockdev.sh@776 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:27.139 04:07:35 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:12:27.139 04:07:35 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:27.139 04:07:35 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:27.139 ************************************ 00:12:27.139 START TEST bdev_verify 00:12:27.139 ************************************ 00:12:27.139 04:07:35 blockdev_general.bdev_verify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:27.139 [2024-07-23 04:07:35.746130] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
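bdevperf is launched here with `-m 0x3`, a per-core bitmap: bits 0 and 1 are set, which is why the log below reports "Total cores available: 2" and reactors starting on cores 0 and 1. SPDK parses the mask internally; this illustrative snippet just decodes one by hand:

```shell
# Decode a hex core mask into the core IDs it selects.
# Mask 0x3 sets bits 0 and 1, i.e. cores 0 and 1.
mask=0x3
cores=""
i=0
val=$(( mask ))
while [ "$val" -gt 0 ]; do
    # Test the low bit, then shift right to examine the next core.
    [ $(( val & 1 )) -eq 1 ] && cores="$cores $i"
    val=$(( val >> 1 ))
    i=$(( i + 1 ))
done
echo "cores:$cores"
```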
00:12:27.139 [2024-07-23 04:07:35.746254] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2595336 ] 00:12:27.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.139 EAL: Requested device 0000:3d:01.0 cannot be used 00:12:27.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.139 EAL: Requested device 0000:3d:01.1 cannot be used 00:12:27.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.139 EAL: Requested device 0000:3d:01.2 cannot be used 00:12:27.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.139 EAL: Requested device 0000:3d:01.3 cannot be used 00:12:27.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.139 EAL: Requested device 0000:3d:01.4 cannot be used 00:12:27.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.139 EAL: Requested device 0000:3d:01.5 cannot be used 00:12:27.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.139 EAL: Requested device 0000:3d:01.6 cannot be used 00:12:27.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.139 EAL: Requested device 0000:3d:01.7 cannot be used 00:12:27.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.139 EAL: Requested device 0000:3d:02.0 cannot be used 00:12:27.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.139 EAL: Requested device 0000:3d:02.1 cannot be used 00:12:27.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.139 EAL: Requested device 0000:3d:02.2 cannot be used 00:12:27.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.139 EAL: Requested device 0000:3d:02.3 cannot be used 
00:12:27.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.139 EAL: Requested device 0000:3d:02.4 cannot be used 00:12:27.139 [same qat_pci_device_allocate()/EAL pair repeated for each device through 0000:3f:02.1, as in the run at 00:12:11 above] 00:12:27.139 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.139 EAL: Requested device 0000:3f:02.2 cannot be used 00:12:27.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.139 EAL: Requested device 0000:3f:02.3 cannot be used 00:12:27.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.139 EAL: Requested device 0000:3f:02.4 cannot be used 00:12:27.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.139 EAL: Requested device 0000:3f:02.5 cannot be used 00:12:27.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.139 EAL: Requested device 0000:3f:02.6 cannot be used 00:12:27.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.139 EAL: Requested device 0000:3f:02.7 cannot be used 00:12:27.399 [2024-07-23 04:07:35.972343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:27.658 [2024-07-23 04:07:36.235821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.658 [2024-07-23 04:07:36.235828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:28.225 [2024-07-23 04:07:36.782048] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:28.225 [2024-07-23 04:07:36.782121] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:28.226 [2024-07-23 04:07:36.782146] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:28.226 [2024-07-23 04:07:36.790051] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:28.226 [2024-07-23 04:07:36.790099] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:28.226 [2024-07-23 04:07:36.798054] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:28.226 [2024-07-23 04:07:36.798092] bdev.c:8190:bdev_open_ext: 
*NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:28.484 [2024-07-23 04:07:37.046206] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:28.484 [2024-07-23 04:07:37.046265] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.484 [2024-07-23 04:07:37.046287] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042980 00:12:28.484 [2024-07-23 04:07:37.046302] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.484 [2024-07-23 04:07:37.049065] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.484 [2024-07-23 04:07:37.049100] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:29.052 Running I/O for 5 seconds... 00:12:34.325 00:12:34.325 Latency(us) 00:12:34.325 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:34.325 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:34.325 Verification LBA range: start 0x0 length 0x1000 00:12:34.325 Malloc0 : 5.08 1058.07 4.13 0.00 0.00 120740.19 599.65 446273.95 00:12:34.325 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:34.325 Verification LBA range: start 0x1000 length 0x1000 00:12:34.325 Malloc0 : 5.22 1030.53 4.03 0.00 0.00 123969.09 589.82 489894.71 00:12:34.325 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:34.325 Verification LBA range: start 0x0 length 0x800 00:12:34.325 Malloc1p0 : 5.08 553.99 2.16 0.00 0.00 229707.78 3460.30 255013.68 00:12:34.325 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:34.325 Verification LBA range: start 0x800 length 0x800 00:12:34.325 Malloc1p0 : 5.27 558.96 2.18 0.00 0.00 227748.56 3486.52 256691.40 00:12:34.325 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:34.325 Verification LBA range: start 
0x0 length 0x800 00:12:34.325 Malloc1p1 : 5.28 558.03 2.18 0.00 0.00 227321.36 3512.73 253335.96 00:12:34.325 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:34.325 Verification LBA range: start 0x800 length 0x800 00:12:34.325 Malloc1p1 : 5.27 558.71 2.18 0.00 0.00 227082.96 3512.73 253335.96 00:12:34.325 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:34.325 Verification LBA range: start 0x0 length 0x200 00:12:34.325 Malloc2p0 : 5.28 557.68 2.18 0.00 0.00 226726.55 3538.94 244947.35 00:12:34.325 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:34.325 Verification LBA range: start 0x200 length 0x200 00:12:34.325 Malloc2p0 : 5.27 558.46 2.18 0.00 0.00 226431.13 3538.94 244947.35 00:12:34.325 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:34.325 Verification LBA range: start 0x0 length 0x200 00:12:34.325 Malloc2p1 : 5.28 557.33 2.18 0.00 0.00 226122.36 3460.30 238236.47 00:12:34.325 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:34.325 Verification LBA range: start 0x200 length 0x200 00:12:34.325 Malloc2p1 : 5.27 558.21 2.18 0.00 0.00 225769.77 3460.30 238236.47 00:12:34.325 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:34.325 Verification LBA range: start 0x0 length 0x200 00:12:34.325 Malloc2p2 : 5.29 556.98 2.18 0.00 0.00 225488.58 3486.52 231525.58 00:12:34.325 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:34.325 Verification LBA range: start 0x200 length 0x200 00:12:34.325 Malloc2p2 : 5.28 557.97 2.18 0.00 0.00 225091.72 3486.52 231525.58 00:12:34.325 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:34.325 Verification LBA range: start 0x0 length 0x200 00:12:34.325 Malloc2p3 : 5.29 556.63 2.17 0.00 0.00 224863.81 3512.73 224814.69 00:12:34.325 Job: Malloc2p3 (Core Mask 0x2, workload: verify, 
depth: 128, IO size: 4096) 00:12:34.325 Verification LBA range: start 0x200 length 0x200 00:12:34.325 Malloc2p3 : 5.28 557.62 2.18 0.00 0.00 224469.85 3538.94 224814.69 00:12:34.325 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:34.325 Verification LBA range: start 0x0 length 0x200 00:12:34.325 Malloc2p4 : 5.29 556.28 2.17 0.00 0.00 224234.15 3591.37 221459.25 00:12:34.325 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:34.325 Verification LBA range: start 0x200 length 0x200 00:12:34.325 Malloc2p4 : 5.28 557.27 2.18 0.00 0.00 223848.76 3591.37 219781.53 00:12:34.325 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:34.325 Verification LBA range: start 0x0 length 0x200 00:12:34.325 Malloc2p5 : 5.30 555.93 2.17 0.00 0.00 223648.81 3538.94 216426.09 00:12:34.325 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:34.325 Verification LBA range: start 0x200 length 0x200 00:12:34.325 Malloc2p5 : 5.29 556.92 2.18 0.00 0.00 223258.29 3565.16 216426.09 00:12:34.325 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:34.325 Verification LBA range: start 0x0 length 0x200 00:12:34.325 Malloc2p6 : 5.30 555.64 2.17 0.00 0.00 222982.07 3486.52 208037.48 00:12:34.325 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:34.325 Verification LBA range: start 0x200 length 0x200 00:12:34.325 Malloc2p6 : 5.29 556.57 2.17 0.00 0.00 222621.94 3512.73 207198.62 00:12:34.325 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:34.325 Verification LBA range: start 0x0 length 0x200 00:12:34.325 Malloc2p7 : 5.30 555.16 2.17 0.00 0.00 222415.89 3460.30 200487.73 00:12:34.325 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:34.325 Verification LBA range: start 0x200 length 0x200 00:12:34.325 Malloc2p7 : 5.29 556.22 2.17 0.00 0.00 221998.20 3460.30 
200487.73 00:12:34.325 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:34.325 Verification LBA range: start 0x0 length 0x1000 00:12:34.325 TestPT : 5.32 553.55 2.16 0.00 0.00 222250.21 10223.62 200487.73 00:12:34.325 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:34.325 Verification LBA range: start 0x1000 length 0x1000 00:12:34.325 TestPT : 5.31 532.68 2.08 0.00 0.00 230313.42 16357.79 276824.06 00:12:34.325 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:34.325 Verification LBA range: start 0x0 length 0x2000 00:12:34.325 raid0 : 5.31 554.48 2.17 0.00 0.00 221067.50 3696.23 180355.07 00:12:34.325 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:34.325 Verification LBA range: start 0x2000 length 0x2000 00:12:34.325 raid0 : 5.30 555.67 2.17 0.00 0.00 220522.08 3696.23 171127.60 00:12:34.325 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:34.325 Verification LBA range: start 0x0 length 0x2000 00:12:34.325 concat0 : 5.31 554.26 2.17 0.00 0.00 220395.46 3670.02 173644.19 00:12:34.325 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:34.325 Verification LBA range: start 0x2000 length 0x2000 00:12:34.325 concat0 : 5.30 555.19 2.17 0.00 0.00 219994.02 3670.02 167772.16 00:12:34.325 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:34.325 Verification LBA range: start 0x0 length 0x1000 00:12:34.325 raid1 : 5.31 554.04 2.16 0.00 0.00 219784.70 4849.66 171127.60 00:12:34.325 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:34.325 Verification LBA range: start 0x1000 length 0x1000 00:12:34.325 raid1 : 5.31 554.75 2.17 0.00 0.00 219475.46 4823.45 171127.60 00:12:34.325 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:34.325 Verification LBA range: start 0x0 length 0x4e2 00:12:34.325 AIO0 : 5.32 553.87 2.16 0.00 
0.00 219075.00 1992.29 174483.05 00:12:34.325 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:34.325 Verification LBA range: start 0x4e2 length 0x4e2 00:12:34.325 AIO0 : 5.32 577.89 2.26 0.00 0.00 209946.32 976.49 178677.35 00:12:34.325 =================================================================================================================== 00:12:34.326 Total : 18775.54 73.34 0.00 0.00 212466.72 589.82 489894.71 00:12:37.631 00:12:37.631 real 0m10.469s 00:12:37.631 user 0m18.986s 00:12:37.631 sys 0m0.602s 00:12:37.631 04:07:46 blockdev_general.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:37.631 04:07:46 blockdev_general.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:12:37.631 ************************************ 00:12:37.631 END TEST bdev_verify 00:12:37.631 ************************************ 00:12:37.631 04:07:46 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:12:37.631 04:07:46 blockdev_general -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:37.631 04:07:46 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:12:37.631 04:07:46 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:37.631 04:07:46 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:37.631 ************************************ 00:12:37.631 START TEST bdev_verify_big_io 00:12:37.631 ************************************ 00:12:37.631 04:07:46 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:37.631 [2024-07-23 
04:07:46.308805] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:12:37.631 [2024-07-23 04:07:46.308922] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2597052 ] 00:12:37.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:37.890 EAL: Requested device 0000:3d:01.0 cannot be used 00:12:37.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:37.890 EAL: Requested device 0000:3d:01.1 cannot be used 00:12:37.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:37.890 EAL: Requested device 0000:3d:01.2 cannot be used 00:12:37.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:37.890 EAL: Requested device 0000:3d:01.3 cannot be used 00:12:37.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:37.890 EAL: Requested device 0000:3d:01.4 cannot be used 00:12:37.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:37.890 EAL: Requested device 0000:3d:01.5 cannot be used 00:12:37.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:37.890 EAL: Requested device 0000:3d:01.6 cannot be used 00:12:37.891 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:37.891 EAL: Requested device 0000:3d:01.7 cannot be used 00:12:37.891 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:37.891 EAL: Requested device 0000:3d:02.0 cannot be used 00:12:37.891 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:37.891 EAL: Requested device 0000:3d:02.1 cannot be used 00:12:37.891 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:37.891 EAL: Requested device 0000:3d:02.2 cannot be used 00:12:37.891 qat_pci_device_allocate(): Reached maximum 
number of QAT devices 00:12:37.891 EAL: Requested device 0000:3d:02.3 cannot be used 00:12:37.891 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:37.891 EAL: Requested device 0000:3d:02.4 cannot be used 00:12:37.891 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:37.891 EAL: Requested device 0000:3d:02.5 cannot be used 00:12:37.891 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:37.891 EAL: Requested device 0000:3d:02.6 cannot be used 00:12:37.891 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:37.891 EAL: Requested device 0000:3d:02.7 cannot be used 00:12:37.891 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:37.891 EAL: Requested device 0000:3f:01.0 cannot be used 00:12:37.891 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:37.891 EAL: Requested device 0000:3f:01.1 cannot be used 00:12:37.891 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:37.891 EAL: Requested device 0000:3f:01.2 cannot be used 00:12:37.891 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:37.891 EAL: Requested device 0000:3f:01.3 cannot be used 00:12:37.891 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:37.891 EAL: Requested device 0000:3f:01.4 cannot be used 00:12:37.891 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:37.891 EAL: Requested device 0000:3f:01.5 cannot be used 00:12:37.891 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:37.891 EAL: Requested device 0000:3f:01.6 cannot be used 00:12:37.891 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:37.891 EAL: Requested device 0000:3f:01.7 cannot be used 00:12:37.891 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:37.891 EAL: Requested device 0000:3f:02.0 cannot be used 00:12:37.891 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:12:37.891 EAL: Requested device 0000:3f:02.1 cannot be used 00:12:37.891 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:37.891 EAL: Requested device 0000:3f:02.2 cannot be used 00:12:37.891 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:37.891 EAL: Requested device 0000:3f:02.3 cannot be used 00:12:37.891 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:37.891 EAL: Requested device 0000:3f:02.4 cannot be used 00:12:37.891 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:37.891 EAL: Requested device 0000:3f:02.5 cannot be used 00:12:37.891 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:37.891 EAL: Requested device 0000:3f:02.6 cannot be used 00:12:37.891 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:37.891 EAL: Requested device 0000:3f:02.7 cannot be used 00:12:37.891 [2024-07-23 04:07:46.536792] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:38.150 [2024-07-23 04:07:46.821364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.150 [2024-07-23 04:07:46.821368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:38.719 [2024-07-23 04:07:47.419883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:38.719 [2024-07-23 04:07:47.419954] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:38.719 [2024-07-23 04:07:47.419974] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:38.719 [2024-07-23 04:07:47.427884] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:38.719 [2024-07-23 04:07:47.427932] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:38.719 [2024-07-23 04:07:47.435899] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
Malloc2 00:12:38.719 [2024-07-23 04:07:47.435937] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:38.978 [2024-07-23 04:07:47.695128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:38.978 [2024-07-23 04:07:47.695198] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.978 [2024-07-23 04:07:47.695220] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042980 00:12:38.978 [2024-07-23 04:07:47.695236] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.978 [2024-07-23 04:07:47.698007] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.978 [2024-07-23 04:07:47.698044] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:39.547 [2024-07-23 04:07:48.235808] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:12:39.548 [2024-07-23 04:07:48.241113] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:12:39.548 [2024-07-23 04:07:48.246872] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:12:39.548 [2024-07-23 04:07:48.252057] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:12:39.548 [2024-07-23 04:07:48.257767] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:12:39.548 [2024-07-23 04:07:48.263433] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:12:39.548 [2024-07-23 04:07:48.268621] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:12:39.548 [2024-07-23 04:07:48.274531] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:12:39.548 [2024-07-23 04:07:48.279669] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:12:39.548 [2024-07-23 04:07:48.285439] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:12:39.548 [2024-07-23 04:07:48.290675] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). 
Queue depth is limited to 32 00:12:39.548 [2024-07-23 04:07:48.296530] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:12:39.548 [2024-07-23 04:07:48.301675] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:12:39.548 [2024-07-23 04:07:48.307320] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:12:39.548 [2024-07-23 04:07:48.312597] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:12:39.548 [2024-07-23 04:07:48.318336] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:12:39.807 [2024-07-23 04:07:48.450182] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:12:39.807 [2024-07-23 04:07:48.460693] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). 
Queue depth is limited to 78
00:12:39.807 Running I/O for 5 seconds...
00:12:47.971
00:12:47.971 Latency(us)
00:12:47.971 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:47.971 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:47.971 Verification LBA range: start 0x0 length 0x100
00:12:47.971 Malloc0 : 5.72 179.10 11.19 0.00 0.00 700046.28 838.86 1946157.06
00:12:47.971 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:47.971 Verification LBA range: start 0x100 length 0x100
00:12:47.971 Malloc0 : 5.77 155.33 9.71 0.00 0.00 808598.28 858.52 2174327.19
00:12:47.971 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:47.971 Verification LBA range: start 0x0 length 0x80
00:12:47.971 Malloc1p0 : 6.81 35.26 2.20 0.00 0.00 3210037.84 1500.77 5315022.03
00:12:47.971 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:47.971 Verification LBA range: start 0x80 length 0x80
00:12:47.971 Malloc1p0 : 6.17 88.86 5.55 0.00 0.00 1317844.47 2424.83 2603823.92
00:12:47.971 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:47.971 Verification LBA range: start 0x0 length 0x80
00:12:47.971 Malloc1p1 : 6.83 37.47 2.34 0.00 0.00 2977420.97 1500.77 5127117.21
00:12:47.971 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:47.971 Verification LBA range: start 0x80 length 0x80
00:12:47.971 Malloc1p1 : 6.57 36.52 2.28 0.00 0.00 3080176.99 1461.45 5207647.85
00:12:47.971 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:12:47.971 Verification LBA range: start 0x0 length 0x20
00:12:47.971 Malloc2p0 : 6.18 25.87 1.62 0.00 0.00 1090950.97 642.25 2093796.56
00:12:47.971 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:12:47.971 Verification LBA range: start 0x20 length 0x20
00:12:47.971 Malloc2p0 : 6.17 25.94 1.62 0.00 0.00 1104076.61 668.47 1905891.74
00:12:47.971 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:12:47.971 Verification LBA range: start 0x0 length 0x20
00:12:47.971 Malloc2p1 : 6.19 25.87 1.62 0.00 0.00 1081006.91 635.70 2066953.01
00:12:47.971 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:12:47.971 Verification LBA range: start 0x20 length 0x20
00:12:47.971 Malloc2p1 : 6.17 25.93 1.62 0.00 0.00 1094965.63 642.25 1879048.19
00:12:47.971 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:12:47.971 Verification LBA range: start 0x0 length 0x20
00:12:47.971 Malloc2p2 : 6.19 25.86 1.62 0.00 0.00 1071036.02 648.81 2026687.69
00:12:47.971 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:12:47.971 Verification LBA range: start 0x20 length 0x20
00:12:47.971 Malloc2p2 : 6.17 25.92 1.62 0.00 0.00 1086125.23 642.25 1865626.42
00:12:47.971 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:12:47.971 Verification LBA range: start 0x0 length 0x20
00:12:47.971 Malloc2p3 : 6.19 25.85 1.62 0.00 0.00 1060979.60 635.70 1999844.15
00:12:47.971 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:12:47.971 Verification LBA range: start 0x20 length 0x20
00:12:47.971 Malloc2p3 : 6.17 25.92 1.62 0.00 0.00 1077121.12 635.70 1838782.87
00:12:47.971 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:12:47.971 Verification LBA range: start 0x0 length 0x20
00:12:47.971 Malloc2p4 : 6.30 27.94 1.75 0.00 0.00 979266.23 658.64 1973000.60
00:12:47.971 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:12:47.971 Verification LBA range: start 0x20 length 0x20
00:12:47.971 Malloc2p4 : 6.17 25.91 1.62 0.00 0.00 1067531.34 638.98 1811939.33
00:12:47.971 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:12:47.971 Verification LBA range: start 0x0 length 0x20
00:12:47.971 Malloc2p5 : 6.30 27.94 1.75 0.00 0.00 970195.70 648.81 1946157.06
00:12:47.971 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:12:47.971 Verification LBA range: start 0x20 length 0x20
00:12:47.971 Malloc2p5 : 6.18 25.91 1.62 0.00 0.00 1058680.90 645.53 1798517.56
00:12:47.971 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:12:47.971 Verification LBA range: start 0x0 length 0x20
00:12:47.971 Malloc2p6 : 6.30 27.93 1.75 0.00 0.00 961050.58 645.53 1919313.51
00:12:47.971 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:12:47.971 Verification LBA range: start 0x20 length 0x20
00:12:47.971 Malloc2p6 : 6.18 25.90 1.62 0.00 0.00 1050139.58 642.25 1771674.01
00:12:47.971 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:12:47.971 Verification LBA range: start 0x0 length 0x20
00:12:47.972 Malloc2p7 : 6.30 27.92 1.75 0.00 0.00 951450.50 642.25 1892469.96
00:12:47.972 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:12:47.972 Verification LBA range: start 0x20 length 0x20
00:12:47.972 Malloc2p7 : 6.18 25.89 1.62 0.00 0.00 1040825.35 642.25 1758252.24
00:12:47.972 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:47.972 Verification LBA range: start 0x0 length 0x100
00:12:47.972 TestPT : 6.90 39.44 2.46 0.00 0.00 2564108.18 1507.33 4697620.48
00:12:47.972 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:47.972 Verification LBA range: start 0x100 length 0x100
00:12:47.972 TestPT : 6.64 34.01 2.13 0.00 0.00 3018973.72 91435.83 3489660.93
00:12:47.972 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:47.972 Verification LBA range: start 0x0 length 0x200
00:12:47.972 raid0 : 6.81 44.62 2.79 0.00 0.00 2218229.19 1599.08 4509715.66
00:12:47.972 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:47.972 Verification LBA range: start 0x200 length 0x200
00:12:47.972 raid0 : 6.57 43.81 2.74 0.00 0.00 2284946.08 1585.97 4697620.48
00:12:47.972 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:47.972 Verification LBA range: start 0x0 length 0x200
00:12:47.972 concat0 : 6.84 49.15 3.07 0.00 0.00 1994443.27 1638.40 4348654.39
00:12:47.972 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:47.972 Verification LBA range: start 0x200 length 0x200
00:12:47.972 concat0 : 6.65 52.96 3.31 0.00 0.00 1877789.15 1612.19 4536559.21
00:12:47.972 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:47.972 Verification LBA range: start 0x0 length 0x100
00:12:47.972 raid1 : 6.84 76.03 4.75 0.00 0.00 1248148.68 2070.94 4160749.57
00:12:47.972 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:47.972 Verification LBA range: start 0x100 length 0x100
00:12:47.972 raid1 : 6.68 52.68 3.29 0.00 0.00 1834505.40 2057.83 4375497.93
00:12:47.972 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536)
00:12:47.972 Verification LBA range: start 0x0 length 0x4e
00:12:47.972 AIO0 : 6.87 66.63 4.16 0.00 0.00 843396.02 779.88 2670932.79
00:12:47.972 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536)
00:12:47.972 Verification LBA range: start 0x4e length 0x4e
00:12:47.972 AIO0 : 6.80 74.45 4.65 0.00 0.00 777107.47 763.49 2778306.97
00:12:47.972 ===================================================================================================================
00:12:47.972 Total : 1488.82 93.05 0.00 0.00 1407962.15 635.70 5315022.03
00:12:50.510
00:12:50.510 real 0m13.027s
00:12:50.510 user 0m24.054s
00:12:50.510 sys 0m0.602s
00:12:50.510 04:07:59 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable
00:12:50.510 04:07:59 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:12:50.510
************************************ 00:12:50.510 END TEST bdev_verify_big_io 00:12:50.510 ************************************ 00:12:50.510 04:07:59 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:12:50.510 04:07:59 blockdev_general -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:50.510 04:07:59 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:12:50.510 04:07:59 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:50.510 04:07:59 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:50.769 ************************************ 00:12:50.769 START TEST bdev_write_zeroes 00:12:50.769 ************************************ 00:12:50.769 04:07:59 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:50.769 [2024-07-23 04:07:59.411786] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:12:50.769 [2024-07-23 04:07:59.411900] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2599420 ]
00:12:51.029 [2024-07-23 04:07:59.640856] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:51.289 [2024-07-23 04:07:59.905609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:12:51.858 [2024-07-23 04:08:00.482469] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:12:51.858 [2024-07-23 04:08:00.482539] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:12:51.858 [2024-07-23 04:08:00.482564] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:12:51.858 [2024-07-23 04:08:00.490450] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:12:51.858 [2024-07-23 04:08:00.490495] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:12:51.858 [2024-07-23 04:08:00.498456] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:12:51.858 [2024-07-23 04:08:00.498496] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:12:52.121 [2024-07-23 04:08:00.762046]
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:12:52.121 [2024-07-23 04:08:00.762107] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:52.121 [2024-07-23 04:08:00.762129] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042680
00:12:52.121 [2024-07-23 04:08:00.762151] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:52.121 [2024-07-23 04:08:00.764860] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:52.121 [2024-07-23 04:08:00.764897] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:12:52.691 Running I/O for 1 seconds...
00:12:53.629
00:12:53.629 Latency(us)
00:12:53.629 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:53.629 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:53.629 Malloc0 : 1.05 4998.93 19.53 0.00 0.00 25579.34 638.98 42781.90
00:12:53.629 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:53.629 Malloc1p0 : 1.05 4991.90 19.50 0.00 0.00 25572.28 891.29 41943.04
00:12:53.629 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:53.629 Malloc1p1 : 1.05 4984.97 19.47 0.00 0.00 25554.04 878.18 41104.18
00:12:53.629 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:53.629 Malloc2p0 : 1.05 4978.10 19.45 0.00 0.00 25537.24 884.74 40265.32
00:12:53.629 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:53.629 Malloc2p1 : 1.06 4971.26 19.42 0.00 0.00 25518.80 904.40 39426.46
00:12:53.629 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:53.629 Malloc2p2 : 1.06 4964.40 19.39 0.00 0.00 25494.14 878.18 38377.88
00:12:53.629 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:53.629 Malloc2p3 : 1.06 4957.60 19.37 0.00 0.00 25472.63 878.18 37539.02
00:12:53.629 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:53.629 Malloc2p4 : 1.06 4950.78 19.34 0.00 0.00 25451.15 878.18 36700.16
00:12:53.629 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:53.629 Malloc2p5 : 1.06 4944.02 19.31 0.00 0.00 25430.33 884.74 35861.30
00:12:53.629 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:53.629 Malloc2p6 : 1.06 4937.24 19.29 0.00 0.00 25412.53 878.18 35022.44
00:12:53.629 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:53.629 Malloc2p7 : 1.06 4930.53 19.26 0.00 0.00 25395.19 878.18 34183.58
00:12:53.629 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:53.629 TestPT : 1.07 4923.79 19.23 0.00 0.00 25369.70 917.50 33135.00
00:12:53.629 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:53.629 raid0 : 1.07 4915.97 19.20 0.00 0.00 25341.16 1677.72 31457.28
00:12:53.629 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:53.629 concat0 : 1.07 4908.25 19.17 0.00 0.00 25285.13 1664.61 29779.56
00:12:53.629 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:53.629 raid1 : 1.07 4898.56 19.13 0.00 0.00 25225.82 2713.19 27053.26
00:12:53.629 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:53.629 AIO0 : 1.07 4892.82 19.11 0.00 0.00 25132.29 1002.70 26214.40
00:12:53.629 ===================================================================================================================
00:12:53.629 Total : 79149.09 309.18 0.00 0.00 25423.24 638.98 42781.90
00:12:56.920
00:12:56.920 real 0m6.246s
00:12:56.920 user 0m5.647s
00:12:56.920 sys 0m0.504s
00:12:56.920 04:08:05 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1124 -- #
xtrace_disable 00:12:56.920 04:08:05 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:12:56.920 ************************************ 00:12:56.920 END TEST bdev_write_zeroes 00:12:56.920 ************************************ 00:12:56.920 04:08:05 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:12:56.920 04:08:05 blockdev_general -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:56.920 04:08:05 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:12:56.920 04:08:05 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:56.920 04:08:05 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:56.920 ************************************ 00:12:56.920 START TEST bdev_json_nonenclosed 00:12:56.920 ************************************ 00:12:56.920 04:08:05 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:57.179 [2024-07-23 04:08:05.746540] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:12:57.179 [2024-07-23 04:08:05.746656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2600503 ]
00:12:57.439 [2024-07-23 04:08:05.971532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:57.698 [2024-07-23 04:08:06.256491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:12:57.698 [2024-07-23 04:08:06.256575] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:12:57.699 [2024-07-23 04:08:06.256601] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:12:57.699 [2024-07-23 04:08:06.256617] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:58.267 00:12:58.267 real 0m1.204s 00:12:58.267 user 0m0.928s 00:12:58.267 sys 0m0.269s 00:12:58.268 04:08:06 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:12:58.268 04:08:06 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:58.268 04:08:06 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:12:58.268 ************************************ 00:12:58.268 END TEST bdev_json_nonenclosed 00:12:58.268 ************************************ 00:12:58.268 04:08:06 blockdev_general -- common/autotest_common.sh@1142 -- # return 234 00:12:58.268 04:08:06 blockdev_general -- bdev/blockdev.sh@781 -- # true 00:12:58.268 04:08:06 blockdev_general -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:58.268 04:08:06 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:12:58.268 04:08:06 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:58.268 04:08:06 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:58.268 ************************************ 00:12:58.268 START TEST bdev_json_nonarray 00:12:58.268 ************************************ 00:12:58.268 04:08:06 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:58.268 [2024-07-23 04:08:07.037445] Starting 
SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:12:58.268 [2024-07-23 04:08:07.037560] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2600687 ] 00:12:58.527 [2024-07-23 04:08:07.261704] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.786 [2024-07-23 04:08:07.547877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.786 [2024-07-23 04:08:07.547974] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:12:58.786 [2024-07-23 04:08:07.548009] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:12:58.786 [2024-07-23 04:08:07.548025] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:59.354 00:12:59.354 real 0m1.174s 00:12:59.354 user 0m0.893s 00:12:59.354 sys 0m0.274s 00:12:59.354 04:08:08 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:12:59.354 04:08:08 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:59.354 04:08:08 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:12:59.354 ************************************ 00:12:59.354 END TEST bdev_json_nonarray 00:12:59.354 ************************************ 00:12:59.624 04:08:08 blockdev_general -- common/autotest_common.sh@1142 -- # return 234 00:12:59.624 04:08:08 blockdev_general -- bdev/blockdev.sh@784 -- # true 00:12:59.624 04:08:08 blockdev_general -- bdev/blockdev.sh@786 -- # [[ bdev == bdev ]] 00:12:59.624 04:08:08 blockdev_general -- bdev/blockdev.sh@787 -- # run_test bdev_qos qos_test_suite '' 00:12:59.624 04:08:08 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:59.624 04:08:08 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:59.624 04:08:08 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:59.624 ************************************ 00:12:59.624 START TEST bdev_qos 00:12:59.624 ************************************ 00:12:59.624 04:08:08 blockdev_general.bdev_qos -- common/autotest_common.sh@1123 -- # qos_test_suite '' 00:12:59.624 04:08:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@445 -- # QOS_PID=2600833 00:12:59.624 04:08:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@446 -- # echo 'Process qos testing pid: 2600833' 00:12:59.624 Process qos testing pid: 2600833 00:12:59.624 04:08:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@444 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:12:59.624 04:08:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@447 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:12:59.624 04:08:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@448 -- # waitforlisten 2600833 00:12:59.624 04:08:08 blockdev_general.bdev_qos -- common/autotest_common.sh@829 -- # '[' -z 2600833 ']' 00:12:59.624 04:08:08 blockdev_general.bdev_qos -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.624 04:08:08 blockdev_general.bdev_qos -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:59.624 04:08:08 blockdev_general.bdev_qos -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.624 04:08:08 blockdev_general.bdev_qos -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:59.624 04:08:08 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:59.624 [2024-07-23 04:08:08.303312] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:12:59.624 [2024-07-23 04:08:08.303432] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2600833 ] 00:12:59.884 [2024-07-23 04:08:08.518360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.143 [2024-07-23 04:08:08.801534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:00.711 04:08:09 blockdev_general.bdev_qos -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:00.711 04:08:09 blockdev_general.bdev_qos -- common/autotest_common.sh@862 -- # return 0 00:13:00.711 04:08:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@450 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:13:00.711 04:08:09 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.711 04:08:09 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:00.711 Malloc_0 00:13:00.711 04:08:09 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.711 04:08:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@451 -- # waitforbdev Malloc_0 00:13:00.711 04:08:09 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_0 00:13:00.711 04:08:09 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # local bdev_timeout= 
00:13:00.711 04:08:09 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local i 00:13:00.711 04:08:09 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:00.711 04:08:09 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:00.711 04:08:09 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:13:00.711 04:08:09 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.711 04:08:09 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:00.711 04:08:09 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.711 04:08:09 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:13:00.711 04:08:09 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.711 04:08:09 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:00.970 [ 00:13:00.970 { 00:13:00.970 "name": "Malloc_0", 00:13:00.970 "aliases": [ 00:13:00.970 "7edc41ae-2303-45da-a5d9-edbc76883682" 00:13:00.970 ], 00:13:00.970 "product_name": "Malloc disk", 00:13:00.970 "block_size": 512, 00:13:00.970 "num_blocks": 262144, 00:13:00.970 "uuid": "7edc41ae-2303-45da-a5d9-edbc76883682", 00:13:00.970 "assigned_rate_limits": { 00:13:00.970 "rw_ios_per_sec": 0, 00:13:00.970 "rw_mbytes_per_sec": 0, 00:13:00.970 "r_mbytes_per_sec": 0, 00:13:00.970 "w_mbytes_per_sec": 0 00:13:00.970 }, 00:13:00.970 "claimed": false, 00:13:00.970 "zoned": false, 00:13:00.970 "supported_io_types": { 00:13:00.971 "read": true, 00:13:00.971 "write": true, 00:13:00.971 "unmap": true, 00:13:00.971 "flush": true, 00:13:00.971 "reset": true, 00:13:00.971 "nvme_admin": false, 00:13:00.971 "nvme_io": false, 00:13:00.971 "nvme_io_md": false, 00:13:00.971 "write_zeroes": true, 00:13:00.971 "zcopy": true, 00:13:00.971 "get_zone_info": false, 00:13:00.971 
"zone_management": false, 00:13:00.971 "zone_append": false, 00:13:00.971 "compare": false, 00:13:00.971 "compare_and_write": false, 00:13:00.971 "abort": true, 00:13:00.971 "seek_hole": false, 00:13:00.971 "seek_data": false, 00:13:00.971 "copy": true, 00:13:00.971 "nvme_iov_md": false 00:13:00.971 }, 00:13:00.971 "memory_domains": [ 00:13:00.971 { 00:13:00.971 "dma_device_id": "system", 00:13:00.971 "dma_device_type": 1 00:13:00.971 }, 00:13:00.971 { 00:13:00.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:00.971 "dma_device_type": 2 00:13:00.971 } 00:13:00.971 ], 00:13:00.971 "driver_specific": {} 00:13:00.971 } 00:13:00.971 ] 00:13:00.971 04:08:09 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.971 04:08:09 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # return 0 00:13:00.971 04:08:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@452 -- # rpc_cmd bdev_null_create Null_1 128 512 00:13:00.971 04:08:09 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.971 04:08:09 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:00.971 Null_1 00:13:00.971 04:08:09 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.971 04:08:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@453 -- # waitforbdev Null_1 00:13:00.971 04:08:09 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local bdev_name=Null_1 00:13:00.971 04:08:09 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:00.971 04:08:09 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local i 00:13:00.971 04:08:09 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:00.971 04:08:09 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:00.971 04:08:09 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 
00:13:00.971 04:08:09 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.971 04:08:09 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:00.971 04:08:09 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.971 04:08:09 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:13:00.971 04:08:09 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.971 04:08:09 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:00.971 [ 00:13:00.971 { 00:13:00.971 "name": "Null_1", 00:13:00.971 "aliases": [ 00:13:00.971 "7794622c-dcb2-4864-9983-2e3639cb0cd4" 00:13:00.971 ], 00:13:00.971 "product_name": "Null disk", 00:13:00.971 "block_size": 512, 00:13:00.971 "num_blocks": 262144, 00:13:00.971 "uuid": "7794622c-dcb2-4864-9983-2e3639cb0cd4", 00:13:00.971 "assigned_rate_limits": { 00:13:00.971 "rw_ios_per_sec": 0, 00:13:00.971 "rw_mbytes_per_sec": 0, 00:13:00.971 "r_mbytes_per_sec": 0, 00:13:00.971 "w_mbytes_per_sec": 0 00:13:00.971 }, 00:13:00.971 "claimed": false, 00:13:00.971 "zoned": false, 00:13:00.971 "supported_io_types": { 00:13:00.971 "read": true, 00:13:00.971 "write": true, 00:13:00.971 "unmap": false, 00:13:00.971 "flush": false, 00:13:00.971 "reset": true, 00:13:00.971 "nvme_admin": false, 00:13:00.971 "nvme_io": false, 00:13:00.971 "nvme_io_md": false, 00:13:00.971 "write_zeroes": true, 00:13:00.971 "zcopy": false, 00:13:00.971 "get_zone_info": false, 00:13:00.971 "zone_management": false, 00:13:00.971 "zone_append": false, 00:13:00.971 "compare": false, 00:13:00.971 "compare_and_write": false, 00:13:00.971 "abort": true, 00:13:00.971 "seek_hole": false, 00:13:00.971 "seek_data": false, 00:13:00.971 "copy": false, 00:13:00.971 "nvme_iov_md": false 00:13:00.971 }, 00:13:00.971 "driver_specific": {} 00:13:00.971 } 00:13:00.971 ] 00:13:00.971 04:08:09 
blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.971 04:08:09 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # return 0 00:13:00.971 04:08:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@456 -- # qos_function_test 00:13:00.971 04:08:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@409 -- # local qos_lower_iops_limit=1000 00:13:00.971 04:08:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@410 -- # local qos_lower_bw_limit=2 00:13:00.971 04:08:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@455 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:00.971 04:08:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@411 -- # local io_result=0 00:13:00.971 04:08:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@412 -- # local iops_limit=0 00:13:00.971 04:08:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@413 -- # local bw_limit=0 00:13:00.971 04:08:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@415 -- # get_io_result IOPS Malloc_0 00:13:00.971 04:08:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@374 -- # local limit_type=IOPS 00:13:00.971 04:08:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0 00:13:00.971 04:08:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local iostat_result 00:13:00.971 04:08:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:00.971 04:08:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # grep Malloc_0 00:13:00.971 04:08:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # tail -1 00:13:00.971 Running I/O for 60 seconds... 
00:13:06.245 04:08:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0 61342.23 245368.91 0.00 0.00 246784.00 0.00 0.00 ' 00:13:06.245 04:08:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # '[' IOPS = IOPS ']' 00:13:06.245 04:08:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # awk '{print $2}' 00:13:06.245 04:08:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # iostat_result=61342.23 00:13:06.245 04:08:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@384 -- # echo 61342 00:13:06.245 04:08:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@415 -- # io_result=61342 00:13:06.245 04:08:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@417 -- # iops_limit=15000 00:13:06.245 04:08:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@418 -- # '[' 15000 -gt 1000 ']' 00:13:06.245 04:08:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@421 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 15000 Malloc_0 00:13:06.245 04:08:14 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.245 04:08:14 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:06.245 04:08:14 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.245 04:08:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@422 -- # run_test bdev_qos_iops run_qos_test 15000 IOPS Malloc_0 00:13:06.245 04:08:14 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:13:06.245 04:08:14 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:06.245 04:08:14 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:06.245 ************************************ 00:13:06.245 START TEST bdev_qos_iops 00:13:06.245 ************************************ 00:13:06.245 04:08:14 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1123 -- # run_qos_test 15000 IOPS Malloc_0 00:13:06.245 04:08:14 
blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@388 -- # local qos_limit=15000 00:13:06.245 04:08:14 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@389 -- # local qos_result=0 00:13:06.245 04:08:14 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@391 -- # get_io_result IOPS Malloc_0 00:13:06.245 04:08:14 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@374 -- # local limit_type=IOPS 00:13:06.245 04:08:14 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0 00:13:06.245 04:08:14 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@376 -- # local iostat_result 00:13:06.245 04:08:14 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # grep Malloc_0 00:13:06.245 04:08:14 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:06.245 04:08:14 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # tail -1 00:13:11.584 04:08:20 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0 15000.25 60001.00 0.00 0.00 61020.00 0.00 0.00 ' 00:13:11.584 04:08:20 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # '[' IOPS = IOPS ']' 00:13:11.584 04:08:20 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # awk '{print $2}' 00:13:11.584 04:08:20 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # iostat_result=15000.25 00:13:11.584 04:08:20 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@384 -- # echo 15000 00:13:11.584 04:08:20 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@391 -- # qos_result=15000 00:13:11.584 04:08:20 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # '[' IOPS = BANDWIDTH ']' 00:13:11.584 04:08:20 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@395 -- # lower_limit=13500 00:13:11.584 04:08:20 
blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@396 -- # upper_limit=16500 00:13:11.584 04:08:20 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@399 -- # '[' 15000 -lt 13500 ']' 00:13:11.584 04:08:20 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@399 -- # '[' 15000 -gt 16500 ']' 00:13:11.584 00:13:11.584 real 0m5.244s 00:13:11.584 user 0m0.114s 00:13:11.584 sys 0m0.039s 00:13:11.584 04:08:20 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:11.584 04:08:20 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@10 -- # set +x 00:13:11.584 ************************************ 00:13:11.584 END TEST bdev_qos_iops 00:13:11.584 ************************************ 00:13:11.584 04:08:20 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0 00:13:11.584 04:08:20 blockdev_general.bdev_qos -- bdev/blockdev.sh@426 -- # get_io_result BANDWIDTH Null_1 00:13:11.584 04:08:20 blockdev_general.bdev_qos -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH 00:13:11.584 04:08:20 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local qos_dev=Null_1 00:13:11.584 04:08:20 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local iostat_result 00:13:11.584 04:08:20 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:11.584 04:08:20 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # grep Null_1 00:13:11.584 04:08:20 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # tail -1 00:13:16.857 04:08:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # iostat_result='Null_1 21913.72 87654.89 0.00 0.00 89088.00 0.00 0.00 ' 00:13:16.857 04:08:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']' 00:13:16.857 04:08:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:16.857 04:08:25 
blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # awk '{print $6}' 00:13:16.857 04:08:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # iostat_result=89088.00 00:13:16.857 04:08:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@384 -- # echo 89088 00:13:16.857 04:08:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@426 -- # bw_limit=89088 00:13:16.857 04:08:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # bw_limit=8 00:13:16.857 04:08:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@428 -- # '[' 8 -lt 2 ']' 00:13:16.857 04:08:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@431 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 8 Null_1 00:13:16.857 04:08:25 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.858 04:08:25 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:16.858 04:08:25 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.858 04:08:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@432 -- # run_test bdev_qos_bw run_qos_test 8 BANDWIDTH Null_1 00:13:16.858 04:08:25 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:13:16.858 04:08:25 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:16.858 04:08:25 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:16.858 ************************************ 00:13:16.858 START TEST bdev_qos_bw 00:13:16.858 ************************************ 00:13:16.858 04:08:25 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1123 -- # run_qos_test 8 BANDWIDTH Null_1 00:13:16.858 04:08:25 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@388 -- # local qos_limit=8 00:13:16.858 04:08:25 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@389 -- # local qos_result=0 00:13:16.858 04:08:25 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@391 -- # get_io_result BANDWIDTH Null_1 00:13:16.858 
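The bw_limit step above (89088 KiB/s measured, then `bw_limit=8` handed to `bdev_set_qos_limit --rw_mbytes_per_sec`) is consistent with taking roughly a tenth of the unthrottled bandwidth, converted to MiB/s, with a 2 MiB/s floor. The `/ 1024 / 10` below is inferred from the logged values (89088 -> 8), not quoted from blockdev.sh:

```shell
bw_limit=89088                      # unthrottled Null_1 throughput from iostat, KiB/s
bw_limit=$((bw_limit / 1024 / 10))  # KiB/s -> MiB/s, then keep ~1/10th of it
if [ "$bw_limit" -lt 2 ]; then      # the log's "'[' 8 -lt 2 ']'" guard: floor at 2 MiB/s
    bw_limit=2
fi
echo "$bw_limit"                    # cap passed to bdev_set_qos_limit
```

With the logged 89088 KiB/s this yields 8, matching the `--rw_mbytes_per_sec 8 Null_1` RPC call above.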
04:08:25 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH 00:13:16.858 04:08:25 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@375 -- # local qos_dev=Null_1 00:13:16.858 04:08:25 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@376 -- # local iostat_result 00:13:16.858 04:08:25 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:16.858 04:08:25 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # grep Null_1 00:13:16.858 04:08:25 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # tail -1 00:13:22.130 04:08:30 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # iostat_result='Null_1 2048.22 8192.89 0.00 0.00 8404.00 0.00 0.00 ' 00:13:22.130 04:08:30 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']' 00:13:22.130 04:08:30 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:22.130 04:08:30 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # awk '{print $6}' 00:13:22.130 04:08:30 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # iostat_result=8404.00 00:13:22.130 04:08:30 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@384 -- # echo 8404 00:13:22.130 04:08:30 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@391 -- # qos_result=8404 00:13:22.130 04:08:30 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:22.130 04:08:30 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@393 -- # qos_limit=8192 00:13:22.130 04:08:30 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@395 -- # lower_limit=7372 00:13:22.130 04:08:30 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@396 -- # upper_limit=9011 00:13:22.130 04:08:30 blockdev_general.bdev_qos.bdev_qos_bw -- 
bdev/blockdev.sh@399 -- # '[' 8404 -lt 7372 ']' 00:13:22.130 04:08:30 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@399 -- # '[' 8404 -gt 9011 ']' 00:13:22.130 00:13:22.130 real 0m5.271s 00:13:22.130 user 0m0.111s 00:13:22.131 sys 0m0.043s 00:13:22.131 04:08:30 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:22.131 04:08:30 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@10 -- # set +x 00:13:22.131 ************************************ 00:13:22.131 END TEST bdev_qos_bw 00:13:22.131 ************************************ 00:13:22.131 04:08:30 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0 00:13:22.131 04:08:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@435 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:13:22.131 04:08:30 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.131 04:08:30 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:22.131 04:08:30 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.131 04:08:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@436 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:13:22.131 04:08:30 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:13:22.131 04:08:30 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:22.131 04:08:30 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:22.131 ************************************ 00:13:22.131 START TEST bdev_qos_ro_bw 00:13:22.131 ************************************ 00:13:22.131 04:08:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1123 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:13:22.131 04:08:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@388 -- # local qos_limit=2 00:13:22.131 04:08:30 
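Each run_qos_test pass above accepts a measured result inside a band around the configured limit; the logged pairs (15000 -> 13500/16500, 8192 -> 7372/9011, 2048 -> 1843/2252) all match integer 90%/110% bounds. A sketch of that check for the bandwidth case, using the numbers from this run:

```shell
qos_limit=8192   # configured cap in KiB/s (8 MiB/s)
qos_result=8404  # measured KiB/s from iostat, as logged above

lower_limit=$((qos_limit * 90 / 100))   # 7372, matching blockdev.sh@395
upper_limit=$((qos_limit * 110 / 100))  # 9011, matching blockdev.sh@396

if [ "$qos_result" -lt "$lower_limit" ] || [ "$qos_result" -gt "$upper_limit" ]; then
    echo "FAIL: $qos_result outside [$lower_limit, $upper_limit]"
else
    echo "PASS: $qos_result within [$lower_limit, $upper_limit]"
fi
```

Note the bounds use integer arithmetic, which is why 8192 gives 7372 rather than 7372.8.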
blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@389 -- # local qos_result=0 00:13:22.131 04:08:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@391 -- # get_io_result BANDWIDTH Malloc_0 00:13:22.131 04:08:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH 00:13:22.131 04:08:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0 00:13:22.131 04:08:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@376 -- # local iostat_result 00:13:22.131 04:08:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:22.131 04:08:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # grep Malloc_0 00:13:22.131 04:08:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # tail -1 00:13:27.402 04:08:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0 512.63 2050.52 0.00 0.00 2056.00 0.00 0.00 ' 00:13:27.402 04:08:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']' 00:13:27.402 04:08:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:27.402 04:08:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # awk '{print $6}' 00:13:27.402 04:08:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # iostat_result=2056.00 00:13:27.402 04:08:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@384 -- # echo 2056 00:13:27.402 04:08:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@391 -- # qos_result=2056 00:13:27.402 04:08:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:27.402 04:08:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@393 -- # qos_limit=2048 
00:13:27.402 04:08:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@395 -- # lower_limit=1843 00:13:27.402 04:08:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@396 -- # upper_limit=2252 00:13:27.402 04:08:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@399 -- # '[' 2056 -lt 1843 ']' 00:13:27.402 04:08:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@399 -- # '[' 2056 -gt 2252 ']' 00:13:27.402 00:13:27.402 real 0m5.176s 00:13:27.402 user 0m0.114s 00:13:27.402 sys 0m0.042s 00:13:27.402 04:08:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:27.402 04:08:35 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@10 -- # set +x 00:13:27.402 ************************************ 00:13:27.402 END TEST bdev_qos_ro_bw 00:13:27.402 ************************************ 00:13:27.402 04:08:35 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0 00:13:27.402 04:08:35 blockdev_general.bdev_qos -- bdev/blockdev.sh@458 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:13:27.402 04:08:35 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.402 04:08:35 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:27.971 04:08:36 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.971 04:08:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_null_delete Null_1 00:13:27.971 04:08:36 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.971 04:08:36 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:28.230 00:13:28.230 Latency(us) 00:13:28.230 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:28.230 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:28.230 Malloc_0 : 26.75 20627.62 80.58 0.00 0.00 12290.27 2149.58 503316.48 
00:13:28.230 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:28.230 Null_1 : 27.03 20879.68 81.56 0.00 0.00 12225.79 783.16 278501.79 00:13:28.230 =================================================================================================================== 00:13:28.230 Total : 41507.30 162.14 0.00 0.00 12257.67 783.16 503316.48 00:13:28.230 0 00:13:28.230 04:08:36 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.230 04:08:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@460 -- # killprocess 2600833 00:13:28.230 04:08:36 blockdev_general.bdev_qos -- common/autotest_common.sh@948 -- # '[' -z 2600833 ']' 00:13:28.230 04:08:36 blockdev_general.bdev_qos -- common/autotest_common.sh@952 -- # kill -0 2600833 00:13:28.230 04:08:36 blockdev_general.bdev_qos -- common/autotest_common.sh@953 -- # uname 00:13:28.231 04:08:36 blockdev_general.bdev_qos -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:28.231 04:08:36 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2600833 00:13:28.231 04:08:36 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:28.231 04:08:36 blockdev_general.bdev_qos -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:28.231 04:08:36 blockdev_general.bdev_qos -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2600833' 00:13:28.231 killing process with pid 2600833 00:13:28.231 04:08:36 blockdev_general.bdev_qos -- common/autotest_common.sh@967 -- # kill 2600833 00:13:28.231 Received shutdown signal, test time was about 27.100703 seconds 00:13:28.231 00:13:28.231 Latency(us) 00:13:28.231 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:28.231 =================================================================================================================== 00:13:28.231 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:28.231 
04:08:36 blockdev_general.bdev_qos -- common/autotest_common.sh@972 -- # wait 2600833 00:13:30.138 04:08:38 blockdev_general.bdev_qos -- bdev/blockdev.sh@461 -- # trap - SIGINT SIGTERM EXIT 00:13:30.138 00:13:30.138 real 0m30.326s 00:13:30.138 user 0m30.860s 00:13:30.138 sys 0m0.982s 00:13:30.138 04:08:38 blockdev_general.bdev_qos -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:30.138 04:08:38 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:30.138 ************************************ 00:13:30.138 END TEST bdev_qos 00:13:30.138 ************************************ 00:13:30.138 04:08:38 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:13:30.138 04:08:38 blockdev_general -- bdev/blockdev.sh@788 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:13:30.138 04:08:38 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:30.138 04:08:38 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:30.138 04:08:38 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:30.138 ************************************ 00:13:30.138 START TEST bdev_qd_sampling 00:13:30.138 ************************************ 00:13:30.138 04:08:38 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1123 -- # qd_sampling_test_suite '' 00:13:30.138 04:08:38 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@537 -- # QD_DEV=Malloc_QD 00:13:30.138 04:08:38 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@540 -- # QD_PID=2606190 00:13:30.138 04:08:38 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@541 -- # echo 'Process bdev QD sampling period testing pid: 2606190' 00:13:30.138 Process bdev QD sampling period testing pid: 2606190 00:13:30.138 04:08:38 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:13:30.138 04:08:38 
blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@542 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:13:30.138 04:08:38 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@543 -- # waitforlisten 2606190 00:13:30.138 04:08:38 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@829 -- # '[' -z 2606190 ']' 00:13:30.138 04:08:38 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.138 04:08:38 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:30.138 04:08:38 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:30.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:30.138 04:08:38 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:30.138 04:08:38 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:30.138 [2024-07-23 04:08:38.710352] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:13:30.138 [2024-07-23 04:08:38.710475] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2606190 ] 00:13:30.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.138 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:30.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.138 EAL: Requested device 0000:3d:01.1 cannot be used 00:13:30.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.138 EAL: Requested device 0000:3d:01.2 cannot be used 00:13:30.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.138 EAL: Requested device 0000:3d:01.3 cannot be used 00:13:30.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.138 EAL: Requested device 0000:3d:01.4 cannot be used 00:13:30.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.138 EAL: Requested device 0000:3d:01.5 cannot be used 00:13:30.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.138 EAL: Requested device 0000:3d:01.6 cannot be used 00:13:30.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.138 EAL: Requested device 0000:3d:01.7 cannot be used 00:13:30.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.138 EAL: Requested device 0000:3d:02.0 cannot be used 00:13:30.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.138 EAL: Requested device 0000:3d:02.1 cannot be used 00:13:30.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.138 EAL: Requested device 0000:3d:02.2 cannot be used 00:13:30.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.139 EAL: Requested device 0000:3d:02.3 cannot be used 
00:13:30.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.139 EAL: Requested device 0000:3d:02.4 cannot be used 00:13:30.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.139 EAL: Requested device 0000:3d:02.5 cannot be used 00:13:30.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.139 EAL: Requested device 0000:3d:02.6 cannot be used 00:13:30.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.139 EAL: Requested device 0000:3d:02.7 cannot be used 00:13:30.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.139 EAL: Requested device 0000:3f:01.0 cannot be used 00:13:30.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.139 EAL: Requested device 0000:3f:01.1 cannot be used 00:13:30.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.139 EAL: Requested device 0000:3f:01.2 cannot be used 00:13:30.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.139 EAL: Requested device 0000:3f:01.3 cannot be used 00:13:30.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.139 EAL: Requested device 0000:3f:01.4 cannot be used 00:13:30.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.139 EAL: Requested device 0000:3f:01.5 cannot be used 00:13:30.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.139 EAL: Requested device 0000:3f:01.6 cannot be used 00:13:30.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.139 EAL: Requested device 0000:3f:01.7 cannot be used 00:13:30.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.139 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:30.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.139 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:30.139 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.139 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:30.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.139 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:30.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.139 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:30.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.139 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:30.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.139 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:30.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.139 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:30.398 [2024-07-23 04:08:38.936762] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:30.658 [2024-07-23 04:08:39.208213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.658 [2024-07-23 04:08:39.208218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:31.227 04:08:39 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:31.227 04:08:39 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@862 -- # return 0 00:13:31.227 04:08:39 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@545 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:13:31.227 04:08:39 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.227 04:08:39 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:31.227 Malloc_QD 00:13:31.227 04:08:39 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.227 04:08:39 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@546 -- # waitforbdev Malloc_QD 00:13:31.227 04:08:39 blockdev_general.bdev_qd_sampling -- 
common/autotest_common.sh@897 -- # local bdev_name=Malloc_QD 00:13:31.227 04:08:39 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:31.227 04:08:39 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@899 -- # local i 00:13:31.227 04:08:39 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:31.227 04:08:39 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:31.227 04:08:39 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:13:31.227 04:08:39 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.227 04:08:39 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:31.227 04:08:39 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.227 04:08:39 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:13:31.227 04:08:39 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.227 04:08:39 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:31.227 [ 00:13:31.227 { 00:13:31.227 "name": "Malloc_QD", 00:13:31.227 "aliases": [ 00:13:31.227 "bc49f791-99c2-4552-891f-322215870917" 00:13:31.227 ], 00:13:31.227 "product_name": "Malloc disk", 00:13:31.227 "block_size": 512, 00:13:31.227 "num_blocks": 262144, 00:13:31.227 "uuid": "bc49f791-99c2-4552-891f-322215870917", 00:13:31.227 "assigned_rate_limits": { 00:13:31.227 "rw_ios_per_sec": 0, 00:13:31.227 "rw_mbytes_per_sec": 0, 00:13:31.227 "r_mbytes_per_sec": 0, 00:13:31.227 "w_mbytes_per_sec": 0 00:13:31.227 }, 00:13:31.227 "claimed": false, 00:13:31.227 "zoned": false, 00:13:31.227 "supported_io_types": { 00:13:31.227 "read": true, 00:13:31.227 "write": true, 00:13:31.227 "unmap": true, 
00:13:31.227 "flush": true, 00:13:31.227 "reset": true, 00:13:31.227 "nvme_admin": false, 00:13:31.227 "nvme_io": false, 00:13:31.227 "nvme_io_md": false, 00:13:31.227 "write_zeroes": true, 00:13:31.227 "zcopy": true, 00:13:31.227 "get_zone_info": false, 00:13:31.227 "zone_management": false, 00:13:31.227 "zone_append": false, 00:13:31.227 "compare": false, 00:13:31.227 "compare_and_write": false, 00:13:31.227 "abort": true, 00:13:31.227 "seek_hole": false, 00:13:31.227 "seek_data": false, 00:13:31.227 "copy": true, 00:13:31.227 "nvme_iov_md": false 00:13:31.227 }, 00:13:31.227 "memory_domains": [ 00:13:31.227 { 00:13:31.227 "dma_device_id": "system", 00:13:31.227 "dma_device_type": 1 00:13:31.227 }, 00:13:31.227 { 00:13:31.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.227 "dma_device_type": 2 00:13:31.227 } 00:13:31.227 ], 00:13:31.227 "driver_specific": {} 00:13:31.227 } 00:13:31.227 ] 00:13:31.227 04:08:39 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.227 04:08:39 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@905 -- # return 0 00:13:31.227 04:08:39 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@549 -- # sleep 2 00:13:31.227 04:08:39 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@548 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:31.486 Running I/O for 5 seconds... 
00:13:33.392 04:08:41 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@550 -- # qd_sampling_function_test Malloc_QD 00:13:33.392 04:08:41 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@518 -- # local bdev_name=Malloc_QD 00:13:33.392 04:08:41 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@519 -- # local sampling_period=10 00:13:33.392 04:08:41 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@520 -- # local iostats 00:13:33.392 04:08:41 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@522 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:13:33.392 04:08:41 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.392 04:08:41 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:33.392 04:08:42 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.392 04:08:42 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@524 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:13:33.392 04:08:42 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.392 04:08:42 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:33.392 04:08:42 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.392 04:08:42 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@524 -- # iostats='{ 00:13:33.392 "tick_rate": 2500000000, 00:13:33.392 "ticks": 13751882012767664, 00:13:33.392 "bdevs": [ 00:13:33.392 { 00:13:33.392 "name": "Malloc_QD", 00:13:33.392 "bytes_read": 750825984, 00:13:33.392 "num_read_ops": 183300, 00:13:33.392 "bytes_written": 0, 00:13:33.392 "num_write_ops": 0, 00:13:33.392 "bytes_unmapped": 0, 00:13:33.392 "num_unmap_ops": 0, 00:13:33.392 "bytes_copied": 0, 00:13:33.392 "num_copy_ops": 0, 00:13:33.392 "read_latency_ticks": 2436215533770, 00:13:33.392 "max_read_latency_ticks": 14144190, 00:13:33.392 "min_read_latency_ticks": 486230, 
00:13:33.392 "write_latency_ticks": 0, 00:13:33.392 "max_write_latency_ticks": 0, 00:13:33.392 "min_write_latency_ticks": 0, 00:13:33.392 "unmap_latency_ticks": 0, 00:13:33.392 "max_unmap_latency_ticks": 0, 00:13:33.392 "min_unmap_latency_ticks": 0, 00:13:33.392 "copy_latency_ticks": 0, 00:13:33.392 "max_copy_latency_ticks": 0, 00:13:33.392 "min_copy_latency_ticks": 0, 00:13:33.392 "io_error": {}, 00:13:33.392 "queue_depth_polling_period": 10, 00:13:33.392 "queue_depth": 512, 00:13:33.392 "io_time": 30, 00:13:33.392 "weighted_io_time": 15360 00:13:33.392 } 00:13:33.392 ] 00:13:33.392 }' 00:13:33.392 04:08:42 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@526 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:13:33.392 04:08:42 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@526 -- # qd_sampling_period=10 00:13:33.392 04:08:42 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@528 -- # '[' 10 == null ']' 00:13:33.392 04:08:42 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@528 -- # '[' 10 -ne 10 ']' 00:13:33.392 04:08:42 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@552 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:13:33.392 04:08:42 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.392 04:08:42 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:33.392 00:13:33.392 Latency(us) 00:13:33.392 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:33.392 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:13:33.392 Malloc_QD : 1.99 47960.81 187.35 0.00 0.00 5323.50 1336.93 5662.31 00:13:33.392 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:33.392 Malloc_QD : 1.99 48198.28 188.27 0.00 0.00 5297.67 976.49 5478.81 00:13:33.392 =================================================================================================================== 00:13:33.392 Total : 96159.09 375.62 
0.00 0.00 5310.55 976.49 5662.31 00:13:33.652 0 00:13:33.652 04:08:42 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.652 04:08:42 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@553 -- # killprocess 2606190 00:13:33.652 04:08:42 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@948 -- # '[' -z 2606190 ']' 00:13:33.652 04:08:42 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@952 -- # kill -0 2606190 00:13:33.652 04:08:42 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@953 -- # uname 00:13:33.652 04:08:42 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:33.652 04:08:42 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2606190 00:13:33.652 04:08:42 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:33.652 04:08:42 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:33.652 04:08:42 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2606190' 00:13:33.652 killing process with pid 2606190 00:13:33.652 04:08:42 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@967 -- # kill 2606190 00:13:33.652 Received shutdown signal, test time was about 2.205629 seconds 00:13:33.652 00:13:33.652 Latency(us) 00:13:33.652 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:33.652 =================================================================================================================== 00:13:33.652 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:33.652 04:08:42 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@972 -- # wait 2606190 00:13:35.558 04:08:44 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@554 -- # trap - SIGINT SIGTERM EXIT 00:13:35.558 00:13:35.558 real 0m5.502s 
00:13:35.558 user 0m9.875s 00:13:35.558 sys 0m0.589s 00:13:35.558 04:08:44 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:35.558 04:08:44 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:35.558 ************************************ 00:13:35.558 END TEST bdev_qd_sampling 00:13:35.558 ************************************ 00:13:35.558 04:08:44 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:13:35.558 04:08:44 blockdev_general -- bdev/blockdev.sh@789 -- # run_test bdev_error error_test_suite '' 00:13:35.558 04:08:44 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:35.558 04:08:44 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:35.558 04:08:44 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:35.558 ************************************ 00:13:35.558 START TEST bdev_error 00:13:35.558 ************************************ 00:13:35.558 04:08:44 blockdev_general.bdev_error -- common/autotest_common.sh@1123 -- # error_test_suite '' 00:13:35.558 04:08:44 blockdev_general.bdev_error -- bdev/blockdev.sh@465 -- # DEV_1=Dev_1 00:13:35.558 04:08:44 blockdev_general.bdev_error -- bdev/blockdev.sh@466 -- # DEV_2=Dev_2 00:13:35.558 04:08:44 blockdev_general.bdev_error -- bdev/blockdev.sh@467 -- # ERR_DEV=EE_Dev_1 00:13:35.558 04:08:44 blockdev_general.bdev_error -- bdev/blockdev.sh@471 -- # ERR_PID=2607019 00:13:35.558 04:08:44 blockdev_general.bdev_error -- bdev/blockdev.sh@472 -- # echo 'Process error testing pid: 2607019' 00:13:35.558 Process error testing pid: 2607019 00:13:35.558 04:08:44 blockdev_general.bdev_error -- bdev/blockdev.sh@470 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:13:35.558 04:08:44 blockdev_general.bdev_error -- bdev/blockdev.sh@473 -- # waitforlisten 2607019 00:13:35.558 04:08:44 
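The bdev_get_iostat dump in the qd_sampling test above reports latency in ticks against the advertised 2.5 GHz tick_rate; an average per-read latency can be recovered as read_latency_ticks / num_read_ops / tick_rate. A sketch using the logged Malloc_QD counters (the result lands near the 5297-5323 us per-job averages in the Latency table, which splits the same I/O across two cores):

```shell
tick_rate=2500000000            # from the bdev_get_iostat dump above
num_read_ops=183300
read_latency_ticks=2436215533770

# Integer math: ticks-per-op first, then ticks -> microseconds
# (2500000000 ticks/s == 2500 ticks/us)
avg_us=$((read_latency_ticks / num_read_ops * 1000000 / tick_rate))
echo "${avg_us} us"
```

This is a back-of-the-envelope check on the stats, not a value the test itself computes.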
blockdev_general.bdev_error -- common/autotest_common.sh@829 -- # '[' -z 2607019 ']' 00:13:35.558 04:08:44 blockdev_general.bdev_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:35.558 04:08:44 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:35.558 04:08:44 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:35.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:35.558 04:08:44 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:35.558 04:08:44 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:35.558 [2024-07-23 04:08:44.303309] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:13:35.558 [2024-07-23 04:08:44.303432] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2607019 ] 00:13:35.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:35.818 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:35.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:35.818 EAL: Requested device 0000:3d:01.1 cannot be used 00:13:35.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:35.818 EAL: Requested device 0000:3d:01.2 cannot be used 00:13:35.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:35.818 EAL: Requested device 0000:3d:01.3 cannot be used 00:13:35.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:35.818 EAL: Requested device 0000:3d:01.4 cannot be used 00:13:35.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:13:35.818 EAL: Requested device 0000:3d:01.5 cannot be used
00:13:35.818 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:35.818 EAL: Requested device 0000:3d:01.6 cannot be used
00:13:35.818 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:35.818 EAL: Requested device 0000:3d:01.7 cannot be used
00:13:35.818 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:35.818 EAL: Requested device 0000:3d:02.0 cannot be used
00:13:35.818 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:35.818 EAL: Requested device 0000:3d:02.1 cannot be used
00:13:35.818 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:35.818 EAL: Requested device 0000:3d:02.2 cannot be used
00:13:35.818 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:35.818 EAL: Requested device 0000:3d:02.3 cannot be used
00:13:35.818 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:35.818 EAL: Requested device 0000:3d:02.4 cannot be used
00:13:35.818 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:35.818 EAL: Requested device 0000:3d:02.5 cannot be used
00:13:35.818 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:35.818 EAL: Requested device 0000:3d:02.6 cannot be used
00:13:35.818 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:35.818 EAL: Requested device 0000:3d:02.7 cannot be used
00:13:35.818 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:35.818 EAL: Requested device 0000:3f:01.0 cannot be used
00:13:35.818 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:35.818 EAL: Requested device 0000:3f:01.1 cannot be used
00:13:35.818 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:35.818 EAL: Requested device 0000:3f:01.2 cannot be used
00:13:35.818 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:35.818 EAL:
Requested device 0000:3f:01.3 cannot be used 00:13:35.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:35.818 EAL: Requested device 0000:3f:01.4 cannot be used 00:13:35.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:35.818 EAL: Requested device 0000:3f:01.5 cannot be used 00:13:35.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:35.818 EAL: Requested device 0000:3f:01.6 cannot be used 00:13:35.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:35.818 EAL: Requested device 0000:3f:01.7 cannot be used 00:13:35.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:35.818 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:35.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:35.818 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:35.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:35.818 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:35.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:35.818 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:35.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:35.818 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:35.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:35.818 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:35.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:35.818 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:35.818 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:35.818 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:35.818 [2024-07-23 04:08:44.517850] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.100 [2024-07-23 04:08:44.785312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:36.682 04:08:45 blockdev_general.bdev_error -- 
common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:36.682 04:08:45 blockdev_general.bdev_error -- common/autotest_common.sh@862 -- # return 0 00:13:36.682 04:08:45 blockdev_general.bdev_error -- bdev/blockdev.sh@475 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:13:36.682 04:08:45 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.682 04:08:45 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:36.941 Dev_1 00:13:36.941 04:08:45 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.941 04:08:45 blockdev_general.bdev_error -- bdev/blockdev.sh@476 -- # waitforbdev Dev_1 00:13:36.941 04:08:45 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:13:36.941 04:08:45 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:36.941 04:08:45 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:13:36.941 04:08:45 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:36.941 04:08:45 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:36.941 04:08:45 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:13:36.941 04:08:45 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.941 04:08:45 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:36.941 04:08:45 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.941 04:08:45 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:13:36.941 04:08:45 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.941 04:08:45 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:36.941 [ 00:13:36.941 { 00:13:36.941 "name": 
"Dev_1", 00:13:36.941 "aliases": [ 00:13:36.941 "8fb90006-0fa0-4ebd-b8cd-8d75c587aa29" 00:13:36.941 ], 00:13:36.941 "product_name": "Malloc disk", 00:13:36.941 "block_size": 512, 00:13:36.941 "num_blocks": 262144, 00:13:36.941 "uuid": "8fb90006-0fa0-4ebd-b8cd-8d75c587aa29", 00:13:36.941 "assigned_rate_limits": { 00:13:36.941 "rw_ios_per_sec": 0, 00:13:36.941 "rw_mbytes_per_sec": 0, 00:13:36.941 "r_mbytes_per_sec": 0, 00:13:36.941 "w_mbytes_per_sec": 0 00:13:36.941 }, 00:13:36.941 "claimed": false, 00:13:36.941 "zoned": false, 00:13:36.941 "supported_io_types": { 00:13:36.941 "read": true, 00:13:36.941 "write": true, 00:13:36.941 "unmap": true, 00:13:36.941 "flush": true, 00:13:36.941 "reset": true, 00:13:36.941 "nvme_admin": false, 00:13:36.941 "nvme_io": false, 00:13:36.941 "nvme_io_md": false, 00:13:36.941 "write_zeroes": true, 00:13:36.941 "zcopy": true, 00:13:36.941 "get_zone_info": false, 00:13:36.941 "zone_management": false, 00:13:36.941 "zone_append": false, 00:13:36.941 "compare": false, 00:13:36.941 "compare_and_write": false, 00:13:36.941 "abort": true, 00:13:36.941 "seek_hole": false, 00:13:36.941 "seek_data": false, 00:13:36.941 "copy": true, 00:13:36.941 "nvme_iov_md": false 00:13:36.941 }, 00:13:36.941 "memory_domains": [ 00:13:36.941 { 00:13:36.941 "dma_device_id": "system", 00:13:36.941 "dma_device_type": 1 00:13:36.941 }, 00:13:36.941 { 00:13:36.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.941 "dma_device_type": 2 00:13:36.941 } 00:13:36.941 ], 00:13:36.941 "driver_specific": {} 00:13:36.941 } 00:13:36.941 ] 00:13:36.941 04:08:45 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.941 04:08:45 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:13:36.941 04:08:45 blockdev_general.bdev_error -- bdev/blockdev.sh@477 -- # rpc_cmd bdev_error_create Dev_1 00:13:36.941 04:08:45 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.941 04:08:45 
blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:36.941 true 00:13:36.941 04:08:45 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.941 04:08:45 blockdev_general.bdev_error -- bdev/blockdev.sh@478 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:13:36.942 04:08:45 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.942 04:08:45 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:37.200 Dev_2 00:13:37.200 04:08:45 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.200 04:08:45 blockdev_general.bdev_error -- bdev/blockdev.sh@479 -- # waitforbdev Dev_2 00:13:37.200 04:08:45 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:13:37.200 04:08:45 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:37.200 04:08:45 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:13:37.200 04:08:45 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:37.200 04:08:45 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:37.200 04:08:45 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:13:37.200 04:08:45 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.200 04:08:45 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:37.200 04:08:45 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.200 04:08:45 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:13:37.200 04:08:45 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.200 04:08:45 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 
00:13:37.200 [ 00:13:37.200 { 00:13:37.200 "name": "Dev_2", 00:13:37.200 "aliases": [ 00:13:37.200 "c64579ae-9ccd-4680-953d-7eed8bae868f" 00:13:37.200 ], 00:13:37.200 "product_name": "Malloc disk", 00:13:37.200 "block_size": 512, 00:13:37.200 "num_blocks": 262144, 00:13:37.200 "uuid": "c64579ae-9ccd-4680-953d-7eed8bae868f", 00:13:37.200 "assigned_rate_limits": { 00:13:37.200 "rw_ios_per_sec": 0, 00:13:37.200 "rw_mbytes_per_sec": 0, 00:13:37.200 "r_mbytes_per_sec": 0, 00:13:37.200 "w_mbytes_per_sec": 0 00:13:37.200 }, 00:13:37.200 "claimed": false, 00:13:37.200 "zoned": false, 00:13:37.200 "supported_io_types": { 00:13:37.200 "read": true, 00:13:37.200 "write": true, 00:13:37.200 "unmap": true, 00:13:37.200 "flush": true, 00:13:37.200 "reset": true, 00:13:37.200 "nvme_admin": false, 00:13:37.200 "nvme_io": false, 00:13:37.200 "nvme_io_md": false, 00:13:37.200 "write_zeroes": true, 00:13:37.200 "zcopy": true, 00:13:37.200 "get_zone_info": false, 00:13:37.200 "zone_management": false, 00:13:37.200 "zone_append": false, 00:13:37.200 "compare": false, 00:13:37.200 "compare_and_write": false, 00:13:37.200 "abort": true, 00:13:37.200 "seek_hole": false, 00:13:37.200 "seek_data": false, 00:13:37.200 "copy": true, 00:13:37.200 "nvme_iov_md": false 00:13:37.200 }, 00:13:37.200 "memory_domains": [ 00:13:37.200 { 00:13:37.200 "dma_device_id": "system", 00:13:37.200 "dma_device_type": 1 00:13:37.200 }, 00:13:37.200 { 00:13:37.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.200 "dma_device_type": 2 00:13:37.200 } 00:13:37.200 ], 00:13:37.200 "driver_specific": {} 00:13:37.200 } 00:13:37.200 ] 00:13:37.200 04:08:45 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.200 04:08:45 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:13:37.200 04:08:45 blockdev_general.bdev_error -- bdev/blockdev.sh@480 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:13:37.200 04:08:45 blockdev_general.bdev_error 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.200 04:08:45 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:37.200 04:08:45 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.200 04:08:45 blockdev_general.bdev_error -- bdev/blockdev.sh@483 -- # sleep 1 00:13:37.200 04:08:45 blockdev_general.bdev_error -- bdev/blockdev.sh@482 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:13:37.200 Running I/O for 5 seconds... 00:13:38.136 04:08:46 blockdev_general.bdev_error -- bdev/blockdev.sh@486 -- # kill -0 2607019 00:13:38.136 04:08:46 blockdev_general.bdev_error -- bdev/blockdev.sh@487 -- # echo 'Process is existed as continue on error is set. Pid: 2607019' 00:13:38.136 Process is existed as continue on error is set. Pid: 2607019 00:13:38.136 04:08:46 blockdev_general.bdev_error -- bdev/blockdev.sh@494 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:13:38.136 04:08:46 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.136 04:08:46 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:38.136 04:08:46 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.136 04:08:46 blockdev_general.bdev_error -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_malloc_delete Dev_1 00:13:38.136 04:08:46 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.136 04:08:46 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:38.136 Timeout while waiting for response: 00:13:38.136 00:13:38.136 00:13:38.395 04:08:46 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.395 04:08:46 blockdev_general.bdev_error -- bdev/blockdev.sh@496 -- # sleep 5 00:13:42.589 00:13:42.589 Latency(us) 00:13:42.589 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:42.589 Job: 
EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096)
00:13:42.589 EE_Dev_1 : 0.90 36058.74 140.85 5.54 0.00 440.11 143.36 740.56
00:13:42.589 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096)
00:13:42.589 Dev_2 : 5.00 75476.06 294.83 0.00 0.00 208.53 69.22 159383.55
00:13:42.589 ===================================================================================================================
00:13:42.589 Total : 111534.80 435.68 5.54 0.00 226.90 69.22 159383.55
00:13:43.526 04:08:51 blockdev_general.bdev_error -- bdev/blockdev.sh@498 -- # killprocess 2607019
00:13:43.526 04:08:51 blockdev_general.bdev_error -- common/autotest_common.sh@948 -- # '[' -z 2607019 ']'
00:13:43.526 04:08:51 blockdev_general.bdev_error -- common/autotest_common.sh@952 -- # kill -0 2607019
00:13:43.526 04:08:51 blockdev_general.bdev_error -- common/autotest_common.sh@953 -- # uname
00:13:43.526 04:08:51 blockdev_general.bdev_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:13:43.526 04:08:51 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2607019
00:13:43.526 04:08:52 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:13:43.526 04:08:52 blockdev_general.bdev_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:13:43.526 04:08:52 blockdev_general.bdev_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2607019'
00:13:43.526 killing process with pid 2607019
00:13:43.526 04:08:52 blockdev_general.bdev_error -- common/autotest_common.sh@967 -- # kill 2607019
00:13:43.526 Received shutdown signal, test time was about 5.000000 seconds
00:13:43.526
00:13:43.526 Latency(us)
00:13:43.526 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:43.526 ===================================================================================================================
00:13:43.526 Total : 0.00 0.00
0.00 0.00 0.00 0.00 0.00 00:13:43.526 04:08:52 blockdev_general.bdev_error -- common/autotest_common.sh@972 -- # wait 2607019 00:13:45.430 04:08:54 blockdev_general.bdev_error -- bdev/blockdev.sh@502 -- # ERR_PID=2608867 00:13:45.430 04:08:54 blockdev_general.bdev_error -- bdev/blockdev.sh@503 -- # echo 'Process error testing pid: 2608867' 00:13:45.430 Process error testing pid: 2608867 00:13:45.430 04:08:54 blockdev_general.bdev_error -- bdev/blockdev.sh@501 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:13:45.430 04:08:54 blockdev_general.bdev_error -- bdev/blockdev.sh@504 -- # waitforlisten 2608867 00:13:45.430 04:08:54 blockdev_general.bdev_error -- common/autotest_common.sh@829 -- # '[' -z 2608867 ']' 00:13:45.430 04:08:54 blockdev_general.bdev_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.430 04:08:54 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:45.430 04:08:54 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.430 04:08:54 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:45.430 04:08:54 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:45.689 [2024-07-23 04:08:54.297972] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:13:45.689 [2024-07-23 04:08:54.298098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2608867 ] 00:13:45.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:45.689 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:45.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:45.689 EAL: Requested device 0000:3d:01.1 cannot be used 00:13:45.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:45.689 EAL: Requested device 0000:3d:01.2 cannot be used 00:13:45.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:45.689 EAL: Requested device 0000:3d:01.3 cannot be used 00:13:45.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:45.689 EAL: Requested device 0000:3d:01.4 cannot be used 00:13:45.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:45.689 EAL: Requested device 0000:3d:01.5 cannot be used 00:13:45.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:45.689 EAL: Requested device 0000:3d:01.6 cannot be used 00:13:45.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:45.689 EAL: Requested device 0000:3d:01.7 cannot be used 00:13:45.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:45.689 EAL: Requested device 0000:3d:02.0 cannot be used 00:13:45.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:45.689 EAL: Requested device 0000:3d:02.1 cannot be used 00:13:45.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:45.689 EAL: Requested device 0000:3d:02.2 cannot be used 00:13:45.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:45.689 EAL: Requested device 0000:3d:02.3 cannot be used 
00:13:45.689 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:45.689 EAL: Requested device 0000:3d:02.4 cannot be used
00:13:45.689 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:45.689 EAL: Requested device 0000:3d:02.5 cannot be used
00:13:45.689 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:45.689 EAL: Requested device 0000:3d:02.6 cannot be used
00:13:45.689 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:45.689 EAL: Requested device 0000:3d:02.7 cannot be used
00:13:45.689 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:45.689 EAL: Requested device 0000:3f:01.0 cannot be used
00:13:45.689 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:45.689 EAL: Requested device 0000:3f:01.1 cannot be used
00:13:45.689 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:45.689 EAL: Requested device 0000:3f:01.2 cannot be used
00:13:45.689 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:45.689 EAL: Requested device 0000:3f:01.3 cannot be used
00:13:45.689 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:45.689 EAL: Requested device 0000:3f:01.4 cannot be used
00:13:45.689 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:45.689 EAL: Requested device 0000:3f:01.5 cannot be used
00:13:45.689 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:45.689 EAL: Requested device 0000:3f:01.6 cannot be used
00:13:45.690 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:45.690 EAL: Requested device 0000:3f:01.7 cannot be used
00:13:45.690 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:45.690 EAL: Requested device 0000:3f:02.0 cannot be used
00:13:45.690 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:45.690 EAL: Requested device 0000:3f:02.1 cannot be used
00:13:45.690
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:45.690 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:45.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:45.690 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:45.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:45.690 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:45.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:45.690 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:45.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:45.690 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:45.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:45.690 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:45.949 [2024-07-23 04:08:54.511559] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.208 [2024-07-23 04:08:54.787724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:46.777 04:08:55 blockdev_general.bdev_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:46.777 04:08:55 blockdev_general.bdev_error -- common/autotest_common.sh@862 -- # return 0 00:13:46.777 04:08:55 blockdev_general.bdev_error -- bdev/blockdev.sh@506 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:13:46.777 04:08:55 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.777 04:08:55 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:46.777 Dev_1 00:13:46.777 04:08:55 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.777 04:08:55 blockdev_general.bdev_error -- bdev/blockdev.sh@507 -- # waitforbdev Dev_1 00:13:46.777 04:08:55 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:13:46.777 04:08:55 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 
00:13:46.777 04:08:55 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:13:46.777 04:08:55 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:46.777 04:08:55 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:46.777 04:08:55 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:13:46.777 04:08:55 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.777 04:08:55 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:46.777 04:08:55 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.777 04:08:55 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:13:46.777 04:08:55 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.777 04:08:55 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:46.777 [ 00:13:46.777 { 00:13:46.777 "name": "Dev_1", 00:13:46.777 "aliases": [ 00:13:46.777 "b1a7f01d-6c26-409d-900c-2668b0250843" 00:13:46.777 ], 00:13:46.777 "product_name": "Malloc disk", 00:13:46.777 "block_size": 512, 00:13:46.777 "num_blocks": 262144, 00:13:46.777 "uuid": "b1a7f01d-6c26-409d-900c-2668b0250843", 00:13:46.777 "assigned_rate_limits": { 00:13:46.777 "rw_ios_per_sec": 0, 00:13:46.777 "rw_mbytes_per_sec": 0, 00:13:46.777 "r_mbytes_per_sec": 0, 00:13:46.777 "w_mbytes_per_sec": 0 00:13:46.777 }, 00:13:46.777 "claimed": false, 00:13:46.777 "zoned": false, 00:13:46.777 "supported_io_types": { 00:13:46.777 "read": true, 00:13:46.777 "write": true, 00:13:46.777 "unmap": true, 00:13:46.777 "flush": true, 00:13:46.777 "reset": true, 00:13:46.777 "nvme_admin": false, 00:13:46.777 "nvme_io": false, 00:13:46.777 "nvme_io_md": false, 00:13:46.777 "write_zeroes": true, 00:13:46.777 "zcopy": true, 00:13:46.777 "get_zone_info": 
false, 00:13:46.777 "zone_management": false, 00:13:46.777 "zone_append": false, 00:13:46.777 "compare": false, 00:13:46.777 "compare_and_write": false, 00:13:46.777 "abort": true, 00:13:46.777 "seek_hole": false, 00:13:46.777 "seek_data": false, 00:13:46.777 "copy": true, 00:13:46.777 "nvme_iov_md": false 00:13:46.777 }, 00:13:46.777 "memory_domains": [ 00:13:46.777 { 00:13:46.777 "dma_device_id": "system", 00:13:46.777 "dma_device_type": 1 00:13:46.777 }, 00:13:46.777 { 00:13:46.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:46.777 "dma_device_type": 2 00:13:46.777 } 00:13:46.777 ], 00:13:46.777 "driver_specific": {} 00:13:46.777 } 00:13:46.777 ] 00:13:46.777 04:08:55 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.777 04:08:55 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:13:46.777 04:08:55 blockdev_general.bdev_error -- bdev/blockdev.sh@508 -- # rpc_cmd bdev_error_create Dev_1 00:13:46.777 04:08:55 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.777 04:08:55 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:47.037 true 00:13:47.037 04:08:55 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.037 04:08:55 blockdev_general.bdev_error -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:13:47.037 04:08:55 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.037 04:08:55 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:47.037 Dev_2 00:13:47.037 04:08:55 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.037 04:08:55 blockdev_general.bdev_error -- bdev/blockdev.sh@510 -- # waitforbdev Dev_2 00:13:47.037 04:08:55 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:13:47.037 04:08:55 blockdev_general.bdev_error -- 
common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:47.038 04:08:55 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:13:47.038 04:08:55 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:47.038 04:08:55 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:47.038 04:08:55 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:13:47.038 04:08:55 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.038 04:08:55 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:47.038 04:08:55 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.038 04:08:55 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:13:47.038 04:08:55 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.038 04:08:55 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:47.038 [ 00:13:47.038 { 00:13:47.038 "name": "Dev_2", 00:13:47.038 "aliases": [ 00:13:47.038 "d904bb67-9044-4937-a1e6-50a265fc2657" 00:13:47.038 ], 00:13:47.038 "product_name": "Malloc disk", 00:13:47.038 "block_size": 512, 00:13:47.038 "num_blocks": 262144, 00:13:47.038 "uuid": "d904bb67-9044-4937-a1e6-50a265fc2657", 00:13:47.038 "assigned_rate_limits": { 00:13:47.038 "rw_ios_per_sec": 0, 00:13:47.038 "rw_mbytes_per_sec": 0, 00:13:47.038 "r_mbytes_per_sec": 0, 00:13:47.038 "w_mbytes_per_sec": 0 00:13:47.038 }, 00:13:47.038 "claimed": false, 00:13:47.038 "zoned": false, 00:13:47.038 "supported_io_types": { 00:13:47.038 "read": true, 00:13:47.038 "write": true, 00:13:47.038 "unmap": true, 00:13:47.038 "flush": true, 00:13:47.038 "reset": true, 00:13:47.038 "nvme_admin": false, 00:13:47.038 "nvme_io": false, 00:13:47.038 "nvme_io_md": false, 00:13:47.038 "write_zeroes": true, 
00:13:47.038 "zcopy": true, 00:13:47.038 "get_zone_info": false, 00:13:47.038 "zone_management": false, 00:13:47.038 "zone_append": false, 00:13:47.038 "compare": false, 00:13:47.038 "compare_and_write": false, 00:13:47.038 "abort": true, 00:13:47.038 "seek_hole": false, 00:13:47.038 "seek_data": false, 00:13:47.038 "copy": true, 00:13:47.297 "nvme_iov_md": false 00:13:47.297 }, 00:13:47.297 "memory_domains": [ 00:13:47.297 { 00:13:47.297 "dma_device_id": "system", 00:13:47.297 "dma_device_type": 1 00:13:47.297 }, 00:13:47.297 { 00:13:47.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:47.297 "dma_device_type": 2 00:13:47.297 } 00:13:47.297 ], 00:13:47.297 "driver_specific": {} 00:13:47.297 } 00:13:47.297 ] 00:13:47.297 04:08:55 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.297 04:08:55 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:13:47.297 04:08:55 blockdev_general.bdev_error -- bdev/blockdev.sh@511 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:13:47.297 04:08:55 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.297 04:08:55 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:47.297 04:08:55 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.297 04:08:55 blockdev_general.bdev_error -- bdev/blockdev.sh@514 -- # NOT wait 2608867 00:13:47.297 04:08:55 blockdev_general.bdev_error -- common/autotest_common.sh@648 -- # local es=0 00:13:47.297 04:08:55 blockdev_general.bdev_error -- common/autotest_common.sh@650 -- # valid_exec_arg wait 2608867 00:13:47.297 04:08:55 blockdev_general.bdev_error -- bdev/blockdev.sh@513 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:13:47.297 04:08:55 blockdev_general.bdev_error -- common/autotest_common.sh@636 -- # local arg=wait 00:13:47.297 04:08:55 blockdev_general.bdev_error 
-- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:47.297 04:08:55 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # type -t wait 00:13:47.297 04:08:55 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:47.297 04:08:55 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # wait 2608867 00:13:47.297 Running I/O for 5 seconds... 00:13:47.297 task offset: 73104 on job bdev=EE_Dev_1 fails 00:13:47.297 00:13:47.297 Latency(us) 00:13:47.297 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:47.297 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:47.297 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:13:47.297 EE_Dev_1 : 0.00 27534.42 107.56 6257.82 0.00 388.62 141.72 691.40 00:13:47.297 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:47.297 Dev_2 : 0.00 17857.14 69.75 0.00 0.00 650.50 136.81 1199.31 00:13:47.297 =================================================================================================================== 00:13:47.297 Total : 45391.56 177.31 6257.82 0.00 530.65 136.81 1199.31 00:13:47.297 [2024-07-23 04:08:55.950146] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:47.297 request: 00:13:47.297 { 00:13:47.297 "method": "perform_tests", 00:13:47.297 "req_id": 1 00:13:47.297 } 00:13:47.297 Got JSON-RPC error response 00:13:47.297 response: 00:13:47.297 { 00:13:47.297 "code": -32603, 00:13:47.297 "message": "bdevperf failed with error Operation not permitted" 00:13:47.297 } 00:13:49.835 04:08:58 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # es=255 00:13:49.835 04:08:58 blockdev_general.bdev_error -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:49.835 04:08:58 blockdev_general.bdev_error -- common/autotest_common.sh@660 -- # es=127 00:13:49.835 04:08:58 blockdev_general.bdev_error -- 
common/autotest_common.sh@661 -- # case "$es" in 00:13:49.835 04:08:58 blockdev_general.bdev_error -- common/autotest_common.sh@668 -- # es=1 00:13:49.835 04:08:58 blockdev_general.bdev_error -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:49.835 00:13:49.835 real 0m14.075s 00:13:49.835 user 0m14.104s 00:13:49.835 sys 0m1.166s 00:13:49.835 04:08:58 blockdev_general.bdev_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:49.835 04:08:58 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:49.835 ************************************ 00:13:49.835 END TEST bdev_error 00:13:49.835 ************************************ 00:13:49.835 04:08:58 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:13:49.835 04:08:58 blockdev_general -- bdev/blockdev.sh@790 -- # run_test bdev_stat stat_test_suite '' 00:13:49.835 04:08:58 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:49.835 04:08:58 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:49.835 04:08:58 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:49.835 ************************************ 00:13:49.835 START TEST bdev_stat 00:13:49.835 ************************************ 00:13:49.835 04:08:58 blockdev_general.bdev_stat -- common/autotest_common.sh@1123 -- # stat_test_suite '' 00:13:49.835 04:08:58 blockdev_general.bdev_stat -- bdev/blockdev.sh@591 -- # STAT_DEV=Malloc_STAT 00:13:49.835 04:08:58 blockdev_general.bdev_stat -- bdev/blockdev.sh@595 -- # STAT_PID=2609441 00:13:49.835 04:08:58 blockdev_general.bdev_stat -- bdev/blockdev.sh@596 -- # echo 'Process Bdev IO statistics testing pid: 2609441' 00:13:49.835 Process Bdev IO statistics testing pid: 2609441 00:13:49.835 04:08:58 blockdev_general.bdev_stat -- bdev/blockdev.sh@594 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:13:49.835 04:08:58 
blockdev_general.bdev_stat -- bdev/blockdev.sh@597 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:13:49.835 04:08:58 blockdev_general.bdev_stat -- bdev/blockdev.sh@598 -- # waitforlisten 2609441 00:13:49.835 04:08:58 blockdev_general.bdev_stat -- common/autotest_common.sh@829 -- # '[' -z 2609441 ']' 00:13:49.835 04:08:58 blockdev_general.bdev_stat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.835 04:08:58 blockdev_general.bdev_stat -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:49.835 04:08:58 blockdev_general.bdev_stat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.835 04:08:58 blockdev_general.bdev_stat -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:49.835 04:08:58 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:49.835 [2024-07-23 04:08:58.459517] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:13:49.835 [2024-07-23 04:08:58.459640] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2609441 ] 00:13:49.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:49.835 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:49.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:49.835 EAL: Requested device 0000:3d:01.1 cannot be used 00:13:49.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:49.835 EAL: Requested device 0000:3d:01.2 cannot be used 00:13:49.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:49.835 EAL: Requested device 0000:3d:01.3 cannot be used 00:13:49.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:49.835 EAL: Requested device 0000:3d:01.4 cannot be used 00:13:49.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:49.835 EAL: Requested device 0000:3d:01.5 cannot be used 00:13:49.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:49.835 EAL: Requested device 0000:3d:01.6 cannot be used 00:13:49.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:49.835 EAL: Requested device 0000:3d:01.7 cannot be used 00:13:49.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:49.835 EAL: Requested device 0000:3d:02.0 cannot be used 00:13:49.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:49.835 EAL: Requested device 0000:3d:02.1 cannot be used 00:13:49.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:49.835 EAL: Requested device 0000:3d:02.2 cannot be used 00:13:49.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:49.836 EAL: Requested device 0000:3d:02.3 cannot be used 
00:13:49.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:49.836 EAL: Requested device 0000:3d:02.4 cannot be used 00:13:49.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:49.836 EAL: Requested device 0000:3d:02.5 cannot be used 00:13:49.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:49.836 EAL: Requested device 0000:3d:02.6 cannot be used 00:13:49.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:49.836 EAL: Requested device 0000:3d:02.7 cannot be used 00:13:49.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:49.836 EAL: Requested device 0000:3f:01.0 cannot be used 00:13:49.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:49.836 EAL: Requested device 0000:3f:01.1 cannot be used 00:13:49.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:49.836 EAL: Requested device 0000:3f:01.2 cannot be used 00:13:49.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:49.836 EAL: Requested device 0000:3f:01.3 cannot be used 00:13:49.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:49.836 EAL: Requested device 0000:3f:01.4 cannot be used 00:13:49.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:49.836 EAL: Requested device 0000:3f:01.5 cannot be used 00:13:49.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:49.836 EAL: Requested device 0000:3f:01.6 cannot be used 00:13:49.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:49.836 EAL: Requested device 0000:3f:01.7 cannot be used 00:13:49.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:49.836 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:49.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:49.836 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:49.836 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:49.836 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:49.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:49.836 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:49.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:49.836 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:49.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:49.836 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:49.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:49.836 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:49.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:49.836 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:50.095 [2024-07-23 04:08:58.684824] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:50.354 [2024-07-23 04:08:58.965017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.354 [2024-07-23 04:08:58.965025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:50.923 04:08:59 blockdev_general.bdev_stat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:50.923 04:08:59 blockdev_general.bdev_stat -- common/autotest_common.sh@862 -- # return 0 00:13:50.923 04:08:59 blockdev_general.bdev_stat -- bdev/blockdev.sh@600 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:13:50.923 04:08:59 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.923 04:08:59 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:51.182 Malloc_STAT 00:13:51.182 04:08:59 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.182 04:08:59 blockdev_general.bdev_stat -- bdev/blockdev.sh@601 -- # waitforbdev Malloc_STAT 00:13:51.182 04:08:59 blockdev_general.bdev_stat -- common/autotest_common.sh@897 -- # local 
bdev_name=Malloc_STAT 00:13:51.182 04:08:59 blockdev_general.bdev_stat -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:51.182 04:08:59 blockdev_general.bdev_stat -- common/autotest_common.sh@899 -- # local i 00:13:51.182 04:08:59 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:51.182 04:08:59 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:51.182 04:08:59 blockdev_general.bdev_stat -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:13:51.182 04:08:59 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.182 04:08:59 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:51.182 04:08:59 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.182 04:08:59 blockdev_general.bdev_stat -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:13:51.182 04:08:59 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.182 04:08:59 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:51.182 [ 00:13:51.182 { 00:13:51.182 "name": "Malloc_STAT", 00:13:51.182 "aliases": [ 00:13:51.182 "6d9cd689-8c69-4410-9ed1-ec953e339a4d" 00:13:51.182 ], 00:13:51.182 "product_name": "Malloc disk", 00:13:51.182 "block_size": 512, 00:13:51.182 "num_blocks": 262144, 00:13:51.182 "uuid": "6d9cd689-8c69-4410-9ed1-ec953e339a4d", 00:13:51.182 "assigned_rate_limits": { 00:13:51.182 "rw_ios_per_sec": 0, 00:13:51.182 "rw_mbytes_per_sec": 0, 00:13:51.182 "r_mbytes_per_sec": 0, 00:13:51.182 "w_mbytes_per_sec": 0 00:13:51.182 }, 00:13:51.182 "claimed": false, 00:13:51.182 "zoned": false, 00:13:51.182 "supported_io_types": { 00:13:51.182 "read": true, 00:13:51.182 "write": true, 00:13:51.182 "unmap": true, 00:13:51.182 "flush": true, 00:13:51.182 "reset": true, 00:13:51.182 "nvme_admin": false, 00:13:51.182 "nvme_io": false, 
00:13:51.182 "nvme_io_md": false, 00:13:51.182 "write_zeroes": true, 00:13:51.182 "zcopy": true, 00:13:51.182 "get_zone_info": false, 00:13:51.182 "zone_management": false, 00:13:51.182 "zone_append": false, 00:13:51.182 "compare": false, 00:13:51.182 "compare_and_write": false, 00:13:51.182 "abort": true, 00:13:51.182 "seek_hole": false, 00:13:51.182 "seek_data": false, 00:13:51.182 "copy": true, 00:13:51.182 "nvme_iov_md": false 00:13:51.182 }, 00:13:51.182 "memory_domains": [ 00:13:51.182 { 00:13:51.182 "dma_device_id": "system", 00:13:51.182 "dma_device_type": 1 00:13:51.182 }, 00:13:51.182 { 00:13:51.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.182 "dma_device_type": 2 00:13:51.182 } 00:13:51.182 ], 00:13:51.182 "driver_specific": {} 00:13:51.182 } 00:13:51.182 ] 00:13:51.182 04:08:59 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.182 04:08:59 blockdev_general.bdev_stat -- common/autotest_common.sh@905 -- # return 0 00:13:51.182 04:08:59 blockdev_general.bdev_stat -- bdev/blockdev.sh@604 -- # sleep 2 00:13:51.182 04:08:59 blockdev_general.bdev_stat -- bdev/blockdev.sh@603 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:51.182 Running I/O for 10 seconds... 
00:13:53.089 04:09:01 blockdev_general.bdev_stat -- bdev/blockdev.sh@605 -- # stat_function_test Malloc_STAT 00:13:53.089 04:09:01 blockdev_general.bdev_stat -- bdev/blockdev.sh@558 -- # local bdev_name=Malloc_STAT 00:13:53.089 04:09:01 blockdev_general.bdev_stat -- bdev/blockdev.sh@559 -- # local iostats 00:13:53.089 04:09:01 blockdev_general.bdev_stat -- bdev/blockdev.sh@560 -- # local io_count1 00:13:53.089 04:09:01 blockdev_general.bdev_stat -- bdev/blockdev.sh@561 -- # local io_count2 00:13:53.089 04:09:01 blockdev_general.bdev_stat -- bdev/blockdev.sh@562 -- # local iostats_per_channel 00:13:53.089 04:09:01 blockdev_general.bdev_stat -- bdev/blockdev.sh@563 -- # local io_count_per_channel1 00:13:53.089 04:09:01 blockdev_general.bdev_stat -- bdev/blockdev.sh@564 -- # local io_count_per_channel2 00:13:53.089 04:09:01 blockdev_general.bdev_stat -- bdev/blockdev.sh@565 -- # local io_count_per_channel_all=0 00:13:53.089 04:09:01 blockdev_general.bdev_stat -- bdev/blockdev.sh@567 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:13:53.089 04:09:01 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.089 04:09:01 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:53.089 04:09:01 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.089 04:09:01 blockdev_general.bdev_stat -- bdev/blockdev.sh@567 -- # iostats='{ 00:13:53.089 "tick_rate": 2500000000, 00:13:53.089 "ticks": 13751931228120424, 00:13:53.089 "bdevs": [ 00:13:53.089 { 00:13:53.089 "name": "Malloc_STAT", 00:13:53.089 "bytes_read": 741388800, 00:13:53.089 "num_read_ops": 180996, 00:13:53.089 "bytes_written": 0, 00:13:53.089 "num_write_ops": 0, 00:13:53.089 "bytes_unmapped": 0, 00:13:53.089 "num_unmap_ops": 0, 00:13:53.089 "bytes_copied": 0, 00:13:53.089 "num_copy_ops": 0, 00:13:53.089 "read_latency_ticks": 2418028935952, 00:13:53.089 "max_read_latency_ticks": 15772314, 00:13:53.089 "min_read_latency_ticks": 529066, 
00:13:53.089 "write_latency_ticks": 0, 00:13:53.089 "max_write_latency_ticks": 0, 00:13:53.089 "min_write_latency_ticks": 0, 00:13:53.089 "unmap_latency_ticks": 0, 00:13:53.089 "max_unmap_latency_ticks": 0, 00:13:53.089 "min_unmap_latency_ticks": 0, 00:13:53.089 "copy_latency_ticks": 0, 00:13:53.089 "max_copy_latency_ticks": 0, 00:13:53.089 "min_copy_latency_ticks": 0, 00:13:53.089 "io_error": {} 00:13:53.089 } 00:13:53.089 ] 00:13:53.089 }' 00:13:53.089 04:09:01 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # jq -r '.bdevs[0].num_read_ops' 00:13:53.089 04:09:01 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # io_count1=180996 00:13:53.089 04:09:01 blockdev_general.bdev_stat -- bdev/blockdev.sh@570 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:13:53.089 04:09:01 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.089 04:09:01 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:53.089 04:09:01 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.089 04:09:01 blockdev_general.bdev_stat -- bdev/blockdev.sh@570 -- # iostats_per_channel='{ 00:13:53.089 "tick_rate": 2500000000, 00:13:53.089 "ticks": 13751931404251710, 00:13:53.089 "name": "Malloc_STAT", 00:13:53.089 "channels": [ 00:13:53.089 { 00:13:53.089 "thread_id": 2, 00:13:53.089 "bytes_read": 382730240, 00:13:53.089 "num_read_ops": 93440, 00:13:53.089 "bytes_written": 0, 00:13:53.089 "num_write_ops": 0, 00:13:53.089 "bytes_unmapped": 0, 00:13:53.089 "num_unmap_ops": 0, 00:13:53.089 "bytes_copied": 0, 00:13:53.089 "num_copy_ops": 0, 00:13:53.089 "read_latency_ticks": 1252971363902, 00:13:53.089 "max_read_latency_ticks": 15772314, 00:13:53.089 "min_read_latency_ticks": 10018756, 00:13:53.089 "write_latency_ticks": 0, 00:13:53.089 "max_write_latency_ticks": 0, 00:13:53.089 "min_write_latency_ticks": 0, 00:13:53.089 "unmap_latency_ticks": 0, 00:13:53.089 "max_unmap_latency_ticks": 0, 00:13:53.089 
"min_unmap_latency_ticks": 0, 00:13:53.089 "copy_latency_ticks": 0, 00:13:53.089 "max_copy_latency_ticks": 0, 00:13:53.089 "min_copy_latency_ticks": 0 00:13:53.089 }, 00:13:53.089 { 00:13:53.089 "thread_id": 3, 00:13:53.089 "bytes_read": 385875968, 00:13:53.090 "num_read_ops": 94208, 00:13:53.090 "bytes_written": 0, 00:13:53.090 "num_write_ops": 0, 00:13:53.090 "bytes_unmapped": 0, 00:13:53.090 "num_unmap_ops": 0, 00:13:53.090 "bytes_copied": 0, 00:13:53.090 "num_copy_ops": 0, 00:13:53.090 "read_latency_ticks": 1254212283354, 00:13:53.090 "max_read_latency_ticks": 13722570, 00:13:53.090 "min_read_latency_ticks": 10038132, 00:13:53.090 "write_latency_ticks": 0, 00:13:53.090 "max_write_latency_ticks": 0, 00:13:53.090 "min_write_latency_ticks": 0, 00:13:53.090 "unmap_latency_ticks": 0, 00:13:53.090 "max_unmap_latency_ticks": 0, 00:13:53.090 "min_unmap_latency_ticks": 0, 00:13:53.090 "copy_latency_ticks": 0, 00:13:53.090 "max_copy_latency_ticks": 0, 00:13:53.090 "min_copy_latency_ticks": 0 00:13:53.090 } 00:13:53.090 ] 00:13:53.090 }' 00:13:53.090 04:09:01 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # jq -r '.channels[0].num_read_ops' 00:13:53.349 04:09:01 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # io_count_per_channel1=93440 00:13:53.349 04:09:01 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # io_count_per_channel_all=93440 00:13:53.349 04:09:01 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # jq -r '.channels[1].num_read_ops' 00:13:53.349 04:09:01 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # io_count_per_channel2=94208 00:13:53.349 04:09:01 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # io_count_per_channel_all=187648 00:13:53.349 04:09:01 blockdev_general.bdev_stat -- bdev/blockdev.sh@576 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:13:53.349 04:09:01 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.349 04:09:01 blockdev_general.bdev_stat -- 
common/autotest_common.sh@10 -- # set +x 00:13:53.349 04:09:01 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.349 04:09:01 blockdev_general.bdev_stat -- bdev/blockdev.sh@576 -- # iostats='{ 00:13:53.349 "tick_rate": 2500000000, 00:13:53.349 "ticks": 13751931701102784, 00:13:53.349 "bdevs": [ 00:13:53.349 { 00:13:53.349 "name": "Malloc_STAT", 00:13:53.349 "bytes_read": 814789120, 00:13:53.349 "num_read_ops": 198916, 00:13:53.349 "bytes_written": 0, 00:13:53.349 "num_write_ops": 0, 00:13:53.349 "bytes_unmapped": 0, 00:13:53.349 "num_unmap_ops": 0, 00:13:53.349 "bytes_copied": 0, 00:13:53.349 "num_copy_ops": 0, 00:13:53.349 "read_latency_ticks": 2658240552930, 00:13:53.349 "max_read_latency_ticks": 15772314, 00:13:53.349 "min_read_latency_ticks": 529066, 00:13:53.349 "write_latency_ticks": 0, 00:13:53.349 "max_write_latency_ticks": 0, 00:13:53.349 "min_write_latency_ticks": 0, 00:13:53.349 "unmap_latency_ticks": 0, 00:13:53.349 "max_unmap_latency_ticks": 0, 00:13:53.349 "min_unmap_latency_ticks": 0, 00:13:53.349 "copy_latency_ticks": 0, 00:13:53.349 "max_copy_latency_ticks": 0, 00:13:53.349 "min_copy_latency_ticks": 0, 00:13:53.349 "io_error": {} 00:13:53.349 } 00:13:53.349 ] 00:13:53.349 }' 00:13:53.349 04:09:01 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # jq -r '.bdevs[0].num_read_ops' 00:13:53.349 04:09:02 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # io_count2=198916 00:13:53.349 04:09:02 blockdev_general.bdev_stat -- bdev/blockdev.sh@582 -- # '[' 187648 -lt 180996 ']' 00:13:53.349 04:09:02 blockdev_general.bdev_stat -- bdev/blockdev.sh@582 -- # '[' 187648 -gt 198916 ']' 00:13:53.349 04:09:02 blockdev_general.bdev_stat -- bdev/blockdev.sh@607 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:13:53.349 04:09:02 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.349 04:09:02 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:53.349 00:13:53.349 
Latency(us) 00:13:53.349 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:53.349 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:13:53.349 Malloc_STAT : 2.16 47629.01 186.05 0.00 0.00 5361.22 1297.61 6317.67 00:13:53.349 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:53.349 Malloc_STAT : 2.16 48038.25 187.65 0.00 0.00 5315.83 963.38 5505.02 00:13:53.349 =================================================================================================================== 00:13:53.349 Total : 95667.26 373.70 0.00 0.00 5338.42 963.38 6317.67 00:13:53.609 0 00:13:53.609 04:09:02 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.609 04:09:02 blockdev_general.bdev_stat -- bdev/blockdev.sh@608 -- # killprocess 2609441 00:13:53.609 04:09:02 blockdev_general.bdev_stat -- common/autotest_common.sh@948 -- # '[' -z 2609441 ']' 00:13:53.609 04:09:02 blockdev_general.bdev_stat -- common/autotest_common.sh@952 -- # kill -0 2609441 00:13:53.609 04:09:02 blockdev_general.bdev_stat -- common/autotest_common.sh@953 -- # uname 00:13:53.609 04:09:02 blockdev_general.bdev_stat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:53.609 04:09:02 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2609441 00:13:53.609 04:09:02 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:53.609 04:09:02 blockdev_general.bdev_stat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:53.609 04:09:02 blockdev_general.bdev_stat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2609441' 00:13:53.609 killing process with pid 2609441 00:13:53.609 04:09:02 blockdev_general.bdev_stat -- common/autotest_common.sh@967 -- # kill 2609441 00:13:53.609 Received shutdown signal, test time was about 2.382959 seconds 00:13:53.609 00:13:53.609 Latency(us) 
00:13:53.609 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:53.609 =================================================================================================================== 00:13:53.609 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:53.609 04:09:02 blockdev_general.bdev_stat -- common/autotest_common.sh@972 -- # wait 2609441 00:13:55.514 04:09:03 blockdev_general.bdev_stat -- bdev/blockdev.sh@609 -- # trap - SIGINT SIGTERM EXIT 00:13:55.514 00:13:55.514 real 0m5.636s 00:13:55.514 user 0m10.258s 00:13:55.514 sys 0m0.640s 00:13:55.514 04:09:03 blockdev_general.bdev_stat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:55.514 04:09:03 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:55.514 ************************************ 00:13:55.514 END TEST bdev_stat 00:13:55.514 ************************************ 00:13:55.514 04:09:04 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:13:55.514 04:09:04 blockdev_general -- bdev/blockdev.sh@793 -- # [[ bdev == gpt ]] 00:13:55.514 04:09:04 blockdev_general -- bdev/blockdev.sh@797 -- # [[ bdev == crypto_sw ]] 00:13:55.514 04:09:04 blockdev_general -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:13:55.514 04:09:04 blockdev_general -- bdev/blockdev.sh@810 -- # cleanup 00:13:55.514 04:09:04 blockdev_general -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:13:55.514 04:09:04 blockdev_general -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:13:55.514 04:09:04 blockdev_general -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]] 00:13:55.514 04:09:04 blockdev_general -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]] 00:13:55.514 04:09:04 blockdev_general -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]] 00:13:55.514 04:09:04 blockdev_general -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]] 00:13:55.514 00:13:55.514 real 2m46.427s 
00:13:55.514 user 8m30.718s 00:13:55.514 sys 0m26.159s 00:13:55.514 04:09:04 blockdev_general -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:55.514 04:09:04 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:55.514 ************************************ 00:13:55.514 END TEST blockdev_general 00:13:55.514 ************************************ 00:13:55.515 04:09:04 -- common/autotest_common.sh@1142 -- # return 0 00:13:55.515 04:09:04 -- spdk/autotest.sh@190 -- # run_test bdev_raid /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh 00:13:55.515 04:09:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:55.515 04:09:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:55.515 04:09:04 -- common/autotest_common.sh@10 -- # set +x 00:13:55.515 ************************************ 00:13:55.515 START TEST bdev_raid 00:13:55.515 ************************************ 00:13:55.515 04:09:04 bdev_raid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh 00:13:55.515 * Looking for test storage... 
00:13:55.515 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:13:55.515 04:09:04 bdev_raid -- bdev/bdev_raid.sh@13 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:13:55.515 04:09:04 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:13:55.515 04:09:04 bdev_raid -- bdev/bdev_raid.sh@15 -- # rpc_py='/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:13:55.515 04:09:04 bdev_raid -- bdev/bdev_raid.sh@851 -- # mkdir -p /raidtest 00:13:55.515 04:09:04 bdev_raid -- bdev/bdev_raid.sh@852 -- # trap 'cleanup; exit 1' EXIT 00:13:55.515 04:09:04 bdev_raid -- bdev/bdev_raid.sh@854 -- # base_blocklen=512 00:13:55.515 04:09:04 bdev_raid -- bdev/bdev_raid.sh@856 -- # uname -s 00:13:55.515 04:09:04 bdev_raid -- bdev/bdev_raid.sh@856 -- # '[' Linux = Linux ']' 00:13:55.515 04:09:04 bdev_raid -- bdev/bdev_raid.sh@856 -- # modprobe -n nbd 00:13:55.515 04:09:04 bdev_raid -- bdev/bdev_raid.sh@857 -- # has_nbd=true 00:13:55.515 04:09:04 bdev_raid -- bdev/bdev_raid.sh@858 -- # modprobe nbd 00:13:55.515 04:09:04 bdev_raid -- bdev/bdev_raid.sh@859 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:13:55.515 04:09:04 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:55.515 04:09:04 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:55.515 04:09:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:55.774 ************************************ 00:13:55.774 START TEST raid_function_test_raid0 00:13:55.774 ************************************ 00:13:55.774 04:09:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1123 -- # raid_function_test raid0 00:13:55.774 04:09:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@80 -- # local raid_level=raid0 00:13:55.774 04:09:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:13:55.774 04:09:04 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:13:55.774 04:09:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # raid_pid=2610840 00:13:55.774 04:09:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 2610840' 00:13:55.774 Process raid pid: 2610840 00:13:55.774 04:09:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:55.774 04:09:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@87 -- # waitforlisten 2610840 /var/tmp/spdk-raid.sock 00:13:55.774 04:09:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@829 -- # '[' -z 2610840 ']' 00:13:55.774 04:09:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:55.774 04:09:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:55.774 04:09:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:55.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:55.774 04:09:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:55.774 04:09:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:13:55.774 [2024-07-23 04:09:04.421181] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:13:55.774 [2024-07-23 04:09:04.421296] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:55.774 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:55.774 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:56.034 [2024-07-23 04:09:04.651086] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.294 [2024-07-23 04:09:04.942695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.584 [2024-07-23 04:09:05.289174] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:56.584 [2024-07-23 04:09:05.289214] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:56.842 04:09:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:56.842 04:09:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@862 -- # return 0 00:13:56.842 04:09:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev raid0 00:13:56.842 04:09:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_level=raid0 00:13:56.842 04:09:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@67 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:13:56.842 04:09:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # cat 00:13:56.842 04:09:05
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:13:57.101 [2024-07-23 04:09:05.817751] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:13:57.101 [2024-07-23 04:09:05.820082] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:13:57.101 [2024-07-23 04:09:05.820168] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:13:57.101 [2024-07-23 04:09:05.820192] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:57.101 [2024-07-23 04:09:05.820527] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:13:57.101 [2024-07-23 04:09:05.820767] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:13:57.101 [2024-07-23 04:09:05.820782] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x61600003ff80 00:13:57.101 [2024-07-23 04:09:05.820998] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:57.101 Base_1 00:13:57.101 Base_2 00:13:57.101 04:09:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@76 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:13:57.101 04:09:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:13:57.101 04:09:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:13:57.360 04:09:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:13:57.360 04:09:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:13:57.360 04:09:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@96 -- # nbd_start_disks 
/var/tmp/spdk-raid.sock raid /dev/nbd0 00:13:57.360 04:09:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:57.360 04:09:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:13:57.360 04:09:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:57.360 04:09:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:57.360 04:09:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:57.360 04:09:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:13:57.360 04:09:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:57.360 04:09:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:57.360 04:09:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:13:57.929 [2024-07-23 04:09:06.555780] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:13:57.929 /dev/nbd0 00:13:57.929 04:09:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:57.929 04:09:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:57.929 04:09:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:13:57.929 04:09:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@867 -- # local i 00:13:57.929 04:09:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:57.929 04:09:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:57.929 04:09:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 
/proc/partitions 00:13:57.929 04:09:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # break 00:13:57.929 04:09:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:57.929 04:09:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:57.929 04:09:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:57.929 1+0 records in 00:13:57.929 1+0 records out 00:13:57.929 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259307 s, 15.8 MB/s 00:13:57.929 04:09:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:13:57.929 04:09:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # size=4096 00:13:57.929 04:09:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:13:57.929 04:09:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:57.929 04:09:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # return 0 00:13:57.929 04:09:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:57.929 04:09:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:57.929 04:09:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:13:57.929 04:09:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:57.929 04:09:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:13:58.189 04:09:06 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:58.189 { 00:13:58.189 "nbd_device": "/dev/nbd0", 00:13:58.189 "bdev_name": "raid" 00:13:58.189 } 00:13:58.189 ]' 00:13:58.189 04:09:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:58.189 { 00:13:58.189 "nbd_device": "/dev/nbd0", 00:13:58.189 "bdev_name": "raid" 00:13:58.189 } 00:13:58.189 ]' 00:13:58.189 04:09:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:58.189 04:09:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:13:58.189 04:09:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:13:58.189 04:09:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:58.189 04:09:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:13:58.189 04:09:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:13:58.189 04:09:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # count=1 00:13:58.189 04:09:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:13:58.189 04:09:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:13:58.189 04:09:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:13:58.189 04:09:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:13:58.189 04:09:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:58.189 04:09:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local blksize 00:13:58.189 04:09:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:13:58.189 04:09:06 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:13:58.189 04:09:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:13:58.189 04:09:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # blksize=512 00:13:58.189 04:09:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:13:58.189 04:09:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:13:58.189 04:09:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:13:58.189 04:09:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:13:58.189 04:09:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:13:58.189 04:09:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:13:58.189 04:09:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:13:58.189 04:09:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:13:58.189 04:09:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:13:58.189 4096+0 records in 00:13:58.189 4096+0 records out 00:13:58.189 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0282634 s, 74.2 MB/s 00:13:58.189 04:09:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:13:58.448 4096+0 records in 00:13:58.448 4096+0 records out 00:13:58.448 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.21858 s, 9.6 MB/s 00:13:58.448 04:09:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:13:58.448 04:09:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:58.448 04:09:07 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:13:58.448 04:09:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:13:58.448 04:09:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:13:58.448 04:09:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:13:58.448 04:09:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:13:58.448 128+0 records in 00:13:58.448 128+0 records out 00:13:58.448 65536 bytes (66 kB, 64 KiB) copied, 0.00033295 s, 197 MB/s 00:13:58.448 04:09:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:13:58.448 04:09:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:13:58.448 04:09:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:58.448 04:09:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:13:58.448 04:09:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:13:58.448 04:09:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:13:58.448 04:09:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:13:58.448 04:09:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:13:58.707 2035+0 records in 00:13:58.707 2035+0 records out 00:13:58.707 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0108818 s, 95.7 MB/s 00:13:58.707 04:09:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:13:58.707 04:09:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs 
/dev/nbd0 00:13:58.707 04:09:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:58.707 04:09:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:13:58.707 04:09:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:13:58.707 04:09:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:13:58.707 04:09:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:13:58.707 04:09:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:13:58.707 456+0 records in 00:13:58.707 456+0 records out 00:13:58.707 233472 bytes (233 kB, 228 KiB) copied, 0.00270955 s, 86.2 MB/s 00:13:58.708 04:09:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:13:58.708 04:09:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:13:58.708 04:09:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:58.708 04:09:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:13:58.708 04:09:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:13:58.708 04:09:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@54 -- # return 0 00:13:58.708 04:09:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:13:58.708 04:09:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:58.708 04:09:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:58.708 04:09:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 
00:13:58.708 04:09:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:13:58.708 04:09:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:58.708 04:09:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:13:58.967 04:09:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:58.967 [2024-07-23 04:09:07.526235] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:58.967 04:09:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:58.967 04:09:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:58.967 04:09:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:58.967 04:09:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:58.967 04:09:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:58.967 04:09:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:13:58.967 04:09:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:13:58.967 04:09:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:13:58.967 04:09:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:58.967 04:09:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:13:59.535 04:09:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:59.535 04:09:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 
00:13:59.535 04:09:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:59.535 04:09:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:59.535 04:09:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:13:59.535 04:09:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:59.535 04:09:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:13:59.535 04:09:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:13:59.535 04:09:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:13:59.535 04:09:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # count=0 00:13:59.535 04:09:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:13:59.535 04:09:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@110 -- # killprocess 2610840 00:13:59.535 04:09:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@948 -- # '[' -z 2610840 ']' 00:13:59.535 04:09:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@952 -- # kill -0 2610840 00:13:59.535 04:09:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@953 -- # uname 00:13:59.535 04:09:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:59.535 04:09:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2610840 00:13:59.535 04:09:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:59.535 04:09:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:59.535 04:09:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2610840' 00:13:59.535 killing process with pid 2610840 
00:13:59.535 04:09:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@967 -- # kill 2610840 00:13:59.535 [2024-07-23 04:09:08.154515] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:59.535 [2024-07-23 04:09:08.154639] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:59.535 04:09:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # wait 2610840 00:13:59.535 [2024-07-23 04:09:08.154704] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:59.535 [2024-07-23 04:09:08.154728] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name raid, state offline 00:13:59.794 [2024-07-23 04:09:08.360179] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:01.702 04:09:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@112 -- # return 0 00:14:01.702 00:14:01.702 real 0m5.836s 00:14:01.702 user 0m7.140s 00:14:01.702 sys 0m1.379s 00:14:01.702 04:09:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:01.702 04:09:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:14:01.702 ************************************ 00:14:01.702 END TEST raid_function_test_raid0 00:14:01.702 ************************************ 00:14:01.702 04:09:10 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:01.702 04:09:10 bdev_raid -- bdev/bdev_raid.sh@860 -- # run_test raid_function_test_concat raid_function_test concat 00:14:01.702 04:09:10 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:01.702 04:09:10 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:01.702 04:09:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:01.702 ************************************ 00:14:01.702 START TEST raid_function_test_concat 00:14:01.702 ************************************ 
00:14:01.702 04:09:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1123 -- # raid_function_test concat 00:14:01.702 04:09:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@80 -- # local raid_level=concat 00:14:01.702 04:09:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:14:01.702 04:09:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:14:01.702 04:09:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # raid_pid=2612243 00:14:01.702 04:09:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 2612243' 00:14:01.702 Process raid pid: 2612243 00:14:01.702 04:09:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:01.702 04:09:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@87 -- # waitforlisten 2612243 /var/tmp/spdk-raid.sock 00:14:01.702 04:09:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@829 -- # '[' -z 2612243 ']' 00:14:01.702 04:09:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:01.702 04:09:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:01.702 04:09:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:01.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:14:01.702 04:09:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:01.702 04:09:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:14:01.702 [2024-07-23 04:09:10.343733] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:14:01.702 [2024-07-23 04:09:10.343847] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:01.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:01.702 EAL: Requested device 0000:3d:01.0 cannot be used 00:14:01.962 [2024-07-23 04:09:10.573037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.222 [2024-07-23 04:09:10.864240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.481 [2024-07-23 04:09:11.196391] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:02.481 [2024-07-23 04:09:11.196427] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:02.740 04:09:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:02.740 04:09:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@862 -- # return 0 00:14:02.740 04:09:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev concat 00:14:02.740 04:09:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local
raid_level=concat 00:14:02.740 04:09:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@67 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:14:02.740 04:09:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # cat 00:14:02.740 04:09:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:14:02.999 [2024-07-23 04:09:11.693106] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:02.999 [2024-07-23 04:09:11.695433] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:02.999 [2024-07-23 04:09:11.695521] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:14:02.999 [2024-07-23 04:09:11.695542] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:02.999 [2024-07-23 04:09:11.695872] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:14:02.999 [2024-07-23 04:09:11.696092] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:14:02.999 [2024-07-23 04:09:11.696107] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x61600003ff80 00:14:02.999 [2024-07-23 04:09:11.696327] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:02.999 Base_1 00:14:02.999 Base_2 00:14:02.999 04:09:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@76 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:14:02.999 04:09:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:14:02.999 04:09:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:14:03.258 
04:09:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:14:03.258 04:09:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:14:03.258 04:09:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:14:03.258 04:09:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:03.258 04:09:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:14:03.258 04:09:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:03.258 04:09:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:03.258 04:09:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:03.258 04:09:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:14:03.258 04:09:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:03.258 04:09:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:03.258 04:09:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:14:03.517 [2024-07-23 04:09:12.094199] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:14:03.517 /dev/nbd0 00:14:03.517 04:09:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:03.517 04:09:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:03.517 04:09:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:14:03.517 04:09:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@867 -- # local i 00:14:03.517 04:09:12 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:03.517 04:09:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:03.517 04:09:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:14:03.517 04:09:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # break 00:14:03.517 04:09:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:03.517 04:09:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:03.517 04:09:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:03.517 1+0 records in 00:14:03.517 1+0 records out 00:14:03.517 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248354 s, 16.5 MB/s 00:14:03.517 04:09:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:14:03.517 04:09:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # size=4096 00:14:03.517 04:09:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:14:03.517 04:09:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:03.517 04:09:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # return 0 00:14:03.517 04:09:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:03.517 04:09:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:03.517 04:09:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:03.517 04:09:12 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:03.517 04:09:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:03.777 04:09:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:03.777 { 00:14:03.777 "nbd_device": "/dev/nbd0", 00:14:03.777 "bdev_name": "raid" 00:14:03.777 } 00:14:03.777 ]' 00:14:03.777 04:09:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:03.777 { 00:14:03.777 "nbd_device": "/dev/nbd0", 00:14:03.777 "bdev_name": "raid" 00:14:03.777 } 00:14:03.777 ]' 00:14:03.777 04:09:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:03.777 04:09:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:14:03.777 04:09:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:14:03.777 04:09:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:03.777 04:09:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:14:03.777 04:09:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:14:03.777 04:09:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # count=1 00:14:03.777 04:09:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:14:03.777 04:09:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:14:03.777 04:09:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:14:03.777 04:09:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:14:03.777 04:09:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 
-- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:03.777 04:09:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local blksize 00:14:03.777 04:09:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:14:03.777 04:09:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:14:03.777 04:09:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:14:03.777 04:09:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # blksize=512 00:14:03.777 04:09:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:14:03.777 04:09:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:14:03.777 04:09:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:14:03.777 04:09:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:14:03.777 04:09:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:14:03.777 04:09:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:14:03.777 04:09:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:14:03.777 04:09:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:14:03.777 04:09:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:14:03.777 4096+0 records in 00:14:03.777 4096+0 records out 00:14:03.777 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0276098 s, 76.0 MB/s 00:14:03.777 04:09:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:14:04.036 4096+0 records in 00:14:04.036 4096+0 records out 00:14:04.036 2097152 bytes (2.1 MB, 
2.0 MiB) copied, 0.208217 s, 10.1 MB/s 00:14:04.036 04:09:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:14:04.036 04:09:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:04.036 04:09:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:14:04.036 04:09:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:14:04.036 04:09:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:14:04.036 04:09:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:14:04.036 04:09:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:14:04.036 128+0 records in 00:14:04.036 128+0 records out 00:14:04.036 65536 bytes (66 kB, 64 KiB) copied, 0.00082865 s, 79.1 MB/s 00:14:04.036 04:09:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:14:04.036 04:09:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:14:04.036 04:09:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:04.036 04:09:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:14:04.036 04:09:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:14:04.036 04:09:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:14:04.036 04:09:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:14:04.036 04:09:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:14:04.036 2035+0 records in 00:14:04.036 2035+0 
records out 00:14:04.036 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00468126 s, 223 MB/s 00:14:04.036 04:09:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:14:04.036 04:09:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:14:04.036 04:09:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:04.036 04:09:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:14:04.036 04:09:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:14:04.036 04:09:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:14:04.036 04:09:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:14:04.036 04:09:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:14:04.036 456+0 records in 00:14:04.036 456+0 records out 00:14:04.036 233472 bytes (233 kB, 228 KiB) copied, 0.00268572 s, 86.9 MB/s 00:14:04.036 04:09:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:14:04.036 04:09:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:14:04.036 04:09:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:04.036 04:09:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:14:04.036 04:09:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:14:04.036 04:09:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@54 -- # return 0 00:14:04.036 04:09:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 
00:14:04.036 04:09:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:04.036 04:09:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:04.036 04:09:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:04.036 04:09:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:14:04.036 04:09:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:04.036 04:09:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:14:04.296 04:09:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:04.296 [2024-07-23 04:09:13.034135] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:04.296 04:09:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:04.296 04:09:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:04.296 04:09:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:04.296 04:09:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:04.296 04:09:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:04.296 04:09:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:14:04.296 04:09:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:14:04.296 04:09:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:04.296 04:09:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:04.296 04:09:13 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:04.555 04:09:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:04.555 04:09:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:04.555 04:09:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:04.555 04:09:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:04.555 04:09:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:14:04.555 04:09:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:04.555 04:09:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:14:04.555 04:09:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:14:04.555 04:09:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:14:04.555 04:09:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # count=0 00:14:04.555 04:09:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:14:04.555 04:09:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@110 -- # killprocess 2612243 00:14:04.555 04:09:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@948 -- # '[' -z 2612243 ']' 00:14:04.555 04:09:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@952 -- # kill -0 2612243 00:14:04.555 04:09:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@953 -- # uname 00:14:04.555 04:09:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:04.813 04:09:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2612243 00:14:04.813 04:09:13 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:04.813 04:09:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:04.813 04:09:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2612243' 00:14:04.813 killing process with pid 2612243 00:14:04.813 04:09:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@967 -- # kill 2612243 00:14:04.813 [2024-07-23 04:09:13.387977] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:04.813 [2024-07-23 04:09:13.388098] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:04.813 04:09:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # wait 2612243 00:14:04.813 [2024-07-23 04:09:13.388171] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:04.813 [2024-07-23 04:09:13.388192] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name raid, state offline 00:14:04.813 [2024-07-23 04:09:13.593097] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:06.718 04:09:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@112 -- # return 0 00:14:06.718 00:14:06.718 real 0m5.117s 00:14:06.718 user 0m5.939s 00:14:06.718 sys 0m1.299s 00:14:06.718 04:09:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:06.718 04:09:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:14:06.718 ************************************ 00:14:06.718 END TEST raid_function_test_concat 00:14:06.718 ************************************ 00:14:06.718 04:09:15 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:06.718 04:09:15 bdev_raid -- bdev/bdev_raid.sh@863 -- # run_test raid0_resize_test raid0_resize_test 00:14:06.718 
04:09:15 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:06.718 04:09:15 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:06.718 04:09:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:06.718 ************************************ 00:14:06.718 START TEST raid0_resize_test 00:14:06.718 ************************************ 00:14:06.718 04:09:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1123 -- # raid0_resize_test 00:14:06.718 04:09:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # local blksize=512 00:14:06.718 04:09:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@348 -- # local bdev_size_mb=32 00:14:06.718 04:09:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # local new_bdev_size_mb=64 00:14:06.718 04:09:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # local blkcnt 00:14:06.718 04:09:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@351 -- # local raid_size_mb 00:14:06.718 04:09:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@352 -- # local new_raid_size_mb 00:14:06.718 04:09:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@355 -- # raid_pid=2613127 00:14:06.718 04:09:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # echo 'Process raid pid: 2613127' 00:14:06.718 Process raid pid: 2613127 00:14:06.718 04:09:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@354 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:06.718 04:09:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@357 -- # waitforlisten 2613127 /var/tmp/spdk-raid.sock 00:14:06.718 04:09:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@829 -- # '[' -z 2613127 ']' 00:14:06.718 04:09:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:06.718 04:09:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:14:06.718 04:09:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:06.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:06.718 04:09:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:06.718 04:09:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.978 [2024-07-23 04:09:15.545338] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:14:06.978 [2024-07-23 04:09:15.545448] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:06.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:06.978 EAL: Requested device 0000:3d:01.0 cannot be used 00:14:06.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:06.978 EAL: Requested device 0000:3d:01.1 cannot be used 00:14:06.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:06.978 EAL: Requested device 0000:3d:01.2 cannot be used 00:14:06.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:06.978 EAL: Requested device 0000:3d:01.3 cannot be used 00:14:06.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:06.978 EAL: Requested device 0000:3d:01.4 cannot be used 00:14:06.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:06.978 EAL: Requested device 0000:3d:01.5 cannot be used 00:14:06.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:06.978 EAL: Requested device 0000:3d:01.6 cannot be used 00:14:06.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:06.978 EAL: Requested 
device 0000:3d:01.7 cannot be used 00:14:06.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:06.978 EAL: Requested device 0000:3d:02.0 cannot be used 00:14:06.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:06.978 EAL: Requested device 0000:3d:02.1 cannot be used 00:14:06.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:06.978 EAL: Requested device 0000:3d:02.2 cannot be used 00:14:06.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:06.978 EAL: Requested device 0000:3d:02.3 cannot be used 00:14:06.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:06.978 EAL: Requested device 0000:3d:02.4 cannot be used 00:14:06.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:06.978 EAL: Requested device 0000:3d:02.5 cannot be used 00:14:06.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:06.978 EAL: Requested device 0000:3d:02.6 cannot be used 00:14:06.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:06.978 EAL: Requested device 0000:3d:02.7 cannot be used 00:14:06.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:06.978 EAL: Requested device 0000:3f:01.0 cannot be used 00:14:06.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:06.978 EAL: Requested device 0000:3f:01.1 cannot be used 00:14:06.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:06.978 EAL: Requested device 0000:3f:01.2 cannot be used 00:14:06.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:06.978 EAL: Requested device 0000:3f:01.3 cannot be used 00:14:06.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:06.978 EAL: Requested device 0000:3f:01.4 cannot be used 00:14:06.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:06.978 EAL: Requested device 0000:3f:01.5 
cannot be used 00:14:06.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:06.978 EAL: Requested device 0000:3f:01.6 cannot be used 00:14:06.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:06.978 EAL: Requested device 0000:3f:01.7 cannot be used 00:14:06.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:06.978 EAL: Requested device 0000:3f:02.0 cannot be used 00:14:06.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:06.978 EAL: Requested device 0000:3f:02.1 cannot be used 00:14:06.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:06.978 EAL: Requested device 0000:3f:02.2 cannot be used 00:14:06.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:06.978 EAL: Requested device 0000:3f:02.3 cannot be used 00:14:06.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:06.978 EAL: Requested device 0000:3f:02.4 cannot be used 00:14:06.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:06.978 EAL: Requested device 0000:3f:02.5 cannot be used 00:14:06.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:06.978 EAL: Requested device 0000:3f:02.6 cannot be used 00:14:06.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:06.978 EAL: Requested device 0000:3f:02.7 cannot be used 00:14:07.237 [2024-07-23 04:09:15.773732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.496 [2024-07-23 04:09:16.041996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.756 [2024-07-23 04:09:16.381170] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:07.756 [2024-07-23 04:09:16.381210] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:08.015 04:09:16 bdev_raid.raid0_resize_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:08.015 04:09:16 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@862 -- # return 0 00:14:08.015 04:09:16 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:14:08.015 Base_1 00:14:08.015 04:09:16 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:14:08.274 Base_2 00:14:08.274 04:09:16 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:14:08.533 [2024-07-23 04:09:17.060150] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:08.533 [2024-07-23 04:09:17.062423] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:08.533 [2024-07-23 04:09:17.062485] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:14:08.533 [2024-07-23 04:09:17.062507] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:08.533 [2024-07-23 04:09:17.062824] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000103d0 00:14:08.533 [2024-07-23 04:09:17.063014] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:14:08.533 [2024-07-23 04:09:17.063028] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x61600003ff80 00:14:08.533 [2024-07-23 04:09:17.063230] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:08.533 04:09:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@365 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:14:08.533 [2024-07-23 04:09:17.288697] 
bdev_raid.c:2288:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:14:08.533 [2024-07-23 04:09:17.288732] bdev_raid.c:2301:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:14:08.533 true 00:14:08.533 04:09:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:14:08.533 04:09:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # jq '.[].num_blocks' 00:14:08.793 [2024-07-23 04:09:17.521576] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:08.793 04:09:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # blkcnt=131072 00:14:08.793 04:09:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@369 -- # raid_size_mb=64 00:14:08.793 04:09:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@370 -- # '[' 64 '!=' 64 ']' 00:14:08.793 04:09:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:14:09.052 [2024-07-23 04:09:17.753987] bdev_raid.c:2288:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:14:09.052 [2024-07-23 04:09:17.754023] bdev_raid.c:2301:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:14:09.052 [2024-07-23 04:09:17.754067] bdev_raid.c:2315:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:14:09.052 true 00:14:09.052 04:09:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:14:09.052 04:09:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # jq '.[].num_blocks' 00:14:09.311 [2024-07-23 04:09:17.978806] 
bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:09.311 04:09:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # blkcnt=262144 00:14:09.311 04:09:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@380 -- # raid_size_mb=128 00:14:09.311 04:09:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@381 -- # '[' 128 '!=' 128 ']' 00:14:09.311 04:09:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@386 -- # killprocess 2613127 00:14:09.311 04:09:17 bdev_raid.raid0_resize_test -- common/autotest_common.sh@948 -- # '[' -z 2613127 ']' 00:14:09.311 04:09:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@952 -- # kill -0 2613127 00:14:09.311 04:09:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@953 -- # uname 00:14:09.311 04:09:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:09.311 04:09:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2613127 00:14:09.311 04:09:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:09.311 04:09:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:09.311 04:09:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2613127' 00:14:09.311 killing process with pid 2613127 00:14:09.311 04:09:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@967 -- # kill 2613127 00:14:09.311 [2024-07-23 04:09:18.059948] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:09.311 [2024-07-23 04:09:18.060058] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:09.311 04:09:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # wait 2613127 00:14:09.311 [2024-07-23 04:09:18.060124] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:09.311 
[2024-07-23 04:09:18.060150] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Raid, state offline 00:14:09.311 [2024-07-23 04:09:18.073974] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:11.218 04:09:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@388 -- # return 0 00:14:11.218 00:14:11.218 real 0m4.369s 00:14:11.218 user 0m5.519s 00:14:11.218 sys 0m0.780s 00:14:11.218 04:09:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:11.218 04:09:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.218 ************************************ 00:14:11.218 END TEST raid0_resize_test 00:14:11.218 ************************************ 00:14:11.218 04:09:19 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:11.218 04:09:19 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:14:11.218 04:09:19 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:14:11.218 04:09:19 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:14:11.218 04:09:19 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:11.218 04:09:19 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:11.218 04:09:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:11.218 ************************************ 00:14:11.218 START TEST raid_state_function_test 00:14:11.218 ************************************ 00:14:11.218 04:09:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 2 false 00:14:11.218 04:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:14:11.218 04:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:14:11.218 04:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local 
superblock=false 00:14:11.218 04:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:11.218 04:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:11.218 04:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:11.218 04:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:14:11.218 04:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:11.218 04:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:11.218 04:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:14:11.218 04:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:11.218 04:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:11.218 04:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:11.218 04:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:11.218 04:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:11.218 04:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:11.218 04:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:11.218 04:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:11.218 04:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:14:11.218 04:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:14:11.218 04:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:14:11.218 04:09:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:14:11.218 04:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:14:11.218 04:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2613947 00:14:11.218 04:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2613947' 00:14:11.218 Process raid pid: 2613947 00:14:11.218 04:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:11.218 04:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2613947 /var/tmp/spdk-raid.sock 00:14:11.218 04:09:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 2613947 ']' 00:14:11.218 04:09:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:11.218 04:09:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:11.218 04:09:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:11.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:11.218 04:09:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:11.218 04:09:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.477 [2024-07-23 04:09:20.005357] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:14:11.478 [2024-07-23 04:09:20.005472] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:11.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:11.478 EAL: Requested device 0000:3d:01.0 cannot be used 00:14:11.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:11.478 EAL: Requested device 0000:3d:01.1 cannot be used 00:14:11.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:11.478 EAL: Requested device 0000:3d:01.2 cannot be used 00:14:11.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:11.478 EAL: Requested device 0000:3d:01.3 cannot be used 00:14:11.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:11.478 EAL: Requested device 0000:3d:01.4 cannot be used 00:14:11.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:11.478 EAL: Requested device 0000:3d:01.5 cannot be used 00:14:11.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:11.478 EAL: Requested device 0000:3d:01.6 cannot be used 00:14:11.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:11.478 EAL: Requested device 0000:3d:01.7 cannot be used 00:14:11.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:11.478 EAL: Requested device 0000:3d:02.0 cannot be used 00:14:11.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:11.478 EAL: Requested device 0000:3d:02.1 cannot be used 00:14:11.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:11.478 EAL: Requested device 0000:3d:02.2 cannot be used 00:14:11.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:11.478 EAL: Requested device 0000:3d:02.3 cannot be used 00:14:11.478 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:11.478 EAL: Requested device 0000:3d:02.4 cannot be used 00:14:11.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:11.478 EAL: Requested device 0000:3d:02.5 cannot be used 00:14:11.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:11.478 EAL: Requested device 0000:3d:02.6 cannot be used 00:14:11.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:11.478 EAL: Requested device 0000:3d:02.7 cannot be used 00:14:11.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:11.478 EAL: Requested device 0000:3f:01.0 cannot be used 00:14:11.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:11.478 EAL: Requested device 0000:3f:01.1 cannot be used 00:14:11.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:11.478 EAL: Requested device 0000:3f:01.2 cannot be used 00:14:11.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:11.478 EAL: Requested device 0000:3f:01.3 cannot be used 00:14:11.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:11.478 EAL: Requested device 0000:3f:01.4 cannot be used 00:14:11.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:11.478 EAL: Requested device 0000:3f:01.5 cannot be used 00:14:11.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:11.478 EAL: Requested device 0000:3f:01.6 cannot be used 00:14:11.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:11.478 EAL: Requested device 0000:3f:01.7 cannot be used 00:14:11.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:11.478 EAL: Requested device 0000:3f:02.0 cannot be used 00:14:11.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:11.478 EAL: Requested device 0000:3f:02.1 cannot be used 00:14:11.478 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:11.478 EAL: Requested device 0000:3f:02.2 cannot be used 00:14:11.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:11.478 EAL: Requested device 0000:3f:02.3 cannot be used 00:14:11.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:11.478 EAL: Requested device 0000:3f:02.4 cannot be used 00:14:11.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:11.478 EAL: Requested device 0000:3f:02.5 cannot be used 00:14:11.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:11.478 EAL: Requested device 0000:3f:02.6 cannot be used 00:14:11.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:11.478 EAL: Requested device 0000:3f:02.7 cannot be used 00:14:11.478 [2024-07-23 04:09:20.232237] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.045 [2024-07-23 04:09:20.525324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.304 [2024-07-23 04:09:20.882155] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:12.304 [2024-07-23 04:09:20.882190] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:12.304 04:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:12.304 04:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:14:12.304 04:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:12.563 [2024-07-23 04:09:21.271993] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:12.563 [2024-07-23 04:09:21.272046] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 
00:14:12.563 [2024-07-23 04:09:21.272061] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:12.563 [2024-07-23 04:09:21.272077] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:12.563 04:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:12.563 04:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:12.563 04:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:12.563 04:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:12.563 04:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:12.563 04:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:12.563 04:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:12.563 04:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:12.563 04:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:12.563 04:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:12.563 04:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:12.563 04:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:12.822 04:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:12.822 "name": "Existed_Raid", 00:14:12.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.822 "strip_size_kb": 64, 
00:14:12.822 "state": "configuring", 00:14:12.822 "raid_level": "raid0", 00:14:12.822 "superblock": false, 00:14:12.822 "num_base_bdevs": 2, 00:14:12.822 "num_base_bdevs_discovered": 0, 00:14:12.822 "num_base_bdevs_operational": 2, 00:14:12.822 "base_bdevs_list": [ 00:14:12.822 { 00:14:12.822 "name": "BaseBdev1", 00:14:12.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.822 "is_configured": false, 00:14:12.822 "data_offset": 0, 00:14:12.822 "data_size": 0 00:14:12.822 }, 00:14:12.822 { 00:14:12.822 "name": "BaseBdev2", 00:14:12.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.822 "is_configured": false, 00:14:12.822 "data_offset": 0, 00:14:12.822 "data_size": 0 00:14:12.822 } 00:14:12.822 ] 00:14:12.822 }' 00:14:12.822 04:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:12.822 04:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.468 04:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:13.728 [2024-07-23 04:09:22.270534] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:13.728 [2024-07-23 04:09:22.270575] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:14:13.728 04:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:13.728 [2024-07-23 04:09:22.499195] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:13.728 [2024-07-23 04:09:22.499237] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:13.728 [2024-07-23 04:09:22.499251] 
bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:13.728 [2024-07-23 04:09:22.499267] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:13.987 04:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:14.246 [2024-07-23 04:09:22.786020] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:14.246 BaseBdev1 00:14:14.246 04:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:14.246 04:09:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:14.246 04:09:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:14.246 04:09:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:14.246 04:09:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:14.246 04:09:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:14.246 04:09:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:14.246 04:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:14.505 [ 00:14:14.505 { 00:14:14.505 "name": "BaseBdev1", 00:14:14.505 "aliases": [ 00:14:14.505 "5595c7d3-034b-41f5-8b27-acd63ab36d67" 00:14:14.505 ], 00:14:14.505 "product_name": "Malloc disk", 00:14:14.505 "block_size": 512, 00:14:14.505 "num_blocks": 65536, 00:14:14.505 "uuid": 
"5595c7d3-034b-41f5-8b27-acd63ab36d67", 00:14:14.505 "assigned_rate_limits": { 00:14:14.505 "rw_ios_per_sec": 0, 00:14:14.505 "rw_mbytes_per_sec": 0, 00:14:14.505 "r_mbytes_per_sec": 0, 00:14:14.505 "w_mbytes_per_sec": 0 00:14:14.505 }, 00:14:14.505 "claimed": true, 00:14:14.505 "claim_type": "exclusive_write", 00:14:14.505 "zoned": false, 00:14:14.505 "supported_io_types": { 00:14:14.505 "read": true, 00:14:14.505 "write": true, 00:14:14.505 "unmap": true, 00:14:14.505 "flush": true, 00:14:14.505 "reset": true, 00:14:14.505 "nvme_admin": false, 00:14:14.505 "nvme_io": false, 00:14:14.505 "nvme_io_md": false, 00:14:14.505 "write_zeroes": true, 00:14:14.505 "zcopy": true, 00:14:14.505 "get_zone_info": false, 00:14:14.505 "zone_management": false, 00:14:14.505 "zone_append": false, 00:14:14.505 "compare": false, 00:14:14.505 "compare_and_write": false, 00:14:14.505 "abort": true, 00:14:14.505 "seek_hole": false, 00:14:14.505 "seek_data": false, 00:14:14.505 "copy": true, 00:14:14.505 "nvme_iov_md": false 00:14:14.505 }, 00:14:14.505 "memory_domains": [ 00:14:14.505 { 00:14:14.505 "dma_device_id": "system", 00:14:14.505 "dma_device_type": 1 00:14:14.505 }, 00:14:14.505 { 00:14:14.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.505 "dma_device_type": 2 00:14:14.505 } 00:14:14.505 ], 00:14:14.505 "driver_specific": {} 00:14:14.505 } 00:14:14.505 ] 00:14:14.505 04:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:14.505 04:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:14.505 04:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:14.505 04:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:14.505 04:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:14.505 04:09:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:14.505 04:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:14.505 04:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:14.505 04:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:14.505 04:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:14.505 04:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:14.505 04:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.505 04:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:14.763 04:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:14.763 "name": "Existed_Raid", 00:14:14.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.763 "strip_size_kb": 64, 00:14:14.763 "state": "configuring", 00:14:14.763 "raid_level": "raid0", 00:14:14.763 "superblock": false, 00:14:14.763 "num_base_bdevs": 2, 00:14:14.763 "num_base_bdevs_discovered": 1, 00:14:14.763 "num_base_bdevs_operational": 2, 00:14:14.763 "base_bdevs_list": [ 00:14:14.763 { 00:14:14.763 "name": "BaseBdev1", 00:14:14.763 "uuid": "5595c7d3-034b-41f5-8b27-acd63ab36d67", 00:14:14.763 "is_configured": true, 00:14:14.763 "data_offset": 0, 00:14:14.763 "data_size": 65536 00:14:14.763 }, 00:14:14.763 { 00:14:14.763 "name": "BaseBdev2", 00:14:14.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.763 "is_configured": false, 00:14:14.763 "data_offset": 0, 00:14:14.763 "data_size": 0 00:14:14.763 } 00:14:14.763 ] 00:14:14.763 }' 00:14:14.763 04:09:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:14.763 04:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.330 04:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:15.589 [2024-07-23 04:09:24.250026] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:15.589 [2024-07-23 04:09:24.250078] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:14:15.589 04:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:15.848 [2024-07-23 04:09:24.478728] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:15.848 [2024-07-23 04:09:24.481024] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:15.848 [2024-07-23 04:09:24.481067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:15.848 04:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:15.848 04:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:15.848 04:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:15.848 04:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:15.848 04:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:15.848 04:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid0 00:14:15.848 04:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:15.848 04:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:15.848 04:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:15.848 04:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:15.848 04:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:15.848 04:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:15.848 04:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:15.848 04:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:16.107 04:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:16.107 "name": "Existed_Raid", 00:14:16.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.107 "strip_size_kb": 64, 00:14:16.107 "state": "configuring", 00:14:16.107 "raid_level": "raid0", 00:14:16.107 "superblock": false, 00:14:16.107 "num_base_bdevs": 2, 00:14:16.107 "num_base_bdevs_discovered": 1, 00:14:16.107 "num_base_bdevs_operational": 2, 00:14:16.107 "base_bdevs_list": [ 00:14:16.107 { 00:14:16.107 "name": "BaseBdev1", 00:14:16.107 "uuid": "5595c7d3-034b-41f5-8b27-acd63ab36d67", 00:14:16.107 "is_configured": true, 00:14:16.107 "data_offset": 0, 00:14:16.107 "data_size": 65536 00:14:16.107 }, 00:14:16.107 { 00:14:16.107 "name": "BaseBdev2", 00:14:16.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.107 "is_configured": false, 00:14:16.107 "data_offset": 0, 00:14:16.107 "data_size": 0 00:14:16.107 } 00:14:16.107 ] 00:14:16.107 }' 
00:14:16.107 04:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:16.107 04:09:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.674 04:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:16.674 [2024-07-23 04:09:25.448646] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:16.674 [2024-07-23 04:09:25.448694] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:14:16.674 [2024-07-23 04:09:25.448708] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:16.674 [2024-07-23 04:09:25.449040] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:14:16.674 [2024-07-23 04:09:25.449280] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:14:16.674 [2024-07-23 04:09:25.449298] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:14:16.674 [2024-07-23 04:09:25.449614] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:16.674 BaseBdev2 00:14:16.932 04:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:16.932 04:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:16.933 04:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:16.933 04:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:16.933 04:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:16.933 04:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # 
bdev_timeout=2000 00:14:16.933 04:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:16.933 04:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:17.191 [ 00:14:17.191 { 00:14:17.191 "name": "BaseBdev2", 00:14:17.191 "aliases": [ 00:14:17.191 "e5722843-bd77-4ae6-bb7b-c4318133a2e0" 00:14:17.191 ], 00:14:17.191 "product_name": "Malloc disk", 00:14:17.191 "block_size": 512, 00:14:17.191 "num_blocks": 65536, 00:14:17.191 "uuid": "e5722843-bd77-4ae6-bb7b-c4318133a2e0", 00:14:17.191 "assigned_rate_limits": { 00:14:17.191 "rw_ios_per_sec": 0, 00:14:17.191 "rw_mbytes_per_sec": 0, 00:14:17.191 "r_mbytes_per_sec": 0, 00:14:17.191 "w_mbytes_per_sec": 0 00:14:17.191 }, 00:14:17.191 "claimed": true, 00:14:17.191 "claim_type": "exclusive_write", 00:14:17.191 "zoned": false, 00:14:17.191 "supported_io_types": { 00:14:17.191 "read": true, 00:14:17.191 "write": true, 00:14:17.191 "unmap": true, 00:14:17.191 "flush": true, 00:14:17.191 "reset": true, 00:14:17.191 "nvme_admin": false, 00:14:17.191 "nvme_io": false, 00:14:17.191 "nvme_io_md": false, 00:14:17.191 "write_zeroes": true, 00:14:17.191 "zcopy": true, 00:14:17.191 "get_zone_info": false, 00:14:17.191 "zone_management": false, 00:14:17.191 "zone_append": false, 00:14:17.191 "compare": false, 00:14:17.191 "compare_and_write": false, 00:14:17.191 "abort": true, 00:14:17.191 "seek_hole": false, 00:14:17.191 "seek_data": false, 00:14:17.191 "copy": true, 00:14:17.191 "nvme_iov_md": false 00:14:17.191 }, 00:14:17.191 "memory_domains": [ 00:14:17.191 { 00:14:17.191 "dma_device_id": "system", 00:14:17.191 "dma_device_type": 1 00:14:17.191 }, 00:14:17.191 { 00:14:17.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.191 
"dma_device_type": 2 00:14:17.191 } 00:14:17.191 ], 00:14:17.191 "driver_specific": {} 00:14:17.191 } 00:14:17.191 ] 00:14:17.191 04:09:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:17.191 04:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:17.191 04:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:17.191 04:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:14:17.191 04:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:17.191 04:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:17.191 04:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:17.191 04:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:17.191 04:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:17.191 04:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:17.191 04:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:17.191 04:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:17.191 04:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:17.191 04:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:17.191 04:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:17.450 04:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:14:17.450 "name": "Existed_Raid", 00:14:17.450 "uuid": "8ae77c20-a40c-4137-9997-a981605cd7ac", 00:14:17.450 "strip_size_kb": 64, 00:14:17.450 "state": "online", 00:14:17.450 "raid_level": "raid0", 00:14:17.450 "superblock": false, 00:14:17.450 "num_base_bdevs": 2, 00:14:17.450 "num_base_bdevs_discovered": 2, 00:14:17.450 "num_base_bdevs_operational": 2, 00:14:17.450 "base_bdevs_list": [ 00:14:17.450 { 00:14:17.450 "name": "BaseBdev1", 00:14:17.450 "uuid": "5595c7d3-034b-41f5-8b27-acd63ab36d67", 00:14:17.450 "is_configured": true, 00:14:17.450 "data_offset": 0, 00:14:17.450 "data_size": 65536 00:14:17.450 }, 00:14:17.450 { 00:14:17.450 "name": "BaseBdev2", 00:14:17.450 "uuid": "e5722843-bd77-4ae6-bb7b-c4318133a2e0", 00:14:17.450 "is_configured": true, 00:14:17.450 "data_offset": 0, 00:14:17.450 "data_size": 65536 00:14:17.450 } 00:14:17.450 ] 00:14:17.450 }' 00:14:17.450 04:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:17.450 04:09:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.017 04:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:18.017 04:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:18.017 04:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:18.017 04:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:18.017 04:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:18.017 04:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:18.017 04:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:18.017 04:09:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:18.276 [2024-07-23 04:09:26.945271] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:18.276 04:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:18.276 "name": "Existed_Raid", 00:14:18.276 "aliases": [ 00:14:18.276 "8ae77c20-a40c-4137-9997-a981605cd7ac" 00:14:18.276 ], 00:14:18.276 "product_name": "Raid Volume", 00:14:18.276 "block_size": 512, 00:14:18.276 "num_blocks": 131072, 00:14:18.276 "uuid": "8ae77c20-a40c-4137-9997-a981605cd7ac", 00:14:18.276 "assigned_rate_limits": { 00:14:18.276 "rw_ios_per_sec": 0, 00:14:18.276 "rw_mbytes_per_sec": 0, 00:14:18.276 "r_mbytes_per_sec": 0, 00:14:18.276 "w_mbytes_per_sec": 0 00:14:18.276 }, 00:14:18.276 "claimed": false, 00:14:18.276 "zoned": false, 00:14:18.276 "supported_io_types": { 00:14:18.276 "read": true, 00:14:18.276 "write": true, 00:14:18.276 "unmap": true, 00:14:18.276 "flush": true, 00:14:18.276 "reset": true, 00:14:18.276 "nvme_admin": false, 00:14:18.276 "nvme_io": false, 00:14:18.276 "nvme_io_md": false, 00:14:18.276 "write_zeroes": true, 00:14:18.276 "zcopy": false, 00:14:18.276 "get_zone_info": false, 00:14:18.276 "zone_management": false, 00:14:18.276 "zone_append": false, 00:14:18.276 "compare": false, 00:14:18.276 "compare_and_write": false, 00:14:18.276 "abort": false, 00:14:18.276 "seek_hole": false, 00:14:18.276 "seek_data": false, 00:14:18.276 "copy": false, 00:14:18.276 "nvme_iov_md": false 00:14:18.276 }, 00:14:18.276 "memory_domains": [ 00:14:18.276 { 00:14:18.276 "dma_device_id": "system", 00:14:18.276 "dma_device_type": 1 00:14:18.276 }, 00:14:18.276 { 00:14:18.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.276 "dma_device_type": 2 00:14:18.276 }, 00:14:18.276 { 00:14:18.276 "dma_device_id": "system", 00:14:18.276 "dma_device_type": 1 00:14:18.276 }, 00:14:18.276 { 00:14:18.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:14:18.276 "dma_device_type": 2 00:14:18.276 } 00:14:18.276 ], 00:14:18.276 "driver_specific": { 00:14:18.276 "raid": { 00:14:18.276 "uuid": "8ae77c20-a40c-4137-9997-a981605cd7ac", 00:14:18.276 "strip_size_kb": 64, 00:14:18.276 "state": "online", 00:14:18.276 "raid_level": "raid0", 00:14:18.276 "superblock": false, 00:14:18.276 "num_base_bdevs": 2, 00:14:18.276 "num_base_bdevs_discovered": 2, 00:14:18.276 "num_base_bdevs_operational": 2, 00:14:18.276 "base_bdevs_list": [ 00:14:18.276 { 00:14:18.276 "name": "BaseBdev1", 00:14:18.276 "uuid": "5595c7d3-034b-41f5-8b27-acd63ab36d67", 00:14:18.276 "is_configured": true, 00:14:18.276 "data_offset": 0, 00:14:18.276 "data_size": 65536 00:14:18.276 }, 00:14:18.276 { 00:14:18.276 "name": "BaseBdev2", 00:14:18.276 "uuid": "e5722843-bd77-4ae6-bb7b-c4318133a2e0", 00:14:18.276 "is_configured": true, 00:14:18.276 "data_offset": 0, 00:14:18.276 "data_size": 65536 00:14:18.276 } 00:14:18.276 ] 00:14:18.276 } 00:14:18.276 } 00:14:18.276 }' 00:14:18.277 04:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:18.277 04:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:18.277 BaseBdev2' 00:14:18.277 04:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:18.277 04:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:18.277 04:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:18.535 04:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:18.535 "name": "BaseBdev1", 00:14:18.535 "aliases": [ 00:14:18.535 "5595c7d3-034b-41f5-8b27-acd63ab36d67" 00:14:18.535 ], 00:14:18.535 "product_name": "Malloc disk", 
00:14:18.535 "block_size": 512, 00:14:18.535 "num_blocks": 65536, 00:14:18.535 "uuid": "5595c7d3-034b-41f5-8b27-acd63ab36d67", 00:14:18.535 "assigned_rate_limits": { 00:14:18.535 "rw_ios_per_sec": 0, 00:14:18.535 "rw_mbytes_per_sec": 0, 00:14:18.535 "r_mbytes_per_sec": 0, 00:14:18.535 "w_mbytes_per_sec": 0 00:14:18.535 }, 00:14:18.535 "claimed": true, 00:14:18.535 "claim_type": "exclusive_write", 00:14:18.535 "zoned": false, 00:14:18.535 "supported_io_types": { 00:14:18.535 "read": true, 00:14:18.535 "write": true, 00:14:18.535 "unmap": true, 00:14:18.535 "flush": true, 00:14:18.535 "reset": true, 00:14:18.535 "nvme_admin": false, 00:14:18.535 "nvme_io": false, 00:14:18.535 "nvme_io_md": false, 00:14:18.535 "write_zeroes": true, 00:14:18.535 "zcopy": true, 00:14:18.535 "get_zone_info": false, 00:14:18.535 "zone_management": false, 00:14:18.535 "zone_append": false, 00:14:18.535 "compare": false, 00:14:18.535 "compare_and_write": false, 00:14:18.535 "abort": true, 00:14:18.535 "seek_hole": false, 00:14:18.535 "seek_data": false, 00:14:18.535 "copy": true, 00:14:18.535 "nvme_iov_md": false 00:14:18.535 }, 00:14:18.535 "memory_domains": [ 00:14:18.535 { 00:14:18.535 "dma_device_id": "system", 00:14:18.535 "dma_device_type": 1 00:14:18.535 }, 00:14:18.535 { 00:14:18.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.535 "dma_device_type": 2 00:14:18.535 } 00:14:18.535 ], 00:14:18.535 "driver_specific": {} 00:14:18.535 }' 00:14:18.536 04:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:18.536 04:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:18.795 04:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:18.795 04:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:18.795 04:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:18.795 04:09:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:18.795 04:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:18.795 04:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:18.795 04:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:18.795 04:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:18.795 04:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:19.054 04:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:19.054 04:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:19.054 04:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:19.054 04:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:19.054 04:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:19.054 "name": "BaseBdev2", 00:14:19.054 "aliases": [ 00:14:19.054 "e5722843-bd77-4ae6-bb7b-c4318133a2e0" 00:14:19.054 ], 00:14:19.054 "product_name": "Malloc disk", 00:14:19.054 "block_size": 512, 00:14:19.054 "num_blocks": 65536, 00:14:19.054 "uuid": "e5722843-bd77-4ae6-bb7b-c4318133a2e0", 00:14:19.054 "assigned_rate_limits": { 00:14:19.054 "rw_ios_per_sec": 0, 00:14:19.054 "rw_mbytes_per_sec": 0, 00:14:19.054 "r_mbytes_per_sec": 0, 00:14:19.054 "w_mbytes_per_sec": 0 00:14:19.054 }, 00:14:19.054 "claimed": true, 00:14:19.054 "claim_type": "exclusive_write", 00:14:19.054 "zoned": false, 00:14:19.054 "supported_io_types": { 00:14:19.054 "read": true, 00:14:19.054 "write": true, 00:14:19.054 "unmap": true, 00:14:19.054 "flush": true, 00:14:19.054 "reset": 
true, 00:14:19.054 "nvme_admin": false, 00:14:19.054 "nvme_io": false, 00:14:19.054 "nvme_io_md": false, 00:14:19.054 "write_zeroes": true, 00:14:19.054 "zcopy": true, 00:14:19.054 "get_zone_info": false, 00:14:19.054 "zone_management": false, 00:14:19.054 "zone_append": false, 00:14:19.054 "compare": false, 00:14:19.054 "compare_and_write": false, 00:14:19.054 "abort": true, 00:14:19.054 "seek_hole": false, 00:14:19.054 "seek_data": false, 00:14:19.054 "copy": true, 00:14:19.054 "nvme_iov_md": false 00:14:19.054 }, 00:14:19.054 "memory_domains": [ 00:14:19.054 { 00:14:19.054 "dma_device_id": "system", 00:14:19.054 "dma_device_type": 1 00:14:19.054 }, 00:14:19.054 { 00:14:19.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:19.054 "dma_device_type": 2 00:14:19.054 } 00:14:19.054 ], 00:14:19.054 "driver_specific": {} 00:14:19.054 }' 00:14:19.054 04:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:19.313 04:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:19.313 04:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:19.313 04:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:19.313 04:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:19.313 04:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:19.313 04:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:19.313 04:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:19.313 04:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:19.313 04:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:19.573 04:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:19.573 04:09:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:19.573 04:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:19.832 [2024-07-23 04:09:28.372834] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:19.832 [2024-07-23 04:09:28.372870] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:19.832 [2024-07-23 04:09:28.372928] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:19.832 04:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:19.832 04:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:14:19.832 04:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:19.832 04:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:19.832 04:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:14:19.832 04:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:14:19.832 04:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:19.832 04:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:14:19.832 04:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:19.832 04:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:19.832 04:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:14:19.832 04:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:14:19.832 04:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:19.832 04:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:19.832 04:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:19.832 04:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.832 04:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:20.092 04:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:20.092 "name": "Existed_Raid", 00:14:20.092 "uuid": "8ae77c20-a40c-4137-9997-a981605cd7ac", 00:14:20.092 "strip_size_kb": 64, 00:14:20.092 "state": "offline", 00:14:20.092 "raid_level": "raid0", 00:14:20.092 "superblock": false, 00:14:20.092 "num_base_bdevs": 2, 00:14:20.092 "num_base_bdevs_discovered": 1, 00:14:20.092 "num_base_bdevs_operational": 1, 00:14:20.092 "base_bdevs_list": [ 00:14:20.092 { 00:14:20.092 "name": null, 00:14:20.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.092 "is_configured": false, 00:14:20.092 "data_offset": 0, 00:14:20.092 "data_size": 65536 00:14:20.092 }, 00:14:20.092 { 00:14:20.092 "name": "BaseBdev2", 00:14:20.092 "uuid": "e5722843-bd77-4ae6-bb7b-c4318133a2e0", 00:14:20.092 "is_configured": true, 00:14:20.092 "data_offset": 0, 00:14:20.092 "data_size": 65536 00:14:20.092 } 00:14:20.092 ] 00:14:20.092 }' 00:14:20.092 04:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:20.092 04:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.660 04:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:20.660 04:09:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:20.660 04:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:20.660 04:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:20.919 04:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:20.919 04:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:20.919 04:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:20.919 [2024-07-23 04:09:29.667005] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:20.919 [2024-07-23 04:09:29.667062] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:14:21.178 04:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:21.178 04:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:21.178 04:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:21.178 04:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:21.437 04:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:21.437 04:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:14:21.437 04:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:14:21.437 04:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 
2613947 00:14:21.437 04:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 2613947 ']' 00:14:21.437 04:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 2613947 00:14:21.438 04:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:14:21.438 04:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:21.438 04:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2613947 00:14:21.438 04:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:21.438 04:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:21.438 04:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2613947' 00:14:21.438 killing process with pid 2613947 00:14:21.438 04:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 2613947 00:14:21.438 [2024-07-23 04:09:30.100363] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:21.438 04:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 2613947 00:14:21.438 [2024-07-23 04:09:30.124161] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:23.344 04:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:14:23.344 00:14:23.344 real 0m11.995s 00:14:23.344 user 0m19.466s 00:14:23.344 sys 0m2.069s 00:14:23.344 04:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:23.344 04:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.344 ************************************ 00:14:23.344 END TEST raid_state_function_test 00:14:23.344 ************************************ 00:14:23.344 04:09:31 
bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:23.344 04:09:31 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:14:23.344 04:09:31 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:23.344 04:09:31 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:23.344 04:09:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:23.344 ************************************ 00:14:23.344 START TEST raid_state_function_test_sb 00:14:23.344 ************************************ 00:14:23.344 04:09:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 2 true 00:14:23.344 04:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:14:23.344 04:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:14:23.344 04:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:14:23.344 04:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:23.344 04:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:23.344 04:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:23.344 04:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:14:23.344 04:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:23.344 04:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:23.344 04:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:14:23.344 04:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:23.344 04:09:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:23.344 04:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:23.344 04:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:23.344 04:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:23.344 04:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:23.344 04:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:23.344 04:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:23.344 04:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:14:23.344 04:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:14:23.344 04:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:14:23.344 04:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:14:23.344 04:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:14:23.344 04:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2616287 00:14:23.344 04:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2616287' 00:14:23.344 Process raid pid: 2616287 00:14:23.344 04:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:23.344 04:09:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2616287 /var/tmp/spdk-raid.sock 00:14:23.345 04:09:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@829 -- # '[' -z 2616287 ']' 00:14:23.345 04:09:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:23.345 04:09:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:23.345 04:09:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:23.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:23.345 04:09:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:23.345 04:09:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.345 [2024-07-23 04:09:32.081194] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:14:23.345 [2024-07-23 04:09:32.081306] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:23.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:23.604 EAL: Requested device 0000:3d:01.0 cannot be used 00:14:23.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:23.604 EAL: Requested device 0000:3d:01.1 cannot be used 00:14:23.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:23.604 EAL: Requested device 0000:3d:01.2 cannot be used 00:14:23.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:23.604 EAL: Requested device 0000:3d:01.3 cannot be used 00:14:23.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:23.604 EAL: Requested device 0000:3d:01.4 cannot be used 00:14:23.604 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:14:23.604 EAL: Requested device 0000:3d:01.5 cannot be used 00:14:23.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:23.604 EAL: Requested device 0000:3d:01.6 cannot be used 00:14:23.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:23.604 EAL: Requested device 0000:3d:01.7 cannot be used 00:14:23.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:23.604 EAL: Requested device 0000:3d:02.0 cannot be used 00:14:23.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:23.604 EAL: Requested device 0000:3d:02.1 cannot be used 00:14:23.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:23.604 EAL: Requested device 0000:3d:02.2 cannot be used 00:14:23.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:23.604 EAL: Requested device 0000:3d:02.3 cannot be used 00:14:23.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:23.604 EAL: Requested device 0000:3d:02.4 cannot be used 00:14:23.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:23.604 EAL: Requested device 0000:3d:02.5 cannot be used 00:14:23.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:23.604 EAL: Requested device 0000:3d:02.6 cannot be used 00:14:23.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:23.604 EAL: Requested device 0000:3d:02.7 cannot be used 00:14:23.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:23.604 EAL: Requested device 0000:3f:01.0 cannot be used 00:14:23.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:23.604 EAL: Requested device 0000:3f:01.1 cannot be used 00:14:23.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:23.604 EAL: Requested device 0000:3f:01.2 cannot be used 00:14:23.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:23.604 
EAL: Requested device 0000:3f:01.3 cannot be used 00:14:23.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:23.604 EAL: Requested device 0000:3f:01.4 cannot be used 00:14:23.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:23.604 EAL: Requested device 0000:3f:01.5 cannot be used 00:14:23.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:23.604 EAL: Requested device 0000:3f:01.6 cannot be used 00:14:23.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:23.604 EAL: Requested device 0000:3f:01.7 cannot be used 00:14:23.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:23.605 EAL: Requested device 0000:3f:02.0 cannot be used 00:14:23.605 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:23.605 EAL: Requested device 0000:3f:02.1 cannot be used 00:14:23.605 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:23.605 EAL: Requested device 0000:3f:02.2 cannot be used 00:14:23.605 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:23.605 EAL: Requested device 0000:3f:02.3 cannot be used 00:14:23.605 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:23.605 EAL: Requested device 0000:3f:02.4 cannot be used 00:14:23.605 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:23.605 EAL: Requested device 0000:3f:02.5 cannot be used 00:14:23.605 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:23.605 EAL: Requested device 0000:3f:02.6 cannot be used 00:14:23.605 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:23.605 EAL: Requested device 0000:3f:02.7 cannot be used 00:14:23.605 [2024-07-23 04:09:32.307242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.863 [2024-07-23 04:09:32.589388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.431 [2024-07-23 04:09:32.912885] 
bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:24.431 [2024-07-23 04:09:32.912920] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:24.431 04:09:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:24.431 04:09:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:14:24.431 04:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:24.694 [2024-07-23 04:09:33.296923] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:24.694 [2024-07-23 04:09:33.296982] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:24.694 [2024-07-23 04:09:33.297003] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:24.694 [2024-07-23 04:09:33.297020] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:24.694 04:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:24.694 04:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:24.695 04:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:24.695 04:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:24.695 04:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:24.695 04:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:24.695 04:09:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:24.695 04:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:24.695 04:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:24.695 04:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:24.695 04:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:24.695 04:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.956 04:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:24.956 "name": "Existed_Raid", 00:14:24.956 "uuid": "c0a5b269-483b-4350-a234-555ca9f4c711", 00:14:24.956 "strip_size_kb": 64, 00:14:24.956 "state": "configuring", 00:14:24.956 "raid_level": "raid0", 00:14:24.956 "superblock": true, 00:14:24.956 "num_base_bdevs": 2, 00:14:24.956 "num_base_bdevs_discovered": 0, 00:14:24.956 "num_base_bdevs_operational": 2, 00:14:24.956 "base_bdevs_list": [ 00:14:24.956 { 00:14:24.956 "name": "BaseBdev1", 00:14:24.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.956 "is_configured": false, 00:14:24.956 "data_offset": 0, 00:14:24.956 "data_size": 0 00:14:24.956 }, 00:14:24.956 { 00:14:24.956 "name": "BaseBdev2", 00:14:24.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.956 "is_configured": false, 00:14:24.956 "data_offset": 0, 00:14:24.956 "data_size": 0 00:14:24.956 } 00:14:24.956 ] 00:14:24.956 }' 00:14:24.956 04:09:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:24.956 04:09:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.524 04:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:25.524 [2024-07-23 04:09:34.227300] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:25.524 [2024-07-23 04:09:34.227339] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:14:25.524 04:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:25.782 [2024-07-23 04:09:34.459990] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:25.782 [2024-07-23 04:09:34.460033] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:25.782 [2024-07-23 04:09:34.460047] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:25.782 [2024-07-23 04:09:34.460064] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:25.782 04:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:26.041 [2024-07-23 04:09:34.731992] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:26.041 BaseBdev1 00:14:26.041 04:09:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:26.041 04:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:26.041 04:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:26.041 04:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local 
i 00:14:26.041 04:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:26.041 04:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:26.041 04:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:26.299 04:09:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:26.559 [ 00:14:26.559 { 00:14:26.559 "name": "BaseBdev1", 00:14:26.559 "aliases": [ 00:14:26.559 "d6a70078-ee44-4b06-8ea8-4661770b485d" 00:14:26.559 ], 00:14:26.559 "product_name": "Malloc disk", 00:14:26.559 "block_size": 512, 00:14:26.559 "num_blocks": 65536, 00:14:26.559 "uuid": "d6a70078-ee44-4b06-8ea8-4661770b485d", 00:14:26.559 "assigned_rate_limits": { 00:14:26.559 "rw_ios_per_sec": 0, 00:14:26.559 "rw_mbytes_per_sec": 0, 00:14:26.559 "r_mbytes_per_sec": 0, 00:14:26.559 "w_mbytes_per_sec": 0 00:14:26.559 }, 00:14:26.559 "claimed": true, 00:14:26.559 "claim_type": "exclusive_write", 00:14:26.559 "zoned": false, 00:14:26.559 "supported_io_types": { 00:14:26.559 "read": true, 00:14:26.559 "write": true, 00:14:26.559 "unmap": true, 00:14:26.559 "flush": true, 00:14:26.559 "reset": true, 00:14:26.559 "nvme_admin": false, 00:14:26.559 "nvme_io": false, 00:14:26.559 "nvme_io_md": false, 00:14:26.559 "write_zeroes": true, 00:14:26.559 "zcopy": true, 00:14:26.559 "get_zone_info": false, 00:14:26.559 "zone_management": false, 00:14:26.559 "zone_append": false, 00:14:26.559 "compare": false, 00:14:26.559 "compare_and_write": false, 00:14:26.559 "abort": true, 00:14:26.559 "seek_hole": false, 00:14:26.559 "seek_data": false, 00:14:26.559 "copy": true, 00:14:26.559 "nvme_iov_md": false 00:14:26.559 }, 00:14:26.559 
"memory_domains": [ 00:14:26.559 { 00:14:26.559 "dma_device_id": "system", 00:14:26.559 "dma_device_type": 1 00:14:26.559 }, 00:14:26.559 { 00:14:26.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.559 "dma_device_type": 2 00:14:26.559 } 00:14:26.559 ], 00:14:26.559 "driver_specific": {} 00:14:26.559 } 00:14:26.559 ] 00:14:26.559 04:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:26.559 04:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:26.559 04:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:26.559 04:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:26.559 04:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:26.559 04:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:26.559 04:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:26.559 04:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:26.559 04:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:26.559 04:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:26.559 04:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:26.559 04:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:26.559 04:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:26.559 04:09:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:26.559 "name": "Existed_Raid", 00:14:26.559 "uuid": "7744f99c-0ad4-4c8a-a6f7-386875336c8b", 00:14:26.559 "strip_size_kb": 64, 00:14:26.559 "state": "configuring", 00:14:26.559 "raid_level": "raid0", 00:14:26.559 "superblock": true, 00:14:26.559 "num_base_bdevs": 2, 00:14:26.559 "num_base_bdevs_discovered": 1, 00:14:26.559 "num_base_bdevs_operational": 2, 00:14:26.559 "base_bdevs_list": [ 00:14:26.559 { 00:14:26.559 "name": "BaseBdev1", 00:14:26.559 "uuid": "d6a70078-ee44-4b06-8ea8-4661770b485d", 00:14:26.559 "is_configured": true, 00:14:26.559 "data_offset": 2048, 00:14:26.559 "data_size": 63488 00:14:26.559 }, 00:14:26.559 { 00:14:26.559 "name": "BaseBdev2", 00:14:26.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.559 "is_configured": false, 00:14:26.559 "data_offset": 0, 00:14:26.559 "data_size": 0 00:14:26.559 } 00:14:26.559 ] 00:14:26.559 }' 00:14:26.559 04:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:26.559 04:09:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.127 04:09:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:27.385 [2024-07-23 04:09:36.079660] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:27.385 [2024-07-23 04:09:36.079714] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:14:27.385 04:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:27.643 [2024-07-23 04:09:36.304360] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:27.644 [2024-07-23 04:09:36.306652] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:27.644 [2024-07-23 04:09:36.306696] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:27.644 04:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:27.644 04:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:27.644 04:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:27.644 04:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:27.644 04:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:27.644 04:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:27.644 04:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:27.644 04:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:27.644 04:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:27.644 04:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:27.644 04:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:27.644 04:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:27.644 04:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:27.644 04:09:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:27.902 04:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:27.902 "name": "Existed_Raid", 00:14:27.902 "uuid": "01726af7-3410-4913-a8ff-f2f760101d85", 00:14:27.902 "strip_size_kb": 64, 00:14:27.902 "state": "configuring", 00:14:27.902 "raid_level": "raid0", 00:14:27.902 "superblock": true, 00:14:27.902 "num_base_bdevs": 2, 00:14:27.902 "num_base_bdevs_discovered": 1, 00:14:27.902 "num_base_bdevs_operational": 2, 00:14:27.902 "base_bdevs_list": [ 00:14:27.902 { 00:14:27.902 "name": "BaseBdev1", 00:14:27.902 "uuid": "d6a70078-ee44-4b06-8ea8-4661770b485d", 00:14:27.903 "is_configured": true, 00:14:27.903 "data_offset": 2048, 00:14:27.903 "data_size": 63488 00:14:27.903 }, 00:14:27.903 { 00:14:27.903 "name": "BaseBdev2", 00:14:27.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.903 "is_configured": false, 00:14:27.903 "data_offset": 0, 00:14:27.903 "data_size": 0 00:14:27.903 } 00:14:27.903 ] 00:14:27.903 }' 00:14:27.903 04:09:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:27.903 04:09:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.469 04:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:28.728 [2024-07-23 04:09:37.340182] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:28.728 [2024-07-23 04:09:37.340456] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:14:28.728 [2024-07-23 04:09:37.340476] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:28.728 [2024-07-23 04:09:37.340811] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000010570 00:14:28.728 [2024-07-23 04:09:37.341019] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:14:28.728 [2024-07-23 04:09:37.341036] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:14:28.728 [2024-07-23 04:09:37.341227] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:28.728 BaseBdev2 00:14:28.728 04:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:28.728 04:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:28.728 04:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:28.728 04:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:28.728 04:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:28.728 04:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:28.728 04:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:28.986 04:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:29.248 [ 00:14:29.248 { 00:14:29.248 "name": "BaseBdev2", 00:14:29.248 "aliases": [ 00:14:29.248 "305512e6-0304-48a9-bb65-1d66ea989c2b" 00:14:29.248 ], 00:14:29.248 "product_name": "Malloc disk", 00:14:29.248 "block_size": 512, 00:14:29.248 "num_blocks": 65536, 00:14:29.248 "uuid": "305512e6-0304-48a9-bb65-1d66ea989c2b", 00:14:29.248 "assigned_rate_limits": { 00:14:29.248 "rw_ios_per_sec": 0, 00:14:29.248 
"rw_mbytes_per_sec": 0, 00:14:29.248 "r_mbytes_per_sec": 0, 00:14:29.248 "w_mbytes_per_sec": 0 00:14:29.248 }, 00:14:29.248 "claimed": true, 00:14:29.248 "claim_type": "exclusive_write", 00:14:29.248 "zoned": false, 00:14:29.248 "supported_io_types": { 00:14:29.248 "read": true, 00:14:29.248 "write": true, 00:14:29.248 "unmap": true, 00:14:29.248 "flush": true, 00:14:29.248 "reset": true, 00:14:29.248 "nvme_admin": false, 00:14:29.248 "nvme_io": false, 00:14:29.248 "nvme_io_md": false, 00:14:29.248 "write_zeroes": true, 00:14:29.248 "zcopy": true, 00:14:29.248 "get_zone_info": false, 00:14:29.248 "zone_management": false, 00:14:29.248 "zone_append": false, 00:14:29.248 "compare": false, 00:14:29.248 "compare_and_write": false, 00:14:29.248 "abort": true, 00:14:29.248 "seek_hole": false, 00:14:29.248 "seek_data": false, 00:14:29.248 "copy": true, 00:14:29.248 "nvme_iov_md": false 00:14:29.248 }, 00:14:29.248 "memory_domains": [ 00:14:29.248 { 00:14:29.248 "dma_device_id": "system", 00:14:29.248 "dma_device_type": 1 00:14:29.248 }, 00:14:29.248 { 00:14:29.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.248 "dma_device_type": 2 00:14:29.248 } 00:14:29.248 ], 00:14:29.248 "driver_specific": {} 00:14:29.248 } 00:14:29.248 ] 00:14:29.248 04:09:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:29.248 04:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:29.248 04:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:29.248 04:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:14:29.248 04:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:29.248 04:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:29.248 04:09:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:29.248 04:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:29.248 04:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:29.248 04:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:29.248 04:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:29.248 04:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:29.248 04:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:29.248 04:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:29.248 04:09:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:29.553 04:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:29.553 "name": "Existed_Raid", 00:14:29.553 "uuid": "01726af7-3410-4913-a8ff-f2f760101d85", 00:14:29.553 "strip_size_kb": 64, 00:14:29.553 "state": "online", 00:14:29.553 "raid_level": "raid0", 00:14:29.553 "superblock": true, 00:14:29.553 "num_base_bdevs": 2, 00:14:29.553 "num_base_bdevs_discovered": 2, 00:14:29.553 "num_base_bdevs_operational": 2, 00:14:29.553 "base_bdevs_list": [ 00:14:29.553 { 00:14:29.553 "name": "BaseBdev1", 00:14:29.553 "uuid": "d6a70078-ee44-4b06-8ea8-4661770b485d", 00:14:29.553 "is_configured": true, 00:14:29.553 "data_offset": 2048, 00:14:29.553 "data_size": 63488 00:14:29.553 }, 00:14:29.553 { 00:14:29.553 "name": "BaseBdev2", 00:14:29.553 "uuid": "305512e6-0304-48a9-bb65-1d66ea989c2b", 00:14:29.553 "is_configured": true, 00:14:29.553 
"data_offset": 2048, 00:14:29.553 "data_size": 63488 00:14:29.553 } 00:14:29.553 ] 00:14:29.553 }' 00:14:29.553 04:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:29.553 04:09:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.121 04:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:30.121 04:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:30.121 04:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:30.122 04:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:30.122 04:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:30.122 04:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:14:30.122 04:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:30.122 04:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:30.122 [2024-07-23 04:09:38.820547] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:30.122 04:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:30.122 "name": "Existed_Raid", 00:14:30.122 "aliases": [ 00:14:30.122 "01726af7-3410-4913-a8ff-f2f760101d85" 00:14:30.122 ], 00:14:30.122 "product_name": "Raid Volume", 00:14:30.122 "block_size": 512, 00:14:30.122 "num_blocks": 126976, 00:14:30.122 "uuid": "01726af7-3410-4913-a8ff-f2f760101d85", 00:14:30.122 "assigned_rate_limits": { 00:14:30.122 "rw_ios_per_sec": 0, 00:14:30.122 "rw_mbytes_per_sec": 0, 00:14:30.122 "r_mbytes_per_sec": 0, 00:14:30.122 
"w_mbytes_per_sec": 0 00:14:30.122 }, 00:14:30.122 "claimed": false, 00:14:30.122 "zoned": false, 00:14:30.122 "supported_io_types": { 00:14:30.122 "read": true, 00:14:30.122 "write": true, 00:14:30.122 "unmap": true, 00:14:30.122 "flush": true, 00:14:30.122 "reset": true, 00:14:30.122 "nvme_admin": false, 00:14:30.122 "nvme_io": false, 00:14:30.122 "nvme_io_md": false, 00:14:30.122 "write_zeroes": true, 00:14:30.122 "zcopy": false, 00:14:30.122 "get_zone_info": false, 00:14:30.122 "zone_management": false, 00:14:30.122 "zone_append": false, 00:14:30.122 "compare": false, 00:14:30.122 "compare_and_write": false, 00:14:30.122 "abort": false, 00:14:30.122 "seek_hole": false, 00:14:30.122 "seek_data": false, 00:14:30.122 "copy": false, 00:14:30.122 "nvme_iov_md": false 00:14:30.122 }, 00:14:30.122 "memory_domains": [ 00:14:30.122 { 00:14:30.122 "dma_device_id": "system", 00:14:30.122 "dma_device_type": 1 00:14:30.122 }, 00:14:30.122 { 00:14:30.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.122 "dma_device_type": 2 00:14:30.122 }, 00:14:30.122 { 00:14:30.122 "dma_device_id": "system", 00:14:30.122 "dma_device_type": 1 00:14:30.122 }, 00:14:30.122 { 00:14:30.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.122 "dma_device_type": 2 00:14:30.122 } 00:14:30.122 ], 00:14:30.122 "driver_specific": { 00:14:30.122 "raid": { 00:14:30.122 "uuid": "01726af7-3410-4913-a8ff-f2f760101d85", 00:14:30.122 "strip_size_kb": 64, 00:14:30.122 "state": "online", 00:14:30.122 "raid_level": "raid0", 00:14:30.122 "superblock": true, 00:14:30.122 "num_base_bdevs": 2, 00:14:30.122 "num_base_bdevs_discovered": 2, 00:14:30.122 "num_base_bdevs_operational": 2, 00:14:30.122 "base_bdevs_list": [ 00:14:30.122 { 00:14:30.122 "name": "BaseBdev1", 00:14:30.122 "uuid": "d6a70078-ee44-4b06-8ea8-4661770b485d", 00:14:30.122 "is_configured": true, 00:14:30.122 "data_offset": 2048, 00:14:30.122 "data_size": 63488 00:14:30.122 }, 00:14:30.122 { 00:14:30.122 "name": "BaseBdev2", 00:14:30.122 
"uuid": "305512e6-0304-48a9-bb65-1d66ea989c2b", 00:14:30.122 "is_configured": true, 00:14:30.122 "data_offset": 2048, 00:14:30.122 "data_size": 63488 00:14:30.122 } 00:14:30.122 ] 00:14:30.122 } 00:14:30.122 } 00:14:30.122 }' 00:14:30.122 04:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:30.122 04:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:30.122 BaseBdev2' 00:14:30.122 04:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:30.122 04:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:30.122 04:09:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:30.381 04:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:30.381 "name": "BaseBdev1", 00:14:30.381 "aliases": [ 00:14:30.381 "d6a70078-ee44-4b06-8ea8-4661770b485d" 00:14:30.381 ], 00:14:30.381 "product_name": "Malloc disk", 00:14:30.381 "block_size": 512, 00:14:30.381 "num_blocks": 65536, 00:14:30.381 "uuid": "d6a70078-ee44-4b06-8ea8-4661770b485d", 00:14:30.381 "assigned_rate_limits": { 00:14:30.381 "rw_ios_per_sec": 0, 00:14:30.381 "rw_mbytes_per_sec": 0, 00:14:30.381 "r_mbytes_per_sec": 0, 00:14:30.381 "w_mbytes_per_sec": 0 00:14:30.381 }, 00:14:30.381 "claimed": true, 00:14:30.381 "claim_type": "exclusive_write", 00:14:30.381 "zoned": false, 00:14:30.381 "supported_io_types": { 00:14:30.381 "read": true, 00:14:30.381 "write": true, 00:14:30.381 "unmap": true, 00:14:30.381 "flush": true, 00:14:30.381 "reset": true, 00:14:30.381 "nvme_admin": false, 00:14:30.381 "nvme_io": false, 00:14:30.381 "nvme_io_md": false, 00:14:30.381 "write_zeroes": true, 
00:14:30.381 "zcopy": true, 00:14:30.381 "get_zone_info": false, 00:14:30.381 "zone_management": false, 00:14:30.381 "zone_append": false, 00:14:30.381 "compare": false, 00:14:30.381 "compare_and_write": false, 00:14:30.381 "abort": true, 00:14:30.381 "seek_hole": false, 00:14:30.381 "seek_data": false, 00:14:30.381 "copy": true, 00:14:30.381 "nvme_iov_md": false 00:14:30.381 }, 00:14:30.381 "memory_domains": [ 00:14:30.381 { 00:14:30.381 "dma_device_id": "system", 00:14:30.381 "dma_device_type": 1 00:14:30.381 }, 00:14:30.381 { 00:14:30.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.381 "dma_device_type": 2 00:14:30.381 } 00:14:30.381 ], 00:14:30.381 "driver_specific": {} 00:14:30.381 }' 00:14:30.381 04:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:30.381 04:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:30.640 04:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:30.640 04:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:30.640 04:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:30.640 04:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:30.640 04:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:30.640 04:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:30.640 04:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:30.640 04:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:30.640 04:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:30.899 04:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:30.899 04:09:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:30.899 04:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:30.899 04:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:31.157 04:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:31.157 "name": "BaseBdev2", 00:14:31.157 "aliases": [ 00:14:31.157 "305512e6-0304-48a9-bb65-1d66ea989c2b" 00:14:31.157 ], 00:14:31.157 "product_name": "Malloc disk", 00:14:31.157 "block_size": 512, 00:14:31.157 "num_blocks": 65536, 00:14:31.157 "uuid": "305512e6-0304-48a9-bb65-1d66ea989c2b", 00:14:31.157 "assigned_rate_limits": { 00:14:31.157 "rw_ios_per_sec": 0, 00:14:31.157 "rw_mbytes_per_sec": 0, 00:14:31.157 "r_mbytes_per_sec": 0, 00:14:31.157 "w_mbytes_per_sec": 0 00:14:31.157 }, 00:14:31.157 "claimed": true, 00:14:31.157 "claim_type": "exclusive_write", 00:14:31.157 "zoned": false, 00:14:31.157 "supported_io_types": { 00:14:31.157 "read": true, 00:14:31.157 "write": true, 00:14:31.157 "unmap": true, 00:14:31.157 "flush": true, 00:14:31.157 "reset": true, 00:14:31.157 "nvme_admin": false, 00:14:31.157 "nvme_io": false, 00:14:31.157 "nvme_io_md": false, 00:14:31.157 "write_zeroes": true, 00:14:31.157 "zcopy": true, 00:14:31.157 "get_zone_info": false, 00:14:31.157 "zone_management": false, 00:14:31.157 "zone_append": false, 00:14:31.157 "compare": false, 00:14:31.157 "compare_and_write": false, 00:14:31.157 "abort": true, 00:14:31.157 "seek_hole": false, 00:14:31.157 "seek_data": false, 00:14:31.157 "copy": true, 00:14:31.157 "nvme_iov_md": false 00:14:31.157 }, 00:14:31.157 "memory_domains": [ 00:14:31.157 { 00:14:31.157 "dma_device_id": "system", 00:14:31.157 "dma_device_type": 1 00:14:31.157 }, 00:14:31.157 { 00:14:31.157 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:31.157 "dma_device_type": 2 00:14:31.157 } 00:14:31.157 ], 00:14:31.157 "driver_specific": {} 00:14:31.157 }' 00:14:31.157 04:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:31.157 04:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:31.157 04:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:31.157 04:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:31.157 04:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:31.157 04:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:31.157 04:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:31.157 04:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:31.157 04:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:31.157 04:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:31.416 04:09:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:31.416 04:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:31.416 04:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:31.675 [2024-07-23 04:09:40.204052] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:31.675 [2024-07-23 04:09:40.204089] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:31.675 [2024-07-23 04:09:40.204155] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:31.675 04:09:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:31.675 04:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:14:31.675 04:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:31.675 04:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:14:31.675 04:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:14:31.675 04:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:14:31.675 04:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:31.675 04:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:14:31.675 04:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:31.675 04:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:31.675 04:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:14:31.675 04:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:31.675 04:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:31.675 04:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:31.675 04:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:31.675 04:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:31.675 04:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:14:31.934 04:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:31.934 "name": "Existed_Raid", 00:14:31.934 "uuid": "01726af7-3410-4913-a8ff-f2f760101d85", 00:14:31.934 "strip_size_kb": 64, 00:14:31.934 "state": "offline", 00:14:31.934 "raid_level": "raid0", 00:14:31.934 "superblock": true, 00:14:31.934 "num_base_bdevs": 2, 00:14:31.934 "num_base_bdevs_discovered": 1, 00:14:31.934 "num_base_bdevs_operational": 1, 00:14:31.934 "base_bdevs_list": [ 00:14:31.934 { 00:14:31.934 "name": null, 00:14:31.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.934 "is_configured": false, 00:14:31.934 "data_offset": 2048, 00:14:31.934 "data_size": 63488 00:14:31.934 }, 00:14:31.934 { 00:14:31.934 "name": "BaseBdev2", 00:14:31.934 "uuid": "305512e6-0304-48a9-bb65-1d66ea989c2b", 00:14:31.934 "is_configured": true, 00:14:31.934 "data_offset": 2048, 00:14:31.934 "data_size": 63488 00:14:31.934 } 00:14:31.934 ] 00:14:31.934 }' 00:14:31.934 04:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:31.934 04:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.502 04:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:32.502 04:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:32.502 04:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:32.502 04:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:32.762 04:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:32.762 04:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:14:32.762 04:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:32.762 [2024-07-23 04:09:41.506694] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:32.762 [2024-07-23 04:09:41.506751] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:14:33.021 04:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:33.021 04:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:33.021 04:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:33.021 04:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:33.281 04:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:33.281 04:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:14:33.281 04:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:14:33.281 04:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 2616287 00:14:33.281 04:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 2616287 ']' 00:14:33.281 04:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 2616287 00:14:33.281 04:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:14:33.281 04:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:33.281 04:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps 
--no-headers -o comm= 2616287 00:14:33.281 04:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:33.281 04:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:33.281 04:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2616287' 00:14:33.281 killing process with pid 2616287 00:14:33.281 04:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 2616287 00:14:33.281 [2024-07-23 04:09:41.940417] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:33.281 04:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 2616287 00:14:33.281 [2024-07-23 04:09:41.962788] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:35.188 04:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:14:35.188 00:14:35.188 real 0m11.660s 00:14:35.188 user 0m19.007s 00:14:35.188 sys 0m2.065s 00:14:35.188 04:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:35.189 04:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.189 ************************************ 00:14:35.189 END TEST raid_state_function_test_sb 00:14:35.189 ************************************ 00:14:35.189 04:09:43 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:35.189 04:09:43 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:14:35.189 04:09:43 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:14:35.189 04:09:43 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:35.189 04:09:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:35.189 ************************************ 00:14:35.189 START TEST raid_superblock_test 
00:14:35.189 ************************************ 00:14:35.189 04:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 2 00:14:35.189 04:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:14:35.189 04:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:14:35.189 04:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:14:35.189 04:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:14:35.189 04:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:14:35.189 04:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:14:35.189 04:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:14:35.189 04:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:14:35.189 04:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:14:35.189 04:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:14:35.189 04:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:14:35.189 04:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:14:35.189 04:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:14:35.189 04:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:14:35.189 04:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:14:35.189 04:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:14:35.189 04:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:35.189 04:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=2618502 00:14:35.189 04:09:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 2618502 /var/tmp/spdk-raid.sock 00:14:35.189 04:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 2618502 ']' 00:14:35.189 04:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:35.189 04:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:35.189 04:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:35.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:35.189 04:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:35.189 04:09:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.189 [2024-07-23 04:09:43.773456] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:14:35.189 [2024-07-23 04:09:43.773549] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2618502 ] 00:14:35.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:35.189 EAL: Requested device 0000:3d:01.0 cannot be used 00:14:35.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:35.189 EAL: Requested device 0000:3d:01.1 cannot be used 00:14:35.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:35.189 EAL: Requested device 0000:3d:01.2 cannot be used 00:14:35.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:35.189 EAL: Requested device 0000:3d:01.3 cannot be used 00:14:35.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:35.189 EAL: Requested device 0000:3d:01.4 cannot be used 00:14:35.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:35.189 EAL: Requested device 0000:3d:01.5 cannot be used 00:14:35.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:35.189 EAL: Requested device 0000:3d:01.6 cannot be used 00:14:35.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:35.189 EAL: Requested device 0000:3d:01.7 cannot be used 00:14:35.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:35.189 EAL: Requested device 0000:3d:02.0 cannot be used 00:14:35.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:35.189 EAL: Requested device 0000:3d:02.1 cannot be used 00:14:35.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:35.189 EAL: Requested device 0000:3d:02.2 cannot be used 00:14:35.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:35.189 EAL: Requested device 0000:3d:02.3 cannot be used 
00:14:35.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:35.189 EAL: Requested device 0000:3d:02.4 cannot be used 00:14:35.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:35.189 EAL: Requested device 0000:3d:02.5 cannot be used 00:14:35.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:35.189 EAL: Requested device 0000:3d:02.6 cannot be used 00:14:35.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:35.189 EAL: Requested device 0000:3d:02.7 cannot be used 00:14:35.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:35.189 EAL: Requested device 0000:3f:01.0 cannot be used 00:14:35.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:35.189 EAL: Requested device 0000:3f:01.1 cannot be used 00:14:35.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:35.189 EAL: Requested device 0000:3f:01.2 cannot be used 00:14:35.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:35.189 EAL: Requested device 0000:3f:01.3 cannot be used 00:14:35.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:35.189 EAL: Requested device 0000:3f:01.4 cannot be used 00:14:35.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:35.189 EAL: Requested device 0000:3f:01.5 cannot be used 00:14:35.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:35.189 EAL: Requested device 0000:3f:01.6 cannot be used 00:14:35.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:35.189 EAL: Requested device 0000:3f:01.7 cannot be used 00:14:35.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:35.189 EAL: Requested device 0000:3f:02.0 cannot be used 00:14:35.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:35.189 EAL: Requested device 0000:3f:02.1 cannot be used 00:14:35.189 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:35.189 EAL: Requested device 0000:3f:02.2 cannot be used 00:14:35.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:35.189 EAL: Requested device 0000:3f:02.3 cannot be used 00:14:35.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:35.189 EAL: Requested device 0000:3f:02.4 cannot be used 00:14:35.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:35.189 EAL: Requested device 0000:3f:02.5 cannot be used 00:14:35.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:35.189 EAL: Requested device 0000:3f:02.6 cannot be used 00:14:35.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:35.189 EAL: Requested device 0000:3f:02.7 cannot be used 00:14:35.449 [2024-07-23 04:09:43.972293] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.708 [2024-07-23 04:09:44.253096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.967 [2024-07-23 04:09:44.593301] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:35.967 [2024-07-23 04:09:44.593334] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:36.227 04:09:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:36.227 04:09:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:14:36.227 04:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:14:36.227 04:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:36.227 04:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:14:36.227 04:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:14:36.227 04:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:36.227 04:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:36.227 04:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:36.227 04:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:36.227 04:09:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:36.486 malloc1 00:14:36.486 04:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:36.746 [2024-07-23 04:09:45.273569] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:36.746 [2024-07-23 04:09:45.273631] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.746 [2024-07-23 04:09:45.273662] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:14:36.746 [2024-07-23 04:09:45.273678] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.746 [2024-07-23 04:09:45.276450] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.746 [2024-07-23 04:09:45.276485] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:36.746 pt1 00:14:36.746 04:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:36.746 04:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:36.746 04:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:14:36.746 04:09:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:14:36.746 04:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:36.746 04:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:36.746 04:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:36.746 04:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:36.746 04:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:37.005 malloc2 00:14:37.005 04:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:37.005 [2024-07-23 04:09:45.777495] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:37.005 [2024-07-23 04:09:45.777552] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.005 [2024-07-23 04:09:45.777580] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:14:37.005 [2024-07-23 04:09:45.777596] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.005 [2024-07-23 04:09:45.780376] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.005 [2024-07-23 04:09:45.780415] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:37.005 pt2 00:14:37.265 04:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:37.265 04:09:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:37.265 04:09:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:14:37.265 [2024-07-23 04:09:46.006134] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:37.265 [2024-07-23 04:09:46.008485] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:37.265 [2024-07-23 04:09:46.008704] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000040880 00:14:37.265 [2024-07-23 04:09:46.008722] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:37.265 [2024-07-23 04:09:46.009062] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:14:37.265 [2024-07-23 04:09:46.009325] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000040880 00:14:37.265 [2024-07-23 04:09:46.009344] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000040880 00:14:37.265 [2024-07-23 04:09:46.009547] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.265 04:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:37.265 04:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:37.265 04:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:37.265 04:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:37.265 04:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:37.265 04:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:37.265 04:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:14:37.265 04:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:37.265 04:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:37.265 04:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:37.265 04:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:37.265 04:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.524 04:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:37.524 "name": "raid_bdev1", 00:14:37.524 "uuid": "c9cf0545-50ce-43dd-86b9-0056afe501ab", 00:14:37.524 "strip_size_kb": 64, 00:14:37.524 "state": "online", 00:14:37.524 "raid_level": "raid0", 00:14:37.524 "superblock": true, 00:14:37.524 "num_base_bdevs": 2, 00:14:37.524 "num_base_bdevs_discovered": 2, 00:14:37.524 "num_base_bdevs_operational": 2, 00:14:37.524 "base_bdevs_list": [ 00:14:37.524 { 00:14:37.524 "name": "pt1", 00:14:37.524 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:37.524 "is_configured": true, 00:14:37.524 "data_offset": 2048, 00:14:37.524 "data_size": 63488 00:14:37.524 }, 00:14:37.524 { 00:14:37.524 "name": "pt2", 00:14:37.524 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:37.524 "is_configured": true, 00:14:37.524 "data_offset": 2048, 00:14:37.524 "data_size": 63488 00:14:37.524 } 00:14:37.524 ] 00:14:37.524 }' 00:14:37.524 04:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:37.524 04:09:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.092 04:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:14:38.092 04:09:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:14:38.092 04:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:38.092 04:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:38.092 04:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:38.092 04:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:38.092 04:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:38.092 04:09:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:38.352 [2024-07-23 04:09:47.025191] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:38.352 04:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:38.352 "name": "raid_bdev1", 00:14:38.352 "aliases": [ 00:14:38.352 "c9cf0545-50ce-43dd-86b9-0056afe501ab" 00:14:38.352 ], 00:14:38.352 "product_name": "Raid Volume", 00:14:38.352 "block_size": 512, 00:14:38.352 "num_blocks": 126976, 00:14:38.352 "uuid": "c9cf0545-50ce-43dd-86b9-0056afe501ab", 00:14:38.352 "assigned_rate_limits": { 00:14:38.352 "rw_ios_per_sec": 0, 00:14:38.352 "rw_mbytes_per_sec": 0, 00:14:38.352 "r_mbytes_per_sec": 0, 00:14:38.352 "w_mbytes_per_sec": 0 00:14:38.352 }, 00:14:38.352 "claimed": false, 00:14:38.352 "zoned": false, 00:14:38.352 "supported_io_types": { 00:14:38.352 "read": true, 00:14:38.352 "write": true, 00:14:38.352 "unmap": true, 00:14:38.352 "flush": true, 00:14:38.352 "reset": true, 00:14:38.352 "nvme_admin": false, 00:14:38.352 "nvme_io": false, 00:14:38.352 "nvme_io_md": false, 00:14:38.352 "write_zeroes": true, 00:14:38.352 "zcopy": false, 00:14:38.352 "get_zone_info": false, 00:14:38.352 "zone_management": false, 00:14:38.352 "zone_append": false, 
00:14:38.352 "compare": false, 00:14:38.352 "compare_and_write": false, 00:14:38.352 "abort": false, 00:14:38.352 "seek_hole": false, 00:14:38.352 "seek_data": false, 00:14:38.352 "copy": false, 00:14:38.352 "nvme_iov_md": false 00:14:38.352 }, 00:14:38.352 "memory_domains": [ 00:14:38.352 { 00:14:38.352 "dma_device_id": "system", 00:14:38.352 "dma_device_type": 1 00:14:38.352 }, 00:14:38.352 { 00:14:38.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.352 "dma_device_type": 2 00:14:38.352 }, 00:14:38.352 { 00:14:38.352 "dma_device_id": "system", 00:14:38.352 "dma_device_type": 1 00:14:38.352 }, 00:14:38.352 { 00:14:38.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.352 "dma_device_type": 2 00:14:38.352 } 00:14:38.352 ], 00:14:38.352 "driver_specific": { 00:14:38.352 "raid": { 00:14:38.352 "uuid": "c9cf0545-50ce-43dd-86b9-0056afe501ab", 00:14:38.352 "strip_size_kb": 64, 00:14:38.352 "state": "online", 00:14:38.352 "raid_level": "raid0", 00:14:38.352 "superblock": true, 00:14:38.352 "num_base_bdevs": 2, 00:14:38.352 "num_base_bdevs_discovered": 2, 00:14:38.352 "num_base_bdevs_operational": 2, 00:14:38.352 "base_bdevs_list": [ 00:14:38.352 { 00:14:38.352 "name": "pt1", 00:14:38.352 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:38.352 "is_configured": true, 00:14:38.352 "data_offset": 2048, 00:14:38.352 "data_size": 63488 00:14:38.352 }, 00:14:38.352 { 00:14:38.352 "name": "pt2", 00:14:38.352 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:38.352 "is_configured": true, 00:14:38.352 "data_offset": 2048, 00:14:38.352 "data_size": 63488 00:14:38.352 } 00:14:38.352 ] 00:14:38.352 } 00:14:38.352 } 00:14:38.352 }' 00:14:38.352 04:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:38.352 04:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:14:38.352 pt2' 00:14:38.352 04:09:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:38.352 04:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:38.352 04:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:38.611 04:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:38.611 "name": "pt1", 00:14:38.611 "aliases": [ 00:14:38.611 "00000000-0000-0000-0000-000000000001" 00:14:38.611 ], 00:14:38.611 "product_name": "passthru", 00:14:38.611 "block_size": 512, 00:14:38.611 "num_blocks": 65536, 00:14:38.611 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:38.611 "assigned_rate_limits": { 00:14:38.611 "rw_ios_per_sec": 0, 00:14:38.611 "rw_mbytes_per_sec": 0, 00:14:38.611 "r_mbytes_per_sec": 0, 00:14:38.611 "w_mbytes_per_sec": 0 00:14:38.611 }, 00:14:38.611 "claimed": true, 00:14:38.611 "claim_type": "exclusive_write", 00:14:38.611 "zoned": false, 00:14:38.611 "supported_io_types": { 00:14:38.611 "read": true, 00:14:38.611 "write": true, 00:14:38.611 "unmap": true, 00:14:38.612 "flush": true, 00:14:38.612 "reset": true, 00:14:38.612 "nvme_admin": false, 00:14:38.612 "nvme_io": false, 00:14:38.612 "nvme_io_md": false, 00:14:38.612 "write_zeroes": true, 00:14:38.612 "zcopy": true, 00:14:38.612 "get_zone_info": false, 00:14:38.612 "zone_management": false, 00:14:38.612 "zone_append": false, 00:14:38.612 "compare": false, 00:14:38.612 "compare_and_write": false, 00:14:38.612 "abort": true, 00:14:38.612 "seek_hole": false, 00:14:38.612 "seek_data": false, 00:14:38.612 "copy": true, 00:14:38.612 "nvme_iov_md": false 00:14:38.612 }, 00:14:38.612 "memory_domains": [ 00:14:38.612 { 00:14:38.612 "dma_device_id": "system", 00:14:38.612 "dma_device_type": 1 00:14:38.612 }, 00:14:38.612 { 00:14:38.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.612 
"dma_device_type": 2 00:14:38.612 } 00:14:38.612 ], 00:14:38.612 "driver_specific": { 00:14:38.612 "passthru": { 00:14:38.612 "name": "pt1", 00:14:38.612 "base_bdev_name": "malloc1" 00:14:38.612 } 00:14:38.612 } 00:14:38.612 }' 00:14:38.612 04:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:38.612 04:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:38.871 04:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:38.871 04:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:38.871 04:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:38.871 04:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:38.871 04:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:38.871 04:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:38.871 04:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:38.871 04:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:38.871 04:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:38.871 04:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:39.130 04:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:39.130 04:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:39.130 04:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:39.130 04:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:39.130 "name": "pt2", 00:14:39.130 "aliases": [ 00:14:39.130 
"00000000-0000-0000-0000-000000000002" 00:14:39.130 ], 00:14:39.130 "product_name": "passthru", 00:14:39.130 "block_size": 512, 00:14:39.130 "num_blocks": 65536, 00:14:39.130 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:39.130 "assigned_rate_limits": { 00:14:39.130 "rw_ios_per_sec": 0, 00:14:39.130 "rw_mbytes_per_sec": 0, 00:14:39.130 "r_mbytes_per_sec": 0, 00:14:39.130 "w_mbytes_per_sec": 0 00:14:39.130 }, 00:14:39.130 "claimed": true, 00:14:39.130 "claim_type": "exclusive_write", 00:14:39.130 "zoned": false, 00:14:39.130 "supported_io_types": { 00:14:39.130 "read": true, 00:14:39.130 "write": true, 00:14:39.130 "unmap": true, 00:14:39.130 "flush": true, 00:14:39.130 "reset": true, 00:14:39.130 "nvme_admin": false, 00:14:39.130 "nvme_io": false, 00:14:39.130 "nvme_io_md": false, 00:14:39.130 "write_zeroes": true, 00:14:39.130 "zcopy": true, 00:14:39.130 "get_zone_info": false, 00:14:39.130 "zone_management": false, 00:14:39.130 "zone_append": false, 00:14:39.130 "compare": false, 00:14:39.130 "compare_and_write": false, 00:14:39.130 "abort": true, 00:14:39.130 "seek_hole": false, 00:14:39.130 "seek_data": false, 00:14:39.130 "copy": true, 00:14:39.130 "nvme_iov_md": false 00:14:39.130 }, 00:14:39.130 "memory_domains": [ 00:14:39.130 { 00:14:39.130 "dma_device_id": "system", 00:14:39.130 "dma_device_type": 1 00:14:39.130 }, 00:14:39.130 { 00:14:39.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.130 "dma_device_type": 2 00:14:39.130 } 00:14:39.130 ], 00:14:39.130 "driver_specific": { 00:14:39.130 "passthru": { 00:14:39.130 "name": "pt2", 00:14:39.130 "base_bdev_name": "malloc2" 00:14:39.130 } 00:14:39.130 } 00:14:39.130 }' 00:14:39.130 04:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:39.389 04:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:39.390 04:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:39.390 04:09:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:39.390 04:09:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:39.390 04:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:39.390 04:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:39.390 04:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:39.390 04:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:39.390 04:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:39.390 04:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:39.648 04:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:39.648 04:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:14:39.648 04:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:39.648 [2024-07-23 04:09:48.420936] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:39.908 04:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=c9cf0545-50ce-43dd-86b9-0056afe501ab 00:14:39.908 04:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z c9cf0545-50ce-43dd-86b9-0056afe501ab ']' 00:14:39.908 04:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:39.908 [2024-07-23 04:09:48.653255] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:39.908 [2024-07-23 04:09:48.653286] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from 
online to offline 00:14:39.908 [2024-07-23 04:09:48.653373] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:39.908 [2024-07-23 04:09:48.653429] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:39.908 [2024-07-23 04:09:48.653450] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040880 name raid_bdev1, state offline 00:14:39.908 04:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:39.908 04:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:14:40.168 04:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:14:40.168 04:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:14:40.168 04:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:14:40.168 04:09:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:14:40.427 04:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:14:40.427 04:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:40.686 04:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:40.686 04:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:40.945 04:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:14:40.945 
04:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:14:40.945 04:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:14:40.945 04:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:14:40.945 04:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:14:40.945 04:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:40.945 04:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:14:40.945 04:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:40.946 04:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:14:40.946 04:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:40.946 04:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:14:40.946 04:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:14:40.946 04:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n 
raid_bdev1 00:14:41.205 [2024-07-23 04:09:49.780269] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:41.205 [2024-07-23 04:09:49.782591] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:41.205 [2024-07-23 04:09:49.782667] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:41.205 [2024-07-23 04:09:49.782720] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:41.205 [2024-07-23 04:09:49.782743] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:41.205 [2024-07-23 04:09:49.782760] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040e80 name raid_bdev1, state configuring 00:14:41.205 request: 00:14:41.205 { 00:14:41.205 "name": "raid_bdev1", 00:14:41.205 "raid_level": "raid0", 00:14:41.205 "base_bdevs": [ 00:14:41.205 "malloc1", 00:14:41.205 "malloc2" 00:14:41.205 ], 00:14:41.205 "strip_size_kb": 64, 00:14:41.205 "superblock": false, 00:14:41.205 "method": "bdev_raid_create", 00:14:41.205 "req_id": 1 00:14:41.205 } 00:14:41.205 Got JSON-RPC error response 00:14:41.205 response: 00:14:41.205 { 00:14:41.205 "code": -17, 00:14:41.205 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:41.205 } 00:14:41.205 04:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:14:41.205 04:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:41.205 04:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:41.205 04:09:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:41.205 04:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:14:41.205 04:09:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:14:41.464 04:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:14:41.464 04:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:14:41.464 04:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:41.464 [2024-07-23 04:09:50.241421] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:41.464 [2024-07-23 04:09:50.241490] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:41.464 [2024-07-23 04:09:50.241518] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480 00:14:41.464 [2024-07-23 04:09:50.241536] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:41.464 [2024-07-23 04:09:50.244094] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:41.464 [2024-07-23 04:09:50.244133] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:41.464 [2024-07-23 04:09:50.244242] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:41.464 [2024-07-23 04:09:50.244326] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:41.464 pt1 00:14:41.723 04:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:14:41.723 04:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:41.723 04:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:41.723 04:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- 
# local raid_level=raid0 00:14:41.723 04:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:41.723 04:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:41.723 04:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:41.723 04:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:41.723 04:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:41.723 04:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:41.724 04:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:41.724 04:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.724 04:09:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:41.724 "name": "raid_bdev1", 00:14:41.724 "uuid": "c9cf0545-50ce-43dd-86b9-0056afe501ab", 00:14:41.724 "strip_size_kb": 64, 00:14:41.724 "state": "configuring", 00:14:41.724 "raid_level": "raid0", 00:14:41.724 "superblock": true, 00:14:41.724 "num_base_bdevs": 2, 00:14:41.724 "num_base_bdevs_discovered": 1, 00:14:41.724 "num_base_bdevs_operational": 2, 00:14:41.724 "base_bdevs_list": [ 00:14:41.724 { 00:14:41.724 "name": "pt1", 00:14:41.724 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:41.724 "is_configured": true, 00:14:41.724 "data_offset": 2048, 00:14:41.724 "data_size": 63488 00:14:41.724 }, 00:14:41.724 { 00:14:41.724 "name": null, 00:14:41.724 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:41.724 "is_configured": false, 00:14:41.724 "data_offset": 2048, 00:14:41.724 "data_size": 63488 00:14:41.724 } 00:14:41.724 ] 00:14:41.724 }' 00:14:41.724 04:09:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:41.724 04:09:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.292 04:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:14:42.292 04:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:14:42.292 04:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:14:42.292 04:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:42.551 [2024-07-23 04:09:51.224058] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:42.551 [2024-07-23 04:09:51.224124] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.551 [2024-07-23 04:09:51.224157] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041d80 00:14:42.551 [2024-07-23 04:09:51.224175] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.551 [2024-07-23 04:09:51.224756] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:42.551 [2024-07-23 04:09:51.224786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:42.551 [2024-07-23 04:09:51.224880] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:42.551 [2024-07-23 04:09:51.224917] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:42.551 [2024-07-23 04:09:51.225081] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041a80 00:14:42.551 [2024-07-23 04:09:51.225099] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:42.551 [2024-07-23 04:09:51.225405] 
bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:14:42.551 [2024-07-23 04:09:51.225613] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041a80 00:14:42.551 [2024-07-23 04:09:51.225626] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041a80 00:14:42.551 [2024-07-23 04:09:51.225795] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.551 pt2 00:14:42.551 04:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:14:42.551 04:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:14:42.551 04:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:42.551 04:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:42.551 04:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:42.551 04:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:42.551 04:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:42.551 04:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:42.551 04:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:42.551 04:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:42.551 04:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:42.551 04:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:42.551 04:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:14:42.551 04:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.810 04:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:42.810 "name": "raid_bdev1", 00:14:42.810 "uuid": "c9cf0545-50ce-43dd-86b9-0056afe501ab", 00:14:42.810 "strip_size_kb": 64, 00:14:42.810 "state": "online", 00:14:42.810 "raid_level": "raid0", 00:14:42.810 "superblock": true, 00:14:42.810 "num_base_bdevs": 2, 00:14:42.810 "num_base_bdevs_discovered": 2, 00:14:42.810 "num_base_bdevs_operational": 2, 00:14:42.811 "base_bdevs_list": [ 00:14:42.811 { 00:14:42.811 "name": "pt1", 00:14:42.811 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:42.811 "is_configured": true, 00:14:42.811 "data_offset": 2048, 00:14:42.811 "data_size": 63488 00:14:42.811 }, 00:14:42.811 { 00:14:42.811 "name": "pt2", 00:14:42.811 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:42.811 "is_configured": true, 00:14:42.811 "data_offset": 2048, 00:14:42.811 "data_size": 63488 00:14:42.811 } 00:14:42.811 ] 00:14:42.811 }' 00:14:42.811 04:09:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:42.811 04:09:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.379 04:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:14:43.379 04:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:14:43.379 04:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:43.379 04:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:43.379 04:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:43.379 04:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:43.379 04:09:52 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:43.379 04:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:43.638 [2024-07-23 04:09:52.162907] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:43.638 04:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:43.638 "name": "raid_bdev1", 00:14:43.638 "aliases": [ 00:14:43.638 "c9cf0545-50ce-43dd-86b9-0056afe501ab" 00:14:43.638 ], 00:14:43.638 "product_name": "Raid Volume", 00:14:43.638 "block_size": 512, 00:14:43.638 "num_blocks": 126976, 00:14:43.638 "uuid": "c9cf0545-50ce-43dd-86b9-0056afe501ab", 00:14:43.638 "assigned_rate_limits": { 00:14:43.638 "rw_ios_per_sec": 0, 00:14:43.638 "rw_mbytes_per_sec": 0, 00:14:43.638 "r_mbytes_per_sec": 0, 00:14:43.638 "w_mbytes_per_sec": 0 00:14:43.638 }, 00:14:43.638 "claimed": false, 00:14:43.638 "zoned": false, 00:14:43.638 "supported_io_types": { 00:14:43.638 "read": true, 00:14:43.638 "write": true, 00:14:43.638 "unmap": true, 00:14:43.638 "flush": true, 00:14:43.638 "reset": true, 00:14:43.638 "nvme_admin": false, 00:14:43.638 "nvme_io": false, 00:14:43.638 "nvme_io_md": false, 00:14:43.638 "write_zeroes": true, 00:14:43.638 "zcopy": false, 00:14:43.638 "get_zone_info": false, 00:14:43.638 "zone_management": false, 00:14:43.638 "zone_append": false, 00:14:43.638 "compare": false, 00:14:43.638 "compare_and_write": false, 00:14:43.638 "abort": false, 00:14:43.638 "seek_hole": false, 00:14:43.638 "seek_data": false, 00:14:43.638 "copy": false, 00:14:43.638 "nvme_iov_md": false 00:14:43.638 }, 00:14:43.638 "memory_domains": [ 00:14:43.638 { 00:14:43.638 "dma_device_id": "system", 00:14:43.638 "dma_device_type": 1 00:14:43.638 }, 00:14:43.638 { 00:14:43.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.638 "dma_device_type": 2 00:14:43.638 }, 00:14:43.639 { 00:14:43.639 
"dma_device_id": "system", 00:14:43.639 "dma_device_type": 1 00:14:43.639 }, 00:14:43.639 { 00:14:43.639 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.639 "dma_device_type": 2 00:14:43.639 } 00:14:43.639 ], 00:14:43.639 "driver_specific": { 00:14:43.639 "raid": { 00:14:43.639 "uuid": "c9cf0545-50ce-43dd-86b9-0056afe501ab", 00:14:43.639 "strip_size_kb": 64, 00:14:43.639 "state": "online", 00:14:43.639 "raid_level": "raid0", 00:14:43.639 "superblock": true, 00:14:43.639 "num_base_bdevs": 2, 00:14:43.639 "num_base_bdevs_discovered": 2, 00:14:43.639 "num_base_bdevs_operational": 2, 00:14:43.639 "base_bdevs_list": [ 00:14:43.639 { 00:14:43.639 "name": "pt1", 00:14:43.639 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:43.639 "is_configured": true, 00:14:43.639 "data_offset": 2048, 00:14:43.639 "data_size": 63488 00:14:43.639 }, 00:14:43.639 { 00:14:43.639 "name": "pt2", 00:14:43.639 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:43.639 "is_configured": true, 00:14:43.639 "data_offset": 2048, 00:14:43.639 "data_size": 63488 00:14:43.639 } 00:14:43.639 ] 00:14:43.639 } 00:14:43.639 } 00:14:43.639 }' 00:14:43.639 04:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:43.639 04:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:14:43.639 pt2' 00:14:43.639 04:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:43.639 04:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:43.639 04:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:43.898 04:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:43.898 "name": "pt1", 00:14:43.898 "aliases": [ 00:14:43.898 
"00000000-0000-0000-0000-000000000001" 00:14:43.898 ], 00:14:43.898 "product_name": "passthru", 00:14:43.898 "block_size": 512, 00:14:43.898 "num_blocks": 65536, 00:14:43.898 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:43.898 "assigned_rate_limits": { 00:14:43.898 "rw_ios_per_sec": 0, 00:14:43.898 "rw_mbytes_per_sec": 0, 00:14:43.898 "r_mbytes_per_sec": 0, 00:14:43.898 "w_mbytes_per_sec": 0 00:14:43.898 }, 00:14:43.898 "claimed": true, 00:14:43.898 "claim_type": "exclusive_write", 00:14:43.898 "zoned": false, 00:14:43.898 "supported_io_types": { 00:14:43.898 "read": true, 00:14:43.898 "write": true, 00:14:43.898 "unmap": true, 00:14:43.898 "flush": true, 00:14:43.898 "reset": true, 00:14:43.898 "nvme_admin": false, 00:14:43.898 "nvme_io": false, 00:14:43.898 "nvme_io_md": false, 00:14:43.898 "write_zeroes": true, 00:14:43.898 "zcopy": true, 00:14:43.898 "get_zone_info": false, 00:14:43.898 "zone_management": false, 00:14:43.898 "zone_append": false, 00:14:43.898 "compare": false, 00:14:43.898 "compare_and_write": false, 00:14:43.898 "abort": true, 00:14:43.898 "seek_hole": false, 00:14:43.898 "seek_data": false, 00:14:43.898 "copy": true, 00:14:43.898 "nvme_iov_md": false 00:14:43.898 }, 00:14:43.898 "memory_domains": [ 00:14:43.898 { 00:14:43.898 "dma_device_id": "system", 00:14:43.898 "dma_device_type": 1 00:14:43.898 }, 00:14:43.898 { 00:14:43.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.898 "dma_device_type": 2 00:14:43.898 } 00:14:43.898 ], 00:14:43.898 "driver_specific": { 00:14:43.898 "passthru": { 00:14:43.898 "name": "pt1", 00:14:43.898 "base_bdev_name": "malloc1" 00:14:43.898 } 00:14:43.898 } 00:14:43.898 }' 00:14:43.898 04:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:43.898 04:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:43.898 04:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:43.898 04:09:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:43.898 04:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:43.898 04:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:43.898 04:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:43.898 04:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:43.898 04:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:43.898 04:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:44.158 04:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:44.158 04:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:44.158 04:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:44.158 04:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:44.158 04:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:44.417 04:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:44.417 "name": "pt2", 00:14:44.417 "aliases": [ 00:14:44.417 "00000000-0000-0000-0000-000000000002" 00:14:44.417 ], 00:14:44.417 "product_name": "passthru", 00:14:44.417 "block_size": 512, 00:14:44.417 "num_blocks": 65536, 00:14:44.417 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:44.417 "assigned_rate_limits": { 00:14:44.417 "rw_ios_per_sec": 0, 00:14:44.417 "rw_mbytes_per_sec": 0, 00:14:44.417 "r_mbytes_per_sec": 0, 00:14:44.417 "w_mbytes_per_sec": 0 00:14:44.417 }, 00:14:44.417 "claimed": true, 00:14:44.417 "claim_type": "exclusive_write", 00:14:44.417 "zoned": false, 00:14:44.417 "supported_io_types": { 
00:14:44.417 "read": true, 00:14:44.417 "write": true, 00:14:44.417 "unmap": true, 00:14:44.417 "flush": true, 00:14:44.417 "reset": true, 00:14:44.417 "nvme_admin": false, 00:14:44.417 "nvme_io": false, 00:14:44.417 "nvme_io_md": false, 00:14:44.417 "write_zeroes": true, 00:14:44.417 "zcopy": true, 00:14:44.417 "get_zone_info": false, 00:14:44.417 "zone_management": false, 00:14:44.417 "zone_append": false, 00:14:44.417 "compare": false, 00:14:44.417 "compare_and_write": false, 00:14:44.417 "abort": true, 00:14:44.417 "seek_hole": false, 00:14:44.417 "seek_data": false, 00:14:44.417 "copy": true, 00:14:44.417 "nvme_iov_md": false 00:14:44.417 }, 00:14:44.417 "memory_domains": [ 00:14:44.417 { 00:14:44.417 "dma_device_id": "system", 00:14:44.417 "dma_device_type": 1 00:14:44.417 }, 00:14:44.417 { 00:14:44.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.417 "dma_device_type": 2 00:14:44.417 } 00:14:44.417 ], 00:14:44.417 "driver_specific": { 00:14:44.417 "passthru": { 00:14:44.417 "name": "pt2", 00:14:44.417 "base_bdev_name": "malloc2" 00:14:44.417 } 00:14:44.417 } 00:14:44.417 }' 00:14:44.417 04:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:44.417 04:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:44.417 04:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:44.417 04:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:44.417 04:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:44.417 04:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:44.417 04:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:44.417 04:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:44.417 04:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 
00:14:44.417 04:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:44.417 04:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:44.675 04:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:44.675 04:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:44.675 04:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:14:44.675 [2024-07-23 04:09:53.350445] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:44.675 04:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' c9cf0545-50ce-43dd-86b9-0056afe501ab '!=' c9cf0545-50ce-43dd-86b9-0056afe501ab ']' 00:14:44.675 04:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:14:44.675 04:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:44.675 04:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:44.675 04:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2618502 00:14:44.675 04:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 2618502 ']' 00:14:44.675 04:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 2618502 00:14:44.675 04:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:14:44.675 04:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:44.675 04:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2618502 00:14:44.675 04:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:44.675 04:09:53 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:44.675 04:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2618502' 00:14:44.675 killing process with pid 2618502 00:14:44.675 04:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 2618502 00:14:44.675 [2024-07-23 04:09:53.422229] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:44.675 [2024-07-23 04:09:53.422326] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:44.675 [2024-07-23 04:09:53.422384] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:44.675 04:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 2618502 00:14:44.675 [2024-07-23 04:09:53.422403] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041a80 name raid_bdev1, state offline 00:14:44.933 [2024-07-23 04:09:53.621958] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:46.868 04:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:14:46.868 00:14:46.868 real 0m11.565s 00:14:46.868 user 0m18.917s 00:14:46.868 sys 0m1.992s 00:14:46.868 04:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:46.868 04:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.868 ************************************ 00:14:46.868 END TEST raid_superblock_test 00:14:46.868 ************************************ 00:14:46.868 04:09:55 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:46.868 04:09:55 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:14:46.868 04:09:55 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:46.868 04:09:55 bdev_raid -- common/autotest_common.sh@1105 
-- # xtrace_disable 00:14:46.868 04:09:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:46.868 ************************************ 00:14:46.868 START TEST raid_read_error_test 00:14:46.868 ************************************ 00:14:46.868 04:09:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 2 read 00:14:46.868 04:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:14:46.868 04:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:14:46.868 04:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:14:46.868 04:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:14:46.868 04:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:46.868 04:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:14:46.868 04:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:46.868 04:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:46.868 04:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:14:46.868 04:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:46.868 04:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:46.868 04:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:46.868 04:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:14:46.868 04:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:14:46.868 04:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:14:46.868 04:09:55 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@794 -- # local create_arg 00:14:46.868 04:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:14:46.868 04:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:14:46.868 04:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:14:46.868 04:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:14:46.868 04:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:14:46.868 04:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:14:46.868 04:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.QdOkJxYcUt 00:14:46.868 04:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:46.868 04:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2620706 00:14:46.868 04:09:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2620706 /var/tmp/spdk-raid.sock 00:14:46.868 04:09:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 2620706 ']' 00:14:46.868 04:09:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:46.868 04:09:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:46.868 04:09:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:46.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:14:46.868 04:09:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:46.868 04:09:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.868 [2024-07-23 04:09:55.424994] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:14:46.869 [2024-07-23 04:09:55.425100] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2620706 ] 00:14:46.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:46.869 EAL: Requested device 0000:3d:01.0 cannot be used 00:14:46.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:46.869 EAL: Requested device 0000:3d:01.1 cannot be used 00:14:46.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:46.869 EAL: Requested device 0000:3d:01.2 cannot be used 00:14:46.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:46.869 EAL: Requested device 0000:3d:01.3 cannot be used 00:14:46.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:46.869 EAL: Requested device 0000:3d:01.4 cannot be used 00:14:46.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:46.869 EAL: Requested device 0000:3d:01.5 cannot be used 00:14:46.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:46.869 EAL: Requested device 0000:3d:01.6 cannot be used 00:14:46.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:46.869 EAL: Requested device 0000:3d:01.7 cannot be used 00:14:46.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:46.869 EAL: Requested device 0000:3d:02.0 cannot be used 00:14:46.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:46.869 EAL: Requested 
device 0000:3d:02.1 cannot be used 00:14:46.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:46.869 EAL: Requested device 0000:3d:02.2 cannot be used 00:14:46.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:46.869 EAL: Requested device 0000:3d:02.3 cannot be used 00:14:46.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:46.869 EAL: Requested device 0000:3d:02.4 cannot be used 00:14:46.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:46.869 EAL: Requested device 0000:3d:02.5 cannot be used 00:14:46.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:46.869 EAL: Requested device 0000:3d:02.6 cannot be used 00:14:46.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:46.869 EAL: Requested device 0000:3d:02.7 cannot be used 00:14:46.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:46.869 EAL: Requested device 0000:3f:01.0 cannot be used 00:14:46.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:46.869 EAL: Requested device 0000:3f:01.1 cannot be used 00:14:46.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:46.869 EAL: Requested device 0000:3f:01.2 cannot be used 00:14:46.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:46.869 EAL: Requested device 0000:3f:01.3 cannot be used 00:14:46.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:46.869 EAL: Requested device 0000:3f:01.4 cannot be used 00:14:46.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:46.869 EAL: Requested device 0000:3f:01.5 cannot be used 00:14:46.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:46.869 EAL: Requested device 0000:3f:01.6 cannot be used 00:14:46.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:46.869 EAL: Requested device 0000:3f:01.7 
cannot be used 00:14:46.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:46.869 EAL: Requested device 0000:3f:02.0 cannot be used 00:14:46.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:46.869 EAL: Requested device 0000:3f:02.1 cannot be used 00:14:46.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:46.869 EAL: Requested device 0000:3f:02.2 cannot be used 00:14:46.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:46.869 EAL: Requested device 0000:3f:02.3 cannot be used 00:14:46.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:46.869 EAL: Requested device 0000:3f:02.4 cannot be used 00:14:46.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:46.869 EAL: Requested device 0000:3f:02.5 cannot be used 00:14:46.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:46.869 EAL: Requested device 0000:3f:02.6 cannot be used 00:14:46.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:46.869 EAL: Requested device 0000:3f:02.7 cannot be used 00:14:46.869 [2024-07-23 04:09:55.621564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.435 [2024-07-23 04:09:55.913552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.694 [2024-07-23 04:09:56.254634] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:47.694 [2024-07-23 04:09:56.254675] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:47.694 04:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:47.694 04:09:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:14:47.694 04:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:47.694 04:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:47.952 BaseBdev1_malloc 00:14:47.952 04:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:14:48.211 true 00:14:48.211 04:09:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:48.469 [2024-07-23 04:09:57.161897] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:48.469 [2024-07-23 04:09:57.161955] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:48.469 [2024-07-23 04:09:57.161982] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:14:48.469 [2024-07-23 04:09:57.162005] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:48.469 [2024-07-23 04:09:57.164804] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:48.469 [2024-07-23 04:09:57.164842] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:48.469 BaseBdev1 00:14:48.469 04:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:48.469 04:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:48.728 BaseBdev2_malloc 00:14:48.728 04:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:14:48.987 true 00:14:48.987 04:09:57 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:49.247 [2024-07-23 04:09:57.898069] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:49.247 [2024-07-23 04:09:57.898127] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:49.247 [2024-07-23 04:09:57.898165] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:14:49.247 [2024-07-23 04:09:57.898187] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.247 [2024-07-23 04:09:57.900939] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.247 [2024-07-23 04:09:57.900977] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:49.247 BaseBdev2 00:14:49.247 04:09:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:14:49.506 [2024-07-23 04:09:58.122723] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:49.506 [2024-07-23 04:09:58.125057] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:49.506 [2024-07-23 04:09:58.125318] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000040e80 00:14:49.506 [2024-07-23 04:09:58.125341] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:49.506 [2024-07-23 04:09:58.125676] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:14:49.506 [2024-07-23 04:09:58.125930] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000040e80 00:14:49.506 [2024-07-23 
04:09:58.125948] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000040e80 00:14:49.506 [2024-07-23 04:09:58.126159] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:49.506 04:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:49.506 04:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:49.506 04:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:49.506 04:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:49.506 04:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:49.506 04:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:49.506 04:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:49.506 04:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:49.506 04:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:49.506 04:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:49.506 04:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:49.506 04:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.766 04:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:49.766 "name": "raid_bdev1", 00:14:49.766 "uuid": "536d0f84-0401-474b-b7e9-baf6a68ffa59", 00:14:49.766 "strip_size_kb": 64, 00:14:49.766 "state": "online", 00:14:49.766 "raid_level": "raid0", 00:14:49.766 
"superblock": true, 00:14:49.766 "num_base_bdevs": 2, 00:14:49.766 "num_base_bdevs_discovered": 2, 00:14:49.766 "num_base_bdevs_operational": 2, 00:14:49.766 "base_bdevs_list": [ 00:14:49.766 { 00:14:49.766 "name": "BaseBdev1", 00:14:49.766 "uuid": "3802476b-3c57-518d-8931-53f68edf74b1", 00:14:49.766 "is_configured": true, 00:14:49.766 "data_offset": 2048, 00:14:49.766 "data_size": 63488 00:14:49.766 }, 00:14:49.766 { 00:14:49.766 "name": "BaseBdev2", 00:14:49.766 "uuid": "8bb3e179-dcdc-5089-84f7-6ed487e6d892", 00:14:49.766 "is_configured": true, 00:14:49.766 "data_offset": 2048, 00:14:49.766 "data_size": 63488 00:14:49.766 } 00:14:49.766 ] 00:14:49.766 }' 00:14:49.766 04:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:49.766 04:09:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.334 04:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:14:50.334 04:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:14:50.334 [2024-07-23 04:09:59.031177] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:14:51.273 04:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:51.532 04:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:14:51.532 04:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:14:51.532 04:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:14:51.532 04:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:51.532 04:10:00 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:51.532 04:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:51.532 04:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:51.532 04:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:51.532 04:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:51.532 04:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:51.532 04:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:51.532 04:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:51.532 04:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:51.532 04:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:51.532 04:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.792 04:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:51.792 "name": "raid_bdev1", 00:14:51.792 "uuid": "536d0f84-0401-474b-b7e9-baf6a68ffa59", 00:14:51.792 "strip_size_kb": 64, 00:14:51.792 "state": "online", 00:14:51.792 "raid_level": "raid0", 00:14:51.792 "superblock": true, 00:14:51.792 "num_base_bdevs": 2, 00:14:51.792 "num_base_bdevs_discovered": 2, 00:14:51.792 "num_base_bdevs_operational": 2, 00:14:51.792 "base_bdevs_list": [ 00:14:51.792 { 00:14:51.792 "name": "BaseBdev1", 00:14:51.792 "uuid": "3802476b-3c57-518d-8931-53f68edf74b1", 00:14:51.792 "is_configured": true, 00:14:51.792 "data_offset": 2048, 00:14:51.792 "data_size": 63488 00:14:51.792 }, 
00:14:51.792 { 00:14:51.792 "name": "BaseBdev2", 00:14:51.792 "uuid": "8bb3e179-dcdc-5089-84f7-6ed487e6d892", 00:14:51.792 "is_configured": true, 00:14:51.792 "data_offset": 2048, 00:14:51.792 "data_size": 63488 00:14:51.792 } 00:14:51.792 ] 00:14:51.792 }' 00:14:51.792 04:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:51.792 04:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.361 04:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:52.621 [2024-07-23 04:10:01.146703] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:52.621 [2024-07-23 04:10:01.146743] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:52.621 [2024-07-23 04:10:01.150037] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:52.621 [2024-07-23 04:10:01.150088] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:52.621 [2024-07-23 04:10:01.150132] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:52.621 [2024-07-23 04:10:01.150164] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040e80 name raid_bdev1, state offline 00:14:52.621 0 00:14:52.621 04:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2620706 00:14:52.621 04:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 2620706 ']' 00:14:52.621 04:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 2620706 00:14:52.621 04:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:14:52.621 04:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:52.621 
04:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2620706 00:14:52.621 04:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:52.621 04:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:52.621 04:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2620706' 00:14:52.621 killing process with pid 2620706 00:14:52.621 04:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 2620706 00:14:52.621 [2024-07-23 04:10:01.223910] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:52.621 04:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 2620706 00:14:52.621 [2024-07-23 04:10:01.330455] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:54.527 04:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.QdOkJxYcUt 00:14:54.527 04:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:14:54.527 04:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:14:54.527 04:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.47 00:14:54.527 04:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:14:54.527 04:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:54.527 04:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:54.527 04:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.47 != \0\.\0\0 ]] 00:14:54.527 00:14:54.527 real 0m7.843s 00:14:54.527 user 0m10.945s 00:14:54.527 sys 0m1.163s 00:14:54.527 04:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:54.527 04:10:03 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.527 ************************************ 00:14:54.527 END TEST raid_read_error_test 00:14:54.527 ************************************ 00:14:54.527 04:10:03 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:54.527 04:10:03 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:14:54.527 04:10:03 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:54.527 04:10:03 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:54.527 04:10:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:54.527 ************************************ 00:14:54.527 START TEST raid_write_error_test 00:14:54.527 ************************************ 00:14:54.527 04:10:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 2 write 00:14:54.527 04:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:14:54.527 04:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:14:54.527 04:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:14:54.527 04:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:14:54.527 04:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:54.527 04:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:14:54.527 04:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:54.527 04:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:54.527 04:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:14:54.527 04:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:54.527 04:10:03 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:54.527 04:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:54.527 04:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:14:54.527 04:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:14:54.527 04:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:14:54.527 04:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:14:54.527 04:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:14:54.527 04:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:14:54.527 04:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:14:54.527 04:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:14:54.527 04:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:14:54.527 04:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:14:54.527 04:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.I43L2DYwfU 00:14:54.527 04:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2622125 00:14:54.527 04:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2622125 /var/tmp/spdk-raid.sock 00:14:54.527 04:10:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:54.527 04:10:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 2622125 ']' 00:14:54.527 04:10:03 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:54.527 04:10:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:54.527 04:10:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:54.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:54.527 04:10:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:54.527 04:10:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.786 [2024-07-23 04:10:03.376078] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:14:54.786 [2024-07-23 04:10:03.376207] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2622125 ] 00:14:54.786 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:54.786 EAL: Requested device 0000:3d:01.0 cannot be used 00:14:54.786 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:54.787 EAL: Requested device 0000:3d:01.1 cannot be used 00:14:54.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:54.787 EAL: Requested device 0000:3d:01.2 cannot be used 00:14:54.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:54.787 EAL: Requested device 0000:3d:01.3 cannot be used 00:14:54.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:54.787 EAL: Requested device 0000:3d:01.4 cannot be used 00:14:54.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:54.787 EAL: Requested device 0000:3d:01.5 cannot be used 
00:14:54.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:54.787 EAL: Requested device 0000:3d:01.6 cannot be used 00:14:54.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:54.787 EAL: Requested device 0000:3d:01.7 cannot be used 00:14:54.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:54.787 EAL: Requested device 0000:3d:02.0 cannot be used 00:14:54.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:54.787 EAL: Requested device 0000:3d:02.1 cannot be used 00:14:54.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:54.787 EAL: Requested device 0000:3d:02.2 cannot be used 00:14:54.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:54.787 EAL: Requested device 0000:3d:02.3 cannot be used 00:14:54.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:54.787 EAL: Requested device 0000:3d:02.4 cannot be used 00:14:54.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:54.787 EAL: Requested device 0000:3d:02.5 cannot be used 00:14:54.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:54.787 EAL: Requested device 0000:3d:02.6 cannot be used 00:14:54.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:54.787 EAL: Requested device 0000:3d:02.7 cannot be used 00:14:54.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:54.787 EAL: Requested device 0000:3f:01.0 cannot be used 00:14:54.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:54.787 EAL: Requested device 0000:3f:01.1 cannot be used 00:14:54.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:54.787 EAL: Requested device 0000:3f:01.2 cannot be used 00:14:54.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:54.787 EAL: Requested device 0000:3f:01.3 cannot be used 00:14:54.787 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:54.787 EAL: Requested device 0000:3f:01.4 cannot be used 00:14:54.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:54.787 EAL: Requested device 0000:3f:01.5 cannot be used 00:14:54.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:54.787 EAL: Requested device 0000:3f:01.6 cannot be used 00:14:54.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:54.787 EAL: Requested device 0000:3f:01.7 cannot be used 00:14:54.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:54.787 EAL: Requested device 0000:3f:02.0 cannot be used 00:14:54.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:54.787 EAL: Requested device 0000:3f:02.1 cannot be used 00:14:54.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:54.787 EAL: Requested device 0000:3f:02.2 cannot be used 00:14:54.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:54.787 EAL: Requested device 0000:3f:02.3 cannot be used 00:14:54.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:54.787 EAL: Requested device 0000:3f:02.4 cannot be used 00:14:54.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:54.787 EAL: Requested device 0000:3f:02.5 cannot be used 00:14:54.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:54.787 EAL: Requested device 0000:3f:02.6 cannot be used 00:14:54.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:54.787 EAL: Requested device 0000:3f:02.7 cannot be used 00:14:55.046 [2024-07-23 04:10:03.602730] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.305 [2024-07-23 04:10:03.862556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.565 [2024-07-23 04:10:04.203925] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: 
raid_bdev_get_ctx_size 00:14:55.565 [2024-07-23 04:10:04.203960] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:55.825 04:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:55.825 04:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:14:55.825 04:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:55.825 04:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:56.084 BaseBdev1_malloc 00:14:56.084 04:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:14:56.343 true 00:14:56.343 04:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:56.343 [2024-07-23 04:10:05.116306] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:56.343 [2024-07-23 04:10:05.116369] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.343 [2024-07-23 04:10:05.116400] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:14:56.343 [2024-07-23 04:10:05.116422] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.343 [2024-07-23 04:10:05.119239] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.343 [2024-07-23 04:10:05.119277] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:56.343 BaseBdev1 00:14:56.603 04:10:05 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:56.603 04:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:56.862 BaseBdev2_malloc 00:14:56.862 04:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:14:56.862 true 00:14:56.862 04:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:57.121 [2024-07-23 04:10:05.836374] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:57.121 [2024-07-23 04:10:05.836432] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:57.121 [2024-07-23 04:10:05.836458] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:14:57.121 [2024-07-23 04:10:05.836479] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:57.121 [2024-07-23 04:10:05.839290] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:57.121 [2024-07-23 04:10:05.839327] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:57.121 BaseBdev2 00:14:57.121 04:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:14:57.381 [2024-07-23 04:10:06.061036] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:57.381 [2024-07-23 04:10:06.063335] bdev_raid.c:3288:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:14:57.381 [2024-07-23 04:10:06.063580] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000040e80 00:14:57.381 [2024-07-23 04:10:06.063602] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:57.382 [2024-07-23 04:10:06.063929] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:14:57.382 [2024-07-23 04:10:06.064185] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000040e80 00:14:57.382 [2024-07-23 04:10:06.064204] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000040e80 00:14:57.382 [2024-07-23 04:10:06.064403] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:57.382 04:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:57.382 04:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:57.382 04:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:57.382 04:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:57.382 04:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:57.382 04:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:57.382 04:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:57.382 04:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:57.382 04:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:57.382 04:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:57.382 04:10:06 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:57.382 04:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.641 04:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:57.641 "name": "raid_bdev1", 00:14:57.641 "uuid": "75a25191-73a7-44a4-b332-8d7a3ffc3fed", 00:14:57.641 "strip_size_kb": 64, 00:14:57.641 "state": "online", 00:14:57.641 "raid_level": "raid0", 00:14:57.641 "superblock": true, 00:14:57.641 "num_base_bdevs": 2, 00:14:57.641 "num_base_bdevs_discovered": 2, 00:14:57.641 "num_base_bdevs_operational": 2, 00:14:57.641 "base_bdevs_list": [ 00:14:57.641 { 00:14:57.641 "name": "BaseBdev1", 00:14:57.641 "uuid": "b4706b47-37e4-5168-8838-a8ec6595d2ab", 00:14:57.641 "is_configured": true, 00:14:57.641 "data_offset": 2048, 00:14:57.641 "data_size": 63488 00:14:57.641 }, 00:14:57.641 { 00:14:57.641 "name": "BaseBdev2", 00:14:57.641 "uuid": "3f2328c4-ba01-510b-91ec-7ddc8cc9cf37", 00:14:57.641 "is_configured": true, 00:14:57.641 "data_offset": 2048, 00:14:57.641 "data_size": 63488 00:14:57.641 } 00:14:57.641 ] 00:14:57.641 }' 00:14:57.641 04:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:57.641 04:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.209 04:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:14:58.209 04:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:14:58.209 [2024-07-23 04:10:06.981595] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:14:59.148 04:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:59.407 04:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:14:59.407 04:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:14:59.407 04:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:14:59.407 04:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:59.407 04:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:59.408 04:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:59.408 04:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:59.408 04:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:59.408 04:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:59.408 04:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:59.408 04:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:59.408 04:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:59.408 04:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:59.408 04:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:59.408 04:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.667 04:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 
-- # raid_bdev_info='{ 00:14:59.667 "name": "raid_bdev1", 00:14:59.667 "uuid": "75a25191-73a7-44a4-b332-8d7a3ffc3fed", 00:14:59.667 "strip_size_kb": 64, 00:14:59.667 "state": "online", 00:14:59.667 "raid_level": "raid0", 00:14:59.667 "superblock": true, 00:14:59.667 "num_base_bdevs": 2, 00:14:59.667 "num_base_bdevs_discovered": 2, 00:14:59.667 "num_base_bdevs_operational": 2, 00:14:59.667 "base_bdevs_list": [ 00:14:59.667 { 00:14:59.667 "name": "BaseBdev1", 00:14:59.667 "uuid": "b4706b47-37e4-5168-8838-a8ec6595d2ab", 00:14:59.667 "is_configured": true, 00:14:59.667 "data_offset": 2048, 00:14:59.667 "data_size": 63488 00:14:59.667 }, 00:14:59.667 { 00:14:59.667 "name": "BaseBdev2", 00:14:59.667 "uuid": "3f2328c4-ba01-510b-91ec-7ddc8cc9cf37", 00:14:59.667 "is_configured": true, 00:14:59.667 "data_offset": 2048, 00:14:59.667 "data_size": 63488 00:14:59.667 } 00:14:59.667 ] 00:14:59.667 }' 00:14:59.667 04:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:59.667 04:10:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.235 04:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:00.495 [2024-07-23 04:10:09.080383] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:00.495 [2024-07-23 04:10:09.080430] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:00.495 [2024-07-23 04:10:09.083691] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:00.495 [2024-07-23 04:10:09.083740] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:00.495 [2024-07-23 04:10:09.083781] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:00.495 [2024-07-23 04:10:09.083800] bdev_raid.c: 
378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040e80 name raid_bdev1, state offline 00:15:00.495 0 00:15:00.495 04:10:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2622125 00:15:00.495 04:10:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 2622125 ']' 00:15:00.495 04:10:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 2622125 00:15:00.495 04:10:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:15:00.495 04:10:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:00.495 04:10:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2622125 00:15:00.495 04:10:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:00.495 04:10:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:00.495 04:10:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2622125' 00:15:00.495 killing process with pid 2622125 00:15:00.495 04:10:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 2622125 00:15:00.495 [2024-07-23 04:10:09.154965] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:00.495 04:10:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 2622125 00:15:00.495 [2024-07-23 04:10:09.258125] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:02.401 04:10:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.I43L2DYwfU 00:15:02.401 04:10:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:15:02.401 04:10:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:15:02.401 04:10:11 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@843 -- # fail_per_s=0.48 00:15:02.401 04:10:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:15:02.401 04:10:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:02.401 04:10:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:02.401 04:10:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.48 != \0\.\0\0 ]] 00:15:02.401 00:15:02.401 real 0m7.820s 00:15:02.401 user 0m10.849s 00:15:02.401 sys 0m1.196s 00:15:02.401 04:10:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:02.401 04:10:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.401 ************************************ 00:15:02.401 END TEST raid_write_error_test 00:15:02.401 ************************************ 00:15:02.401 04:10:11 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:02.401 04:10:11 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:15:02.401 04:10:11 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:15:02.401 04:10:11 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:02.401 04:10:11 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:02.401 04:10:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:02.401 ************************************ 00:15:02.401 START TEST raid_state_function_test 00:15:02.401 ************************************ 00:15:02.401 04:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 2 false 00:15:02.401 04:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:15:02.401 04:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:15:02.401 04:10:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:15:02.401 04:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:02.401 04:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:02.401 04:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:02.401 04:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:15:02.401 04:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:02.401 04:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:02.401 04:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:15:02.401 04:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:02.401 04:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:02.401 04:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:02.401 04:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:02.401 04:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:02.401 04:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:02.401 04:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:02.401 04:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:02.401 04:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:15:02.401 04:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:15:02.401 04:10:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:15:02.401 04:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:15:02.401 04:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:15:02.402 04:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2623543 00:15:02.402 04:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2623543' 00:15:02.402 Process raid pid: 2623543 00:15:02.402 04:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:02.402 04:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2623543 /var/tmp/spdk-raid.sock 00:15:02.402 04:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 2623543 ']' 00:15:02.402 04:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:02.402 04:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:02.402 04:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:02.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:02.402 04:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:02.402 04:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.664 [2024-07-23 04:10:11.269586] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:15:02.664 [2024-07-23 04:10:11.269702] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:02.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:02.664 EAL: Requested device 0000:3d:01.0 cannot be used 00:15:02.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:02.664 EAL: Requested device 0000:3d:01.1 cannot be used 00:15:02.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:02.664 EAL: Requested device 0000:3d:01.2 cannot be used 00:15:02.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:02.664 EAL: Requested device 0000:3d:01.3 cannot be used 00:15:02.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:02.664 EAL: Requested device 0000:3d:01.4 cannot be used 00:15:02.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:02.664 EAL: Requested device 0000:3d:01.5 cannot be used 00:15:02.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:02.664 EAL: Requested device 0000:3d:01.6 cannot be used 00:15:02.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:02.664 EAL: Requested device 0000:3d:01.7 cannot be used 00:15:02.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:02.664 EAL: Requested device 0000:3d:02.0 cannot be used 00:15:02.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:02.664 EAL: Requested device 0000:3d:02.1 cannot be used 00:15:02.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:02.664 EAL: Requested device 0000:3d:02.2 cannot be used 00:15:02.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:02.664 EAL: Requested device 0000:3d:02.3 cannot be used 00:15:02.664 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:02.664 EAL: Requested device 0000:3d:02.4 cannot be used 00:15:02.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:02.664 EAL: Requested device 0000:3d:02.5 cannot be used 00:15:02.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:02.664 EAL: Requested device 0000:3d:02.6 cannot be used 00:15:02.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:02.664 EAL: Requested device 0000:3d:02.7 cannot be used 00:15:02.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:02.664 EAL: Requested device 0000:3f:01.0 cannot be used 00:15:02.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:02.664 EAL: Requested device 0000:3f:01.1 cannot be used 00:15:02.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:02.664 EAL: Requested device 0000:3f:01.2 cannot be used 00:15:02.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:02.664 EAL: Requested device 0000:3f:01.3 cannot be used 00:15:02.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:02.664 EAL: Requested device 0000:3f:01.4 cannot be used 00:15:02.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:02.664 EAL: Requested device 0000:3f:01.5 cannot be used 00:15:02.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:02.664 EAL: Requested device 0000:3f:01.6 cannot be used 00:15:02.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:02.664 EAL: Requested device 0000:3f:01.7 cannot be used 00:15:02.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:02.664 EAL: Requested device 0000:3f:02.0 cannot be used 00:15:02.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:02.664 EAL: Requested device 0000:3f:02.1 cannot be used 00:15:02.664 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:02.664 EAL: Requested device 0000:3f:02.2 cannot be used 00:15:02.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:02.664 EAL: Requested device 0000:3f:02.3 cannot be used 00:15:02.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:02.664 EAL: Requested device 0000:3f:02.4 cannot be used 00:15:02.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:02.664 EAL: Requested device 0000:3f:02.5 cannot be used 00:15:02.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:02.664 EAL: Requested device 0000:3f:02.6 cannot be used 00:15:02.664 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:02.664 EAL: Requested device 0000:3f:02.7 cannot be used 00:15:02.923 [2024-07-23 04:10:11.497770] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.182 [2024-07-23 04:10:11.791073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.441 [2024-07-23 04:10:12.146853] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:03.441 [2024-07-23 04:10:12.146889] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:03.700 04:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:03.700 04:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:15:03.700 04:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:03.959 [2024-07-23 04:10:12.550531] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:03.959 [2024-07-23 04:10:12.550587] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 
00:15:03.959 [2024-07-23 04:10:12.550602] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:03.959 [2024-07-23 04:10:12.550618] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:03.959 04:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:03.959 04:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:03.959 04:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:03.959 04:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:03.959 04:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:03.959 04:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:03.959 04:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:03.959 04:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:03.959 04:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:03.959 04:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:03.959 04:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:03.959 04:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.959 04:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:03.959 "name": "Existed_Raid", 00:15:03.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.959 "strip_size_kb": 64, 
00:15:03.959 "state": "configuring", 00:15:03.959 "raid_level": "concat", 00:15:03.959 "superblock": false, 00:15:03.959 "num_base_bdevs": 2, 00:15:03.959 "num_base_bdevs_discovered": 0, 00:15:03.959 "num_base_bdevs_operational": 2, 00:15:03.959 "base_bdevs_list": [ 00:15:03.959 { 00:15:03.959 "name": "BaseBdev1", 00:15:03.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.959 "is_configured": false, 00:15:03.959 "data_offset": 0, 00:15:03.959 "data_size": 0 00:15:03.959 }, 00:15:03.959 { 00:15:03.959 "name": "BaseBdev2", 00:15:03.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.959 "is_configured": false, 00:15:03.959 "data_offset": 0, 00:15:03.959 "data_size": 0 00:15:03.959 } 00:15:03.959 ] 00:15:03.959 }' 00:15:03.959 04:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:03.959 04:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.527 04:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:04.786 [2024-07-23 04:10:13.513019] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:04.786 [2024-07-23 04:10:13.513060] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:15:04.786 04:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:05.045 [2024-07-23 04:10:13.737661] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:05.045 [2024-07-23 04:10:13.737706] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:05.045 [2024-07-23 04:10:13.737720] 
bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:05.045 [2024-07-23 04:10:13.737736] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:05.045 04:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:05.303 [2024-07-23 04:10:14.008295] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:05.303 BaseBdev1 00:15:05.303 04:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:05.303 04:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:05.303 04:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:05.303 04:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:05.303 04:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:05.303 04:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:05.303 04:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:05.562 04:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:05.822 [ 00:15:05.822 { 00:15:05.822 "name": "BaseBdev1", 00:15:05.822 "aliases": [ 00:15:05.822 "52d9f1b1-e635-4f4d-9451-a96e282d9f53" 00:15:05.822 ], 00:15:05.822 "product_name": "Malloc disk", 00:15:05.822 "block_size": 512, 00:15:05.822 "num_blocks": 65536, 00:15:05.822 "uuid": 
"52d9f1b1-e635-4f4d-9451-a96e282d9f53", 00:15:05.822 "assigned_rate_limits": { 00:15:05.822 "rw_ios_per_sec": 0, 00:15:05.822 "rw_mbytes_per_sec": 0, 00:15:05.822 "r_mbytes_per_sec": 0, 00:15:05.822 "w_mbytes_per_sec": 0 00:15:05.822 }, 00:15:05.822 "claimed": true, 00:15:05.822 "claim_type": "exclusive_write", 00:15:05.822 "zoned": false, 00:15:05.822 "supported_io_types": { 00:15:05.822 "read": true, 00:15:05.822 "write": true, 00:15:05.822 "unmap": true, 00:15:05.822 "flush": true, 00:15:05.822 "reset": true, 00:15:05.822 "nvme_admin": false, 00:15:05.822 "nvme_io": false, 00:15:05.822 "nvme_io_md": false, 00:15:05.822 "write_zeroes": true, 00:15:05.822 "zcopy": true, 00:15:05.822 "get_zone_info": false, 00:15:05.822 "zone_management": false, 00:15:05.822 "zone_append": false, 00:15:05.822 "compare": false, 00:15:05.822 "compare_and_write": false, 00:15:05.822 "abort": true, 00:15:05.822 "seek_hole": false, 00:15:05.822 "seek_data": false, 00:15:05.822 "copy": true, 00:15:05.822 "nvme_iov_md": false 00:15:05.822 }, 00:15:05.822 "memory_domains": [ 00:15:05.822 { 00:15:05.822 "dma_device_id": "system", 00:15:05.822 "dma_device_type": 1 00:15:05.822 }, 00:15:05.822 { 00:15:05.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.822 "dma_device_type": 2 00:15:05.822 } 00:15:05.822 ], 00:15:05.822 "driver_specific": {} 00:15:05.822 } 00:15:05.822 ] 00:15:05.822 04:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:05.822 04:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:05.822 04:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:05.822 04:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:05.822 04:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:05.822 04:10:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:05.822 04:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:05.822 04:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:05.822 04:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:05.822 04:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:05.822 04:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:05.822 04:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:05.822 04:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.081 04:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:06.081 "name": "Existed_Raid", 00:15:06.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.081 "strip_size_kb": 64, 00:15:06.081 "state": "configuring", 00:15:06.081 "raid_level": "concat", 00:15:06.081 "superblock": false, 00:15:06.081 "num_base_bdevs": 2, 00:15:06.081 "num_base_bdevs_discovered": 1, 00:15:06.081 "num_base_bdevs_operational": 2, 00:15:06.081 "base_bdevs_list": [ 00:15:06.081 { 00:15:06.081 "name": "BaseBdev1", 00:15:06.081 "uuid": "52d9f1b1-e635-4f4d-9451-a96e282d9f53", 00:15:06.081 "is_configured": true, 00:15:06.081 "data_offset": 0, 00:15:06.081 "data_size": 65536 00:15:06.081 }, 00:15:06.081 { 00:15:06.081 "name": "BaseBdev2", 00:15:06.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.081 "is_configured": false, 00:15:06.081 "data_offset": 0, 00:15:06.081 "data_size": 0 00:15:06.081 } 00:15:06.081 ] 00:15:06.081 }' 00:15:06.081 04:10:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:06.081 04:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.648 04:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:06.907 [2024-07-23 04:10:15.476299] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:06.907 [2024-07-23 04:10:15.476355] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:15:06.907 04:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:07.166 [2024-07-23 04:10:15.705006] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:07.166 [2024-07-23 04:10:15.707309] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:07.166 [2024-07-23 04:10:15.707354] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:07.166 04:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:07.166 04:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:07.166 04:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:07.166 04:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:07.166 04:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:07.166 04:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local 
raid_level=concat 00:15:07.166 04:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:07.166 04:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:07.166 04:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:07.166 04:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:07.166 04:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:07.166 04:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:07.166 04:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:07.166 04:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:07.434 04:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:07.434 "name": "Existed_Raid", 00:15:07.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.434 "strip_size_kb": 64, 00:15:07.434 "state": "configuring", 00:15:07.434 "raid_level": "concat", 00:15:07.434 "superblock": false, 00:15:07.434 "num_base_bdevs": 2, 00:15:07.434 "num_base_bdevs_discovered": 1, 00:15:07.434 "num_base_bdevs_operational": 2, 00:15:07.434 "base_bdevs_list": [ 00:15:07.434 { 00:15:07.434 "name": "BaseBdev1", 00:15:07.434 "uuid": "52d9f1b1-e635-4f4d-9451-a96e282d9f53", 00:15:07.434 "is_configured": true, 00:15:07.434 "data_offset": 0, 00:15:07.434 "data_size": 65536 00:15:07.434 }, 00:15:07.434 { 00:15:07.434 "name": "BaseBdev2", 00:15:07.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.434 "is_configured": false, 00:15:07.434 "data_offset": 0, 00:15:07.434 "data_size": 0 00:15:07.434 } 00:15:07.434 ] 00:15:07.434 }' 
00:15:07.434 04:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:07.434 04:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.003 04:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:08.261 [2024-07-23 04:10:16.790804] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:08.261 [2024-07-23 04:10:16.790852] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:15:08.261 [2024-07-23 04:10:16.790866] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:08.261 [2024-07-23 04:10:16.791212] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:15:08.261 [2024-07-23 04:10:16.791461] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:15:08.261 [2024-07-23 04:10:16.791479] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:15:08.261 [2024-07-23 04:10:16.791779] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:08.261 BaseBdev2 00:15:08.261 04:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:08.261 04:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:08.261 04:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:08.261 04:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:08.261 04:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:08.261 04:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # 
bdev_timeout=2000 00:15:08.261 04:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:08.261 04:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:08.520 [ 00:15:08.520 { 00:15:08.520 "name": "BaseBdev2", 00:15:08.520 "aliases": [ 00:15:08.520 "0a730614-fd87-489a-a075-b0b635a29c5c" 00:15:08.520 ], 00:15:08.520 "product_name": "Malloc disk", 00:15:08.520 "block_size": 512, 00:15:08.520 "num_blocks": 65536, 00:15:08.520 "uuid": "0a730614-fd87-489a-a075-b0b635a29c5c", 00:15:08.520 "assigned_rate_limits": { 00:15:08.520 "rw_ios_per_sec": 0, 00:15:08.520 "rw_mbytes_per_sec": 0, 00:15:08.520 "r_mbytes_per_sec": 0, 00:15:08.520 "w_mbytes_per_sec": 0 00:15:08.520 }, 00:15:08.520 "claimed": true, 00:15:08.520 "claim_type": "exclusive_write", 00:15:08.520 "zoned": false, 00:15:08.520 "supported_io_types": { 00:15:08.520 "read": true, 00:15:08.520 "write": true, 00:15:08.520 "unmap": true, 00:15:08.520 "flush": true, 00:15:08.520 "reset": true, 00:15:08.520 "nvme_admin": false, 00:15:08.520 "nvme_io": false, 00:15:08.520 "nvme_io_md": false, 00:15:08.520 "write_zeroes": true, 00:15:08.520 "zcopy": true, 00:15:08.520 "get_zone_info": false, 00:15:08.520 "zone_management": false, 00:15:08.520 "zone_append": false, 00:15:08.520 "compare": false, 00:15:08.520 "compare_and_write": false, 00:15:08.520 "abort": true, 00:15:08.520 "seek_hole": false, 00:15:08.520 "seek_data": false, 00:15:08.520 "copy": true, 00:15:08.520 "nvme_iov_md": false 00:15:08.520 }, 00:15:08.520 "memory_domains": [ 00:15:08.520 { 00:15:08.520 "dma_device_id": "system", 00:15:08.520 "dma_device_type": 1 00:15:08.520 }, 00:15:08.520 { 00:15:08.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.520 
"dma_device_type": 2 00:15:08.520 } 00:15:08.520 ], 00:15:08.520 "driver_specific": {} 00:15:08.520 } 00:15:08.520 ] 00:15:08.520 04:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:08.520 04:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:08.520 04:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:08.520 04:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:15:08.520 04:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:08.520 04:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:08.520 04:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:08.520 04:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:08.520 04:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:08.520 04:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:08.520 04:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:08.520 04:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:08.520 04:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:08.520 04:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:08.520 04:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:08.779 04:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- 
# raid_bdev_info='{ 00:15:08.779 "name": "Existed_Raid", 00:15:08.779 "uuid": "bd3f3322-8979-4d2b-a8ed-39c1ced7107f", 00:15:08.779 "strip_size_kb": 64, 00:15:08.779 "state": "online", 00:15:08.779 "raid_level": "concat", 00:15:08.779 "superblock": false, 00:15:08.779 "num_base_bdevs": 2, 00:15:08.779 "num_base_bdevs_discovered": 2, 00:15:08.779 "num_base_bdevs_operational": 2, 00:15:08.779 "base_bdevs_list": [ 00:15:08.779 { 00:15:08.779 "name": "BaseBdev1", 00:15:08.779 "uuid": "52d9f1b1-e635-4f4d-9451-a96e282d9f53", 00:15:08.779 "is_configured": true, 00:15:08.779 "data_offset": 0, 00:15:08.779 "data_size": 65536 00:15:08.779 }, 00:15:08.779 { 00:15:08.779 "name": "BaseBdev2", 00:15:08.779 "uuid": "0a730614-fd87-489a-a075-b0b635a29c5c", 00:15:08.779 "is_configured": true, 00:15:08.779 "data_offset": 0, 00:15:08.779 "data_size": 65536 00:15:08.779 } 00:15:08.779 ] 00:15:08.779 }' 00:15:08.779 04:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:08.779 04:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.347 04:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:09.347 04:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:09.347 04:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:09.347 04:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:09.347 04:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:09.347 04:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:09.347 04:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:09.347 04:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:09.606 [2024-07-23 04:10:18.291267] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:09.606 04:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:09.606 "name": "Existed_Raid", 00:15:09.606 "aliases": [ 00:15:09.606 "bd3f3322-8979-4d2b-a8ed-39c1ced7107f" 00:15:09.606 ], 00:15:09.606 "product_name": "Raid Volume", 00:15:09.606 "block_size": 512, 00:15:09.606 "num_blocks": 131072, 00:15:09.606 "uuid": "bd3f3322-8979-4d2b-a8ed-39c1ced7107f", 00:15:09.606 "assigned_rate_limits": { 00:15:09.606 "rw_ios_per_sec": 0, 00:15:09.606 "rw_mbytes_per_sec": 0, 00:15:09.606 "r_mbytes_per_sec": 0, 00:15:09.606 "w_mbytes_per_sec": 0 00:15:09.606 }, 00:15:09.606 "claimed": false, 00:15:09.606 "zoned": false, 00:15:09.606 "supported_io_types": { 00:15:09.606 "read": true, 00:15:09.606 "write": true, 00:15:09.606 "unmap": true, 00:15:09.606 "flush": true, 00:15:09.606 "reset": true, 00:15:09.606 "nvme_admin": false, 00:15:09.606 "nvme_io": false, 00:15:09.606 "nvme_io_md": false, 00:15:09.606 "write_zeroes": true, 00:15:09.606 "zcopy": false, 00:15:09.606 "get_zone_info": false, 00:15:09.606 "zone_management": false, 00:15:09.606 "zone_append": false, 00:15:09.606 "compare": false, 00:15:09.606 "compare_and_write": false, 00:15:09.606 "abort": false, 00:15:09.606 "seek_hole": false, 00:15:09.606 "seek_data": false, 00:15:09.606 "copy": false, 00:15:09.606 "nvme_iov_md": false 00:15:09.606 }, 00:15:09.606 "memory_domains": [ 00:15:09.606 { 00:15:09.606 "dma_device_id": "system", 00:15:09.606 "dma_device_type": 1 00:15:09.606 }, 00:15:09.606 { 00:15:09.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.606 "dma_device_type": 2 00:15:09.606 }, 00:15:09.606 { 00:15:09.606 "dma_device_id": "system", 00:15:09.606 "dma_device_type": 1 00:15:09.606 }, 00:15:09.606 { 00:15:09.606 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.606 "dma_device_type": 2 00:15:09.606 } 00:15:09.606 ], 00:15:09.606 "driver_specific": { 00:15:09.607 "raid": { 00:15:09.607 "uuid": "bd3f3322-8979-4d2b-a8ed-39c1ced7107f", 00:15:09.607 "strip_size_kb": 64, 00:15:09.607 "state": "online", 00:15:09.607 "raid_level": "concat", 00:15:09.607 "superblock": false, 00:15:09.607 "num_base_bdevs": 2, 00:15:09.607 "num_base_bdevs_discovered": 2, 00:15:09.607 "num_base_bdevs_operational": 2, 00:15:09.607 "base_bdevs_list": [ 00:15:09.607 { 00:15:09.607 "name": "BaseBdev1", 00:15:09.607 "uuid": "52d9f1b1-e635-4f4d-9451-a96e282d9f53", 00:15:09.607 "is_configured": true, 00:15:09.607 "data_offset": 0, 00:15:09.607 "data_size": 65536 00:15:09.607 }, 00:15:09.607 { 00:15:09.607 "name": "BaseBdev2", 00:15:09.607 "uuid": "0a730614-fd87-489a-a075-b0b635a29c5c", 00:15:09.607 "is_configured": true, 00:15:09.607 "data_offset": 0, 00:15:09.607 "data_size": 65536 00:15:09.607 } 00:15:09.607 ] 00:15:09.607 } 00:15:09.607 } 00:15:09.607 }' 00:15:09.607 04:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:09.607 04:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:09.607 BaseBdev2' 00:15:09.607 04:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:09.607 04:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:09.607 04:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:09.865 04:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:09.865 "name": "BaseBdev1", 00:15:09.865 "aliases": [ 00:15:09.865 "52d9f1b1-e635-4f4d-9451-a96e282d9f53" 00:15:09.865 ], 
00:15:09.865 "product_name": "Malloc disk", 00:15:09.865 "block_size": 512, 00:15:09.865 "num_blocks": 65536, 00:15:09.865 "uuid": "52d9f1b1-e635-4f4d-9451-a96e282d9f53", 00:15:09.865 "assigned_rate_limits": { 00:15:09.865 "rw_ios_per_sec": 0, 00:15:09.865 "rw_mbytes_per_sec": 0, 00:15:09.865 "r_mbytes_per_sec": 0, 00:15:09.865 "w_mbytes_per_sec": 0 00:15:09.865 }, 00:15:09.865 "claimed": true, 00:15:09.865 "claim_type": "exclusive_write", 00:15:09.865 "zoned": false, 00:15:09.865 "supported_io_types": { 00:15:09.865 "read": true, 00:15:09.865 "write": true, 00:15:09.865 "unmap": true, 00:15:09.865 "flush": true, 00:15:09.865 "reset": true, 00:15:09.865 "nvme_admin": false, 00:15:09.865 "nvme_io": false, 00:15:09.865 "nvme_io_md": false, 00:15:09.865 "write_zeroes": true, 00:15:09.865 "zcopy": true, 00:15:09.865 "get_zone_info": false, 00:15:09.865 "zone_management": false, 00:15:09.865 "zone_append": false, 00:15:09.865 "compare": false, 00:15:09.865 "compare_and_write": false, 00:15:09.865 "abort": true, 00:15:09.865 "seek_hole": false, 00:15:09.865 "seek_data": false, 00:15:09.865 "copy": true, 00:15:09.865 "nvme_iov_md": false 00:15:09.865 }, 00:15:09.865 "memory_domains": [ 00:15:09.865 { 00:15:09.865 "dma_device_id": "system", 00:15:09.865 "dma_device_type": 1 00:15:09.865 }, 00:15:09.865 { 00:15:09.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.865 "dma_device_type": 2 00:15:09.865 } 00:15:09.865 ], 00:15:09.865 "driver_specific": {} 00:15:09.865 }' 00:15:09.865 04:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:09.865 04:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:10.124 04:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:10.124 04:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:10.124 04:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 
00:15:10.124 04:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:10.124 04:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:10.124 04:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:10.124 04:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:10.124 04:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:10.124 04:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:10.382 04:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:10.382 04:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:10.382 04:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:10.382 04:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:10.382 04:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:10.382 "name": "BaseBdev2", 00:15:10.382 "aliases": [ 00:15:10.382 "0a730614-fd87-489a-a075-b0b635a29c5c" 00:15:10.382 ], 00:15:10.382 "product_name": "Malloc disk", 00:15:10.382 "block_size": 512, 00:15:10.382 "num_blocks": 65536, 00:15:10.382 "uuid": "0a730614-fd87-489a-a075-b0b635a29c5c", 00:15:10.382 "assigned_rate_limits": { 00:15:10.382 "rw_ios_per_sec": 0, 00:15:10.382 "rw_mbytes_per_sec": 0, 00:15:10.382 "r_mbytes_per_sec": 0, 00:15:10.382 "w_mbytes_per_sec": 0 00:15:10.382 }, 00:15:10.382 "claimed": true, 00:15:10.382 "claim_type": "exclusive_write", 00:15:10.382 "zoned": false, 00:15:10.382 "supported_io_types": { 00:15:10.382 "read": true, 00:15:10.382 "write": true, 00:15:10.382 "unmap": true, 00:15:10.382 "flush": true, 
00:15:10.382 "reset": true, 00:15:10.382 "nvme_admin": false, 00:15:10.382 "nvme_io": false, 00:15:10.382 "nvme_io_md": false, 00:15:10.382 "write_zeroes": true, 00:15:10.382 "zcopy": true, 00:15:10.382 "get_zone_info": false, 00:15:10.382 "zone_management": false, 00:15:10.382 "zone_append": false, 00:15:10.382 "compare": false, 00:15:10.382 "compare_and_write": false, 00:15:10.382 "abort": true, 00:15:10.382 "seek_hole": false, 00:15:10.382 "seek_data": false, 00:15:10.382 "copy": true, 00:15:10.382 "nvme_iov_md": false 00:15:10.382 }, 00:15:10.382 "memory_domains": [ 00:15:10.382 { 00:15:10.382 "dma_device_id": "system", 00:15:10.382 "dma_device_type": 1 00:15:10.382 }, 00:15:10.382 { 00:15:10.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.382 "dma_device_type": 2 00:15:10.382 } 00:15:10.382 ], 00:15:10.382 "driver_specific": {} 00:15:10.382 }' 00:15:10.382 04:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:10.640 04:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:10.640 04:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:10.640 04:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:10.640 04:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:10.640 04:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:10.640 04:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:10.640 04:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:10.640 04:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:10.640 04:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:10.897 04:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:15:10.897 04:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:10.897 04:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:11.155 [2024-07-23 04:10:19.714835] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:11.155 [2024-07-23 04:10:19.714870] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:11.155 [2024-07-23 04:10:19.714928] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:11.155 04:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:11.155 04:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:15:11.155 04:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:11.155 04:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:11.155 04:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:15:11.155 04:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:15:11.155 04:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:11.155 04:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:15:11.155 04:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:11.155 04:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:11.155 04:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:11.155 04:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:15:11.155 04:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:11.155 04:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:11.155 04:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:11.155 04:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:11.155 04:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:11.413 04:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:11.413 "name": "Existed_Raid", 00:15:11.413 "uuid": "bd3f3322-8979-4d2b-a8ed-39c1ced7107f", 00:15:11.413 "strip_size_kb": 64, 00:15:11.413 "state": "offline", 00:15:11.413 "raid_level": "concat", 00:15:11.413 "superblock": false, 00:15:11.413 "num_base_bdevs": 2, 00:15:11.413 "num_base_bdevs_discovered": 1, 00:15:11.413 "num_base_bdevs_operational": 1, 00:15:11.413 "base_bdevs_list": [ 00:15:11.413 { 00:15:11.413 "name": null, 00:15:11.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.413 "is_configured": false, 00:15:11.413 "data_offset": 0, 00:15:11.413 "data_size": 65536 00:15:11.413 }, 00:15:11.413 { 00:15:11.413 "name": "BaseBdev2", 00:15:11.413 "uuid": "0a730614-fd87-489a-a075-b0b635a29c5c", 00:15:11.413 "is_configured": true, 00:15:11.413 "data_offset": 0, 00:15:11.413 "data_size": 65536 00:15:11.413 } 00:15:11.413 ] 00:15:11.413 }' 00:15:11.413 04:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:11.413 04:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.980 04:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:11.980 04:10:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:11.980 04:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:11.980 04:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:11.980 04:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:11.980 04:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:11.980 04:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:12.238 [2024-07-23 04:10:20.927860] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:12.238 [2024-07-23 04:10:20.927917] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:15:12.497 04:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:12.497 04:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:12.497 04:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:12.497 04:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:12.755 04:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:12.755 04:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:12.755 04:10:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:15:12.755 04:10:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@341 -- # killprocess 2623543 00:15:12.755 04:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 2623543 ']' 00:15:12.755 04:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 2623543 00:15:12.755 04:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:15:12.755 04:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:12.755 04:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2623543 00:15:12.755 04:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:12.755 04:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:12.755 04:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2623543' 00:15:12.755 killing process with pid 2623543 00:15:12.755 04:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 2623543 00:15:12.755 [2024-07-23 04:10:21.378048] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:12.755 04:10:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 2623543 00:15:12.755 [2024-07-23 04:10:21.402726] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:14.657 04:10:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:15:14.657 00:15:14.657 real 0m12.045s 00:15:14.657 user 0m19.523s 00:15:14.657 sys 0m2.073s 00:15:14.657 04:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:14.657 04:10:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.657 ************************************ 00:15:14.657 END TEST raid_state_function_test 00:15:14.657 
************************************ 00:15:14.657 04:10:23 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:14.657 04:10:23 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:15:14.657 04:10:23 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:14.657 04:10:23 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:14.657 04:10:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:14.657 ************************************ 00:15:14.657 START TEST raid_state_function_test_sb 00:15:14.657 ************************************ 00:15:14.657 04:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 2 true 00:15:14.657 04:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:15:14.657 04:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:15:14.657 04:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:15:14.657 04:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:14.657 04:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:14.657 04:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:14.657 04:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:15:14.657 04:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:14.657 04:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:14.657 04:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:15:14.657 04:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:14.657 
04:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:14.657 04:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:14.657 04:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:14.657 04:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:14.657 04:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:14.657 04:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:14.657 04:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:14.657 04:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:15:14.657 04:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:15:14.657 04:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:15:14.657 04:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:15:14.657 04:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:15:14.657 04:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:14.657 04:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2625879 00:15:14.657 04:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2625879' 00:15:14.657 Process raid pid: 2625879 00:15:14.657 04:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2625879 /var/tmp/spdk-raid.sock 00:15:14.658 
04:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 2625879 ']' 00:15:14.658 04:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:14.658 04:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:14.658 04:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:14.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:14.658 04:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:14.658 04:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.658 [2024-07-23 04:10:23.341946] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:15:14.658 [2024-07-23 04:10:23.342031] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:14.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:14.916 EAL: Requested device 0000:3d:01.0 cannot be used 00:15:14.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:14.916 EAL: Requested device 0000:3d:01.1 cannot be used 00:15:14.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:14.916 EAL: Requested device 0000:3d:01.2 cannot be used 00:15:14.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:14.916 EAL: Requested device 0000:3d:01.3 cannot be used 00:15:14.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:14.916 EAL: Requested device 0000:3d:01.4 cannot be used 00:15:14.916 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:14.916 EAL: Requested device 0000:3d:01.5 cannot be used 00:15:14.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:14.916 EAL: Requested device 0000:3d:01.6 cannot be used 00:15:14.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:14.916 EAL: Requested device 0000:3d:01.7 cannot be used 00:15:14.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:14.916 EAL: Requested device 0000:3d:02.0 cannot be used 00:15:14.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:14.916 EAL: Requested device 0000:3d:02.1 cannot be used 00:15:14.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:14.916 EAL: Requested device 0000:3d:02.2 cannot be used 00:15:14.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:14.916 EAL: Requested device 0000:3d:02.3 cannot be used 00:15:14.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:14.916 EAL: Requested device 0000:3d:02.4 cannot be used 00:15:14.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:14.916 EAL: Requested device 0000:3d:02.5 cannot be used 00:15:14.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:14.916 EAL: Requested device 0000:3d:02.6 cannot be used 00:15:14.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:14.916 EAL: Requested device 0000:3d:02.7 cannot be used 00:15:14.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:14.916 EAL: Requested device 0000:3f:01.0 cannot be used 00:15:14.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:14.916 EAL: Requested device 0000:3f:01.1 cannot be used 00:15:14.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:14.916 EAL: Requested device 0000:3f:01.2 cannot be used 00:15:14.916 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:14.916 EAL: Requested device 0000:3f:01.3 cannot be used 00:15:14.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:14.916 EAL: Requested device 0000:3f:01.4 cannot be used 00:15:14.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:14.916 EAL: Requested device 0000:3f:01.5 cannot be used 00:15:14.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:14.916 EAL: Requested device 0000:3f:01.6 cannot be used 00:15:14.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:14.916 EAL: Requested device 0000:3f:01.7 cannot be used 00:15:14.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:14.916 EAL: Requested device 0000:3f:02.0 cannot be used 00:15:14.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:14.916 EAL: Requested device 0000:3f:02.1 cannot be used 00:15:14.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:14.916 EAL: Requested device 0000:3f:02.2 cannot be used 00:15:14.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:14.916 EAL: Requested device 0000:3f:02.3 cannot be used 00:15:14.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:14.916 EAL: Requested device 0000:3f:02.4 cannot be used 00:15:14.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:14.916 EAL: Requested device 0000:3f:02.5 cannot be used 00:15:14.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:14.916 EAL: Requested device 0000:3f:02.6 cannot be used 00:15:14.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:14.916 EAL: Requested device 0000:3f:02.7 cannot be used 00:15:14.916 [2024-07-23 04:10:23.539436] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.174 [2024-07-23 04:10:23.831296] reactor.c: 941:reactor_run: 
*NOTICE*: Reactor started on core 0 00:15:15.432 [2024-07-23 04:10:24.188720] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:15.432 [2024-07-23 04:10:24.188756] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:15.691 04:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:15.691 04:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:15:15.691 04:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:15.950 [2024-07-23 04:10:24.582350] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:15.950 [2024-07-23 04:10:24.582404] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:15.950 [2024-07-23 04:10:24.582420] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:15.950 [2024-07-23 04:10:24.582436] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:15.950 04:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:15.950 04:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:15.950 04:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:15.950 04:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:15.950 04:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:15.950 04:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:15:15.950 04:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:15.950 04:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:15.950 04:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:15.950 04:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:15.950 04:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:15.950 04:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:16.209 04:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:16.209 "name": "Existed_Raid", 00:15:16.209 "uuid": "03fc7964-06fe-4a11-8ead-d8eabaef69d1", 00:15:16.209 "strip_size_kb": 64, 00:15:16.209 "state": "configuring", 00:15:16.209 "raid_level": "concat", 00:15:16.209 "superblock": true, 00:15:16.209 "num_base_bdevs": 2, 00:15:16.209 "num_base_bdevs_discovered": 0, 00:15:16.209 "num_base_bdevs_operational": 2, 00:15:16.209 "base_bdevs_list": [ 00:15:16.209 { 00:15:16.209 "name": "BaseBdev1", 00:15:16.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.209 "is_configured": false, 00:15:16.209 "data_offset": 0, 00:15:16.209 "data_size": 0 00:15:16.209 }, 00:15:16.209 { 00:15:16.209 "name": "BaseBdev2", 00:15:16.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.209 "is_configured": false, 00:15:16.209 "data_offset": 0, 00:15:16.209 "data_size": 0 00:15:16.209 } 00:15:16.209 ] 00:15:16.209 }' 00:15:16.209 04:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:16.209 04:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:15:16.777 04:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:17.035 [2024-07-23 04:10:25.608936] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:17.035 [2024-07-23 04:10:25.608975] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:15:17.035 04:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:17.294 [2024-07-23 04:10:25.837601] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:17.294 [2024-07-23 04:10:25.837645] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:17.294 [2024-07-23 04:10:25.837659] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:17.294 [2024-07-23 04:10:25.837676] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:17.294 04:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:17.554 [2024-07-23 04:10:26.123568] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:17.554 BaseBdev1 00:15:17.554 04:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:17.554 04:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:17.554 04:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 
00:15:17.554 04:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:17.554 04:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:17.554 04:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:17.554 04:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:17.813 04:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:17.813 [ 00:15:17.813 { 00:15:17.813 "name": "BaseBdev1", 00:15:17.813 "aliases": [ 00:15:17.813 "e8790202-94cc-42aa-a8ab-acecb808e42d" 00:15:17.813 ], 00:15:17.813 "product_name": "Malloc disk", 00:15:17.813 "block_size": 512, 00:15:17.813 "num_blocks": 65536, 00:15:17.813 "uuid": "e8790202-94cc-42aa-a8ab-acecb808e42d", 00:15:17.813 "assigned_rate_limits": { 00:15:17.813 "rw_ios_per_sec": 0, 00:15:17.813 "rw_mbytes_per_sec": 0, 00:15:17.813 "r_mbytes_per_sec": 0, 00:15:17.813 "w_mbytes_per_sec": 0 00:15:17.813 }, 00:15:17.813 "claimed": true, 00:15:17.813 "claim_type": "exclusive_write", 00:15:17.813 "zoned": false, 00:15:17.813 "supported_io_types": { 00:15:17.813 "read": true, 00:15:17.813 "write": true, 00:15:17.813 "unmap": true, 00:15:17.813 "flush": true, 00:15:17.813 "reset": true, 00:15:17.813 "nvme_admin": false, 00:15:17.813 "nvme_io": false, 00:15:17.813 "nvme_io_md": false, 00:15:17.813 "write_zeroes": true, 00:15:17.813 "zcopy": true, 00:15:17.813 "get_zone_info": false, 00:15:17.813 "zone_management": false, 00:15:17.813 "zone_append": false, 00:15:17.813 "compare": false, 00:15:17.813 "compare_and_write": false, 00:15:17.813 "abort": true, 00:15:17.813 "seek_hole": false, 00:15:17.813 
"seek_data": false, 00:15:17.813 "copy": true, 00:15:17.813 "nvme_iov_md": false 00:15:17.813 }, 00:15:17.813 "memory_domains": [ 00:15:17.813 { 00:15:17.813 "dma_device_id": "system", 00:15:17.813 "dma_device_type": 1 00:15:17.813 }, 00:15:17.813 { 00:15:17.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.813 "dma_device_type": 2 00:15:17.813 } 00:15:17.813 ], 00:15:17.813 "driver_specific": {} 00:15:17.813 } 00:15:17.813 ] 00:15:18.072 04:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:18.072 04:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:18.072 04:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:18.072 04:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:18.072 04:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:18.072 04:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:18.072 04:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:18.072 04:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:18.072 04:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:18.072 04:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:18.072 04:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:18.072 04:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:18.072 04:10:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:18.072 04:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:18.072 "name": "Existed_Raid", 00:15:18.072 "uuid": "2ec68008-59e6-4525-ac1e-e792ba48d5ff", 00:15:18.072 "strip_size_kb": 64, 00:15:18.072 "state": "configuring", 00:15:18.072 "raid_level": "concat", 00:15:18.072 "superblock": true, 00:15:18.072 "num_base_bdevs": 2, 00:15:18.072 "num_base_bdevs_discovered": 1, 00:15:18.072 "num_base_bdevs_operational": 2, 00:15:18.072 "base_bdevs_list": [ 00:15:18.072 { 00:15:18.072 "name": "BaseBdev1", 00:15:18.072 "uuid": "e8790202-94cc-42aa-a8ab-acecb808e42d", 00:15:18.072 "is_configured": true, 00:15:18.072 "data_offset": 2048, 00:15:18.072 "data_size": 63488 00:15:18.072 }, 00:15:18.072 { 00:15:18.072 "name": "BaseBdev2", 00:15:18.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.072 "is_configured": false, 00:15:18.072 "data_offset": 0, 00:15:18.072 "data_size": 0 00:15:18.072 } 00:15:18.072 ] 00:15:18.072 }' 00:15:18.072 04:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:18.072 04:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.682 04:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:18.952 [2024-07-23 04:10:27.507345] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:18.952 [2024-07-23 04:10:27.507399] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:15:18.952 04:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n 
Existed_Raid 00:15:18.952 [2024-07-23 04:10:27.687924] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:18.952 [2024-07-23 04:10:27.690224] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:18.952 [2024-07-23 04:10:27.690266] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:18.952 04:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:18.952 04:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:18.952 04:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:18.952 04:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:18.952 04:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:18.952 04:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:18.952 04:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:18.952 04:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:18.952 04:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:18.952 04:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:18.952 04:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:18.952 04:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:18.952 04:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:15:18.952 04:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:19.211 04:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:19.211 "name": "Existed_Raid", 00:15:19.211 "uuid": "a2169648-9ca9-4988-b3a2-2792afcf180c", 00:15:19.211 "strip_size_kb": 64, 00:15:19.211 "state": "configuring", 00:15:19.211 "raid_level": "concat", 00:15:19.211 "superblock": true, 00:15:19.211 "num_base_bdevs": 2, 00:15:19.211 "num_base_bdevs_discovered": 1, 00:15:19.211 "num_base_bdevs_operational": 2, 00:15:19.211 "base_bdevs_list": [ 00:15:19.211 { 00:15:19.211 "name": "BaseBdev1", 00:15:19.211 "uuid": "e8790202-94cc-42aa-a8ab-acecb808e42d", 00:15:19.211 "is_configured": true, 00:15:19.211 "data_offset": 2048, 00:15:19.211 "data_size": 63488 00:15:19.211 }, 00:15:19.211 { 00:15:19.211 "name": "BaseBdev2", 00:15:19.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.211 "is_configured": false, 00:15:19.211 "data_offset": 0, 00:15:19.211 "data_size": 0 00:15:19.211 } 00:15:19.211 ] 00:15:19.211 }' 00:15:19.211 04:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:19.211 04:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.779 04:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:20.038 [2024-07-23 04:10:28.778921] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:20.038 [2024-07-23 04:10:28.779197] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:15:20.038 [2024-07-23 04:10:28.779221] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:20.038 [2024-07-23 04:10:28.779543] 
bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:15:20.038 [2024-07-23 04:10:28.779769] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:15:20.038 [2024-07-23 04:10:28.779787] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:15:20.038 [2024-07-23 04:10:28.779968] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:20.038 BaseBdev2 00:15:20.038 04:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:20.038 04:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:20.038 04:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:20.038 04:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:20.038 04:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:20.038 04:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:20.038 04:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:20.297 04:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:20.556 [ 00:15:20.556 { 00:15:20.556 "name": "BaseBdev2", 00:15:20.556 "aliases": [ 00:15:20.556 "94f7ed2e-f22b-4829-89d2-601a1a43cfa6" 00:15:20.556 ], 00:15:20.556 "product_name": "Malloc disk", 00:15:20.556 "block_size": 512, 00:15:20.556 "num_blocks": 65536, 00:15:20.556 "uuid": "94f7ed2e-f22b-4829-89d2-601a1a43cfa6", 00:15:20.556 "assigned_rate_limits": { 
00:15:20.556 "rw_ios_per_sec": 0, 00:15:20.556 "rw_mbytes_per_sec": 0, 00:15:20.556 "r_mbytes_per_sec": 0, 00:15:20.556 "w_mbytes_per_sec": 0 00:15:20.556 }, 00:15:20.556 "claimed": true, 00:15:20.556 "claim_type": "exclusive_write", 00:15:20.556 "zoned": false, 00:15:20.556 "supported_io_types": { 00:15:20.556 "read": true, 00:15:20.556 "write": true, 00:15:20.556 "unmap": true, 00:15:20.556 "flush": true, 00:15:20.556 "reset": true, 00:15:20.556 "nvme_admin": false, 00:15:20.556 "nvme_io": false, 00:15:20.556 "nvme_io_md": false, 00:15:20.556 "write_zeroes": true, 00:15:20.556 "zcopy": true, 00:15:20.556 "get_zone_info": false, 00:15:20.556 "zone_management": false, 00:15:20.556 "zone_append": false, 00:15:20.556 "compare": false, 00:15:20.556 "compare_and_write": false, 00:15:20.556 "abort": true, 00:15:20.556 "seek_hole": false, 00:15:20.556 "seek_data": false, 00:15:20.556 "copy": true, 00:15:20.556 "nvme_iov_md": false 00:15:20.556 }, 00:15:20.556 "memory_domains": [ 00:15:20.556 { 00:15:20.556 "dma_device_id": "system", 00:15:20.556 "dma_device_type": 1 00:15:20.556 }, 00:15:20.556 { 00:15:20.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.556 "dma_device_type": 2 00:15:20.556 } 00:15:20.556 ], 00:15:20.556 "driver_specific": {} 00:15:20.556 } 00:15:20.556 ] 00:15:20.556 04:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:20.556 04:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:20.556 04:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:20.556 04:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:15:20.556 04:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:20.556 04:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local 
expected_state=online 00:15:20.556 04:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:20.556 04:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:20.556 04:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:20.556 04:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:20.556 04:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:20.556 04:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:20.556 04:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:20.556 04:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:20.556 04:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.815 04:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:20.815 "name": "Existed_Raid", 00:15:20.815 "uuid": "a2169648-9ca9-4988-b3a2-2792afcf180c", 00:15:20.815 "strip_size_kb": 64, 00:15:20.815 "state": "online", 00:15:20.815 "raid_level": "concat", 00:15:20.815 "superblock": true, 00:15:20.815 "num_base_bdevs": 2, 00:15:20.815 "num_base_bdevs_discovered": 2, 00:15:20.815 "num_base_bdevs_operational": 2, 00:15:20.815 "base_bdevs_list": [ 00:15:20.815 { 00:15:20.815 "name": "BaseBdev1", 00:15:20.815 "uuid": "e8790202-94cc-42aa-a8ab-acecb808e42d", 00:15:20.815 "is_configured": true, 00:15:20.815 "data_offset": 2048, 00:15:20.815 "data_size": 63488 00:15:20.815 }, 00:15:20.815 { 00:15:20.815 "name": "BaseBdev2", 00:15:20.815 "uuid": "94f7ed2e-f22b-4829-89d2-601a1a43cfa6", 
00:15:20.815 "is_configured": true, 00:15:20.815 "data_offset": 2048, 00:15:20.815 "data_size": 63488 00:15:20.815 } 00:15:20.815 ] 00:15:20.815 }' 00:15:20.815 04:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:20.815 04:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.383 04:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:21.383 04:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:21.383 04:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:21.383 04:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:21.383 04:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:21.383 04:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:15:21.383 04:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:21.383 04:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:21.642 [2024-07-23 04:10:30.227250] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:21.642 04:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:21.642 "name": "Existed_Raid", 00:15:21.642 "aliases": [ 00:15:21.642 "a2169648-9ca9-4988-b3a2-2792afcf180c" 00:15:21.642 ], 00:15:21.642 "product_name": "Raid Volume", 00:15:21.642 "block_size": 512, 00:15:21.642 "num_blocks": 126976, 00:15:21.642 "uuid": "a2169648-9ca9-4988-b3a2-2792afcf180c", 00:15:21.642 "assigned_rate_limits": { 00:15:21.642 "rw_ios_per_sec": 0, 00:15:21.642 "rw_mbytes_per_sec": 0, 
00:15:21.642 "r_mbytes_per_sec": 0, 00:15:21.642 "w_mbytes_per_sec": 0 00:15:21.642 }, 00:15:21.642 "claimed": false, 00:15:21.642 "zoned": false, 00:15:21.642 "supported_io_types": { 00:15:21.642 "read": true, 00:15:21.642 "write": true, 00:15:21.642 "unmap": true, 00:15:21.642 "flush": true, 00:15:21.642 "reset": true, 00:15:21.642 "nvme_admin": false, 00:15:21.642 "nvme_io": false, 00:15:21.642 "nvme_io_md": false, 00:15:21.642 "write_zeroes": true, 00:15:21.642 "zcopy": false, 00:15:21.642 "get_zone_info": false, 00:15:21.642 "zone_management": false, 00:15:21.642 "zone_append": false, 00:15:21.642 "compare": false, 00:15:21.642 "compare_and_write": false, 00:15:21.642 "abort": false, 00:15:21.642 "seek_hole": false, 00:15:21.642 "seek_data": false, 00:15:21.642 "copy": false, 00:15:21.642 "nvme_iov_md": false 00:15:21.642 }, 00:15:21.642 "memory_domains": [ 00:15:21.642 { 00:15:21.642 "dma_device_id": "system", 00:15:21.642 "dma_device_type": 1 00:15:21.642 }, 00:15:21.642 { 00:15:21.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:21.642 "dma_device_type": 2 00:15:21.642 }, 00:15:21.642 { 00:15:21.642 "dma_device_id": "system", 00:15:21.642 "dma_device_type": 1 00:15:21.642 }, 00:15:21.642 { 00:15:21.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:21.642 "dma_device_type": 2 00:15:21.642 } 00:15:21.642 ], 00:15:21.642 "driver_specific": { 00:15:21.642 "raid": { 00:15:21.642 "uuid": "a2169648-9ca9-4988-b3a2-2792afcf180c", 00:15:21.643 "strip_size_kb": 64, 00:15:21.643 "state": "online", 00:15:21.643 "raid_level": "concat", 00:15:21.643 "superblock": true, 00:15:21.643 "num_base_bdevs": 2, 00:15:21.643 "num_base_bdevs_discovered": 2, 00:15:21.643 "num_base_bdevs_operational": 2, 00:15:21.643 "base_bdevs_list": [ 00:15:21.643 { 00:15:21.643 "name": "BaseBdev1", 00:15:21.643 "uuid": "e8790202-94cc-42aa-a8ab-acecb808e42d", 00:15:21.643 "is_configured": true, 00:15:21.643 "data_offset": 2048, 00:15:21.643 "data_size": 63488 00:15:21.643 }, 00:15:21.643 { 
00:15:21.643 "name": "BaseBdev2", 00:15:21.643 "uuid": "94f7ed2e-f22b-4829-89d2-601a1a43cfa6", 00:15:21.643 "is_configured": true, 00:15:21.643 "data_offset": 2048, 00:15:21.643 "data_size": 63488 00:15:21.643 } 00:15:21.643 ] 00:15:21.643 } 00:15:21.643 } 00:15:21.643 }' 00:15:21.643 04:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:21.643 04:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:21.643 BaseBdev2' 00:15:21.643 04:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:21.643 04:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:21.643 04:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:21.902 04:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:21.902 "name": "BaseBdev1", 00:15:21.902 "aliases": [ 00:15:21.902 "e8790202-94cc-42aa-a8ab-acecb808e42d" 00:15:21.902 ], 00:15:21.902 "product_name": "Malloc disk", 00:15:21.902 "block_size": 512, 00:15:21.902 "num_blocks": 65536, 00:15:21.902 "uuid": "e8790202-94cc-42aa-a8ab-acecb808e42d", 00:15:21.902 "assigned_rate_limits": { 00:15:21.902 "rw_ios_per_sec": 0, 00:15:21.902 "rw_mbytes_per_sec": 0, 00:15:21.902 "r_mbytes_per_sec": 0, 00:15:21.902 "w_mbytes_per_sec": 0 00:15:21.902 }, 00:15:21.902 "claimed": true, 00:15:21.902 "claim_type": "exclusive_write", 00:15:21.902 "zoned": false, 00:15:21.902 "supported_io_types": { 00:15:21.902 "read": true, 00:15:21.902 "write": true, 00:15:21.902 "unmap": true, 00:15:21.902 "flush": true, 00:15:21.902 "reset": true, 00:15:21.902 "nvme_admin": false, 00:15:21.902 "nvme_io": false, 00:15:21.902 "nvme_io_md": false, 
00:15:21.902 "write_zeroes": true, 00:15:21.902 "zcopy": true, 00:15:21.902 "get_zone_info": false, 00:15:21.902 "zone_management": false, 00:15:21.902 "zone_append": false, 00:15:21.902 "compare": false, 00:15:21.902 "compare_and_write": false, 00:15:21.902 "abort": true, 00:15:21.902 "seek_hole": false, 00:15:21.902 "seek_data": false, 00:15:21.902 "copy": true, 00:15:21.902 "nvme_iov_md": false 00:15:21.902 }, 00:15:21.902 "memory_domains": [ 00:15:21.902 { 00:15:21.902 "dma_device_id": "system", 00:15:21.902 "dma_device_type": 1 00:15:21.902 }, 00:15:21.902 { 00:15:21.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:21.902 "dma_device_type": 2 00:15:21.902 } 00:15:21.902 ], 00:15:21.902 "driver_specific": {} 00:15:21.902 }' 00:15:21.902 04:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:21.902 04:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:21.902 04:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:21.902 04:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:21.902 04:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:22.161 04:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:22.161 04:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:22.161 04:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:22.161 04:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:22.161 04:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:22.161 04:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:22.161 04:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null 
== null ]] 00:15:22.161 04:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:22.161 04:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:22.161 04:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:22.420 04:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:22.420 "name": "BaseBdev2", 00:15:22.420 "aliases": [ 00:15:22.420 "94f7ed2e-f22b-4829-89d2-601a1a43cfa6" 00:15:22.420 ], 00:15:22.420 "product_name": "Malloc disk", 00:15:22.420 "block_size": 512, 00:15:22.420 "num_blocks": 65536, 00:15:22.420 "uuid": "94f7ed2e-f22b-4829-89d2-601a1a43cfa6", 00:15:22.420 "assigned_rate_limits": { 00:15:22.420 "rw_ios_per_sec": 0, 00:15:22.420 "rw_mbytes_per_sec": 0, 00:15:22.420 "r_mbytes_per_sec": 0, 00:15:22.420 "w_mbytes_per_sec": 0 00:15:22.420 }, 00:15:22.420 "claimed": true, 00:15:22.420 "claim_type": "exclusive_write", 00:15:22.420 "zoned": false, 00:15:22.420 "supported_io_types": { 00:15:22.420 "read": true, 00:15:22.420 "write": true, 00:15:22.420 "unmap": true, 00:15:22.420 "flush": true, 00:15:22.420 "reset": true, 00:15:22.420 "nvme_admin": false, 00:15:22.420 "nvme_io": false, 00:15:22.420 "nvme_io_md": false, 00:15:22.420 "write_zeroes": true, 00:15:22.420 "zcopy": true, 00:15:22.420 "get_zone_info": false, 00:15:22.420 "zone_management": false, 00:15:22.420 "zone_append": false, 00:15:22.420 "compare": false, 00:15:22.420 "compare_and_write": false, 00:15:22.420 "abort": true, 00:15:22.420 "seek_hole": false, 00:15:22.420 "seek_data": false, 00:15:22.420 "copy": true, 00:15:22.420 "nvme_iov_md": false 00:15:22.420 }, 00:15:22.420 "memory_domains": [ 00:15:22.420 { 00:15:22.420 "dma_device_id": "system", 00:15:22.420 "dma_device_type": 1 00:15:22.420 }, 00:15:22.420 { 
00:15:22.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.420 "dma_device_type": 2 00:15:22.420 } 00:15:22.420 ], 00:15:22.420 "driver_specific": {} 00:15:22.420 }' 00:15:22.420 04:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:22.420 04:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:22.420 04:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:22.420 04:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:22.678 04:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:22.678 04:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:22.678 04:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:22.678 04:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:22.678 04:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:22.678 04:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:22.678 04:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:22.678 04:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:22.678 04:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:22.937 [2024-07-23 04:10:31.638861] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:22.937 [2024-07-23 04:10:31.638897] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:22.937 [2024-07-23 04:10:31.638956] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:15:22.937 04:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:22.937 04:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:15:22.937 04:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:22.937 04:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:15:22.937 04:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:15:22.937 04:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:15:22.937 04:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:22.937 04:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:15:22.937 04:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:22.937 04:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:22.937 04:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:22.937 04:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:22.937 04:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:22.937 04:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:22.937 04:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:22.937 04:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:22.937 04:10:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.196 04:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:23.196 "name": "Existed_Raid", 00:15:23.196 "uuid": "a2169648-9ca9-4988-b3a2-2792afcf180c", 00:15:23.196 "strip_size_kb": 64, 00:15:23.196 "state": "offline", 00:15:23.196 "raid_level": "concat", 00:15:23.196 "superblock": true, 00:15:23.196 "num_base_bdevs": 2, 00:15:23.196 "num_base_bdevs_discovered": 1, 00:15:23.196 "num_base_bdevs_operational": 1, 00:15:23.196 "base_bdevs_list": [ 00:15:23.196 { 00:15:23.196 "name": null, 00:15:23.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.196 "is_configured": false, 00:15:23.196 "data_offset": 2048, 00:15:23.196 "data_size": 63488 00:15:23.196 }, 00:15:23.196 { 00:15:23.196 "name": "BaseBdev2", 00:15:23.196 "uuid": "94f7ed2e-f22b-4829-89d2-601a1a43cfa6", 00:15:23.196 "is_configured": true, 00:15:23.196 "data_offset": 2048, 00:15:23.196 "data_size": 63488 00:15:23.196 } 00:15:23.196 ] 00:15:23.196 }' 00:15:23.196 04:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:23.196 04:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.764 04:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:23.764 04:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:23.764 04:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:23.764 04:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:24.023 04:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:24.023 04:10:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:24.023 04:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:24.282 [2024-07-23 04:10:32.872440] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:24.282 [2024-07-23 04:10:32.872494] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:15:24.282 04:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:24.282 04:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:24.282 04:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:24.282 04:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:24.541 04:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:24.541 04:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:24.541 04:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:15:24.541 04:10:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 2625879 00:15:24.542 04:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 2625879 ']' 00:15:24.542 04:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 2625879 00:15:24.542 04:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:15:24.542 04:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
00:15:24.542 04:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2625879 00:15:24.542 04:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:24.542 04:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:24.542 04:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2625879' 00:15:24.542 killing process with pid 2625879 00:15:24.542 04:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 2625879 00:15:24.542 [2024-07-23 04:10:33.305949] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:24.542 04:10:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 2625879 00:15:24.800 [2024-07-23 04:10:33.330000] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:26.704 04:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:15:26.704 00:15:26.704 real 0m11.729s 00:15:26.704 user 0m19.154s 00:15:26.704 sys 0m2.047s 00:15:26.704 04:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:26.704 04:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.704 ************************************ 00:15:26.704 END TEST raid_state_function_test_sb 00:15:26.704 ************************************ 00:15:26.704 04:10:35 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:26.704 04:10:35 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:15:26.704 04:10:35 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:15:26.704 04:10:35 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:26.704 04:10:35 bdev_raid -- common/autotest_common.sh@10 -- # 
set +x 00:15:26.704 ************************************ 00:15:26.704 START TEST raid_superblock_test 00:15:26.704 ************************************ 00:15:26.704 04:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 2 00:15:26.704 04:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:15:26.704 04:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:15:26.704 04:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:15:26.704 04:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:15:26.704 04:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:15:26.704 04:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:15:26.704 04:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:15:26.704 04:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:15:26.704 04:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:15:26.704 04:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:15:26.704 04:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:15:26.704 04:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:15:26.704 04:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:15:26.704 04:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:15:26.704 04:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:15:26.704 04:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:15:26.704 04:10:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:26.704 04:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=2627988 00:15:26.704 04:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 2627988 /var/tmp/spdk-raid.sock 00:15:26.704 04:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 2627988 ']' 00:15:26.704 04:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:26.704 04:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:26.704 04:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:26.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:26.704 04:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:26.704 04:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.704 [2024-07-23 04:10:35.165263] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:15:26.704 [2024-07-23 04:10:35.165381] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2627988 ] 00:15:26.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.704 EAL: Requested device 0000:3d:01.0 cannot be used 00:15:26.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.704 EAL: Requested device 0000:3d:01.1 cannot be used 00:15:26.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.704 EAL: Requested device 0000:3d:01.2 cannot be used 00:15:26.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.704 EAL: Requested device 0000:3d:01.3 cannot be used 00:15:26.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.704 EAL: Requested device 0000:3d:01.4 cannot be used 00:15:26.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.704 EAL: Requested device 0000:3d:01.5 cannot be used 00:15:26.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.704 EAL: Requested device 0000:3d:01.6 cannot be used 00:15:26.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.704 EAL: Requested device 0000:3d:01.7 cannot be used 00:15:26.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.704 EAL: Requested device 0000:3d:02.0 cannot be used 00:15:26.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.704 EAL: Requested device 0000:3d:02.1 cannot be used 00:15:26.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.704 EAL: Requested device 0000:3d:02.2 cannot be used 00:15:26.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.704 EAL: Requested device 0000:3d:02.3 cannot be used 
00:15:26.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.704 EAL: Requested device 0000:3d:02.4 cannot be used 00:15:26.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.704 EAL: Requested device 0000:3d:02.5 cannot be used 00:15:26.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.704 EAL: Requested device 0000:3d:02.6 cannot be used 00:15:26.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.704 EAL: Requested device 0000:3d:02.7 cannot be used 00:15:26.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.704 EAL: Requested device 0000:3f:01.0 cannot be used 00:15:26.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.704 EAL: Requested device 0000:3f:01.1 cannot be used 00:15:26.705 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.705 EAL: Requested device 0000:3f:01.2 cannot be used 00:15:26.705 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.705 EAL: Requested device 0000:3f:01.3 cannot be used 00:15:26.705 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.705 EAL: Requested device 0000:3f:01.4 cannot be used 00:15:26.705 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.705 EAL: Requested device 0000:3f:01.5 cannot be used 00:15:26.705 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.705 EAL: Requested device 0000:3f:01.6 cannot be used 00:15:26.705 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.705 EAL: Requested device 0000:3f:01.7 cannot be used 00:15:26.705 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.705 EAL: Requested device 0000:3f:02.0 cannot be used 00:15:26.705 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.705 EAL: Requested device 0000:3f:02.1 cannot be used 00:15:26.705 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.705 EAL: Requested device 0000:3f:02.2 cannot be used 00:15:26.705 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.705 EAL: Requested device 0000:3f:02.3 cannot be used 00:15:26.705 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.705 EAL: Requested device 0000:3f:02.4 cannot be used 00:15:26.705 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.705 EAL: Requested device 0000:3f:02.5 cannot be used 00:15:26.705 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.705 EAL: Requested device 0000:3f:02.6 cannot be used 00:15:26.705 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.705 EAL: Requested device 0000:3f:02.7 cannot be used 00:15:26.705 [2024-07-23 04:10:35.390371] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.964 [2024-07-23 04:10:35.678268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.531 [2024-07-23 04:10:36.023030] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:27.531 [2024-07-23 04:10:36.023075] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:27.531 04:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:27.532 04:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:15:27.532 04:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:15:27.532 04:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:27.532 04:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:15:27.532 04:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:15:27.532 04:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:27.532 04:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:27.532 04:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:27.532 04:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:27.532 04:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:27.790 malloc1 00:15:27.790 04:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:28.049 [2024-07-23 04:10:36.709078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:28.049 [2024-07-23 04:10:36.709150] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:28.049 [2024-07-23 04:10:36.709181] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:15:28.049 [2024-07-23 04:10:36.709198] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:28.049 [2024-07-23 04:10:36.711968] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:28.049 [2024-07-23 04:10:36.712005] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:28.049 pt1 00:15:28.049 04:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:28.049 04:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:28.049 04:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:15:28.049 04:10:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:15:28.049 04:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:28.049 04:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:28.049 04:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:28.049 04:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:28.049 04:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:28.307 malloc2 00:15:28.307 04:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:28.566 [2024-07-23 04:10:37.224165] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:28.566 [2024-07-23 04:10:37.224223] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:28.566 [2024-07-23 04:10:37.224250] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:15:28.566 [2024-07-23 04:10:37.224265] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:28.566 [2024-07-23 04:10:37.227027] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:28.566 [2024-07-23 04:10:37.227068] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:28.566 pt2 00:15:28.566 04:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:28.566 04:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:28.566 04:10:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:15:28.824 [2024-07-23 04:10:37.452805] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:28.824 [2024-07-23 04:10:37.455176] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:28.824 [2024-07-23 04:10:37.455402] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000040880 00:15:28.824 [2024-07-23 04:10:37.455423] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:28.824 [2024-07-23 04:10:37.455758] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:15:28.824 [2024-07-23 04:10:37.456000] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000040880 00:15:28.824 [2024-07-23 04:10:37.456022] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000040880 00:15:28.824 [2024-07-23 04:10:37.456226] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:28.824 04:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:28.824 04:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:28.824 04:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:28.824 04:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:28.824 04:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:28.824 04:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:28.824 04:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- 
# local raid_bdev_info 00:15:28.824 04:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:28.824 04:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:28.825 04:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:28.825 04:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:28.825 04:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.083 04:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:29.083 "name": "raid_bdev1", 00:15:29.083 "uuid": "e3d2e152-9fbe-4fe4-a492-66f1446e3375", 00:15:29.083 "strip_size_kb": 64, 00:15:29.083 "state": "online", 00:15:29.083 "raid_level": "concat", 00:15:29.083 "superblock": true, 00:15:29.083 "num_base_bdevs": 2, 00:15:29.083 "num_base_bdevs_discovered": 2, 00:15:29.083 "num_base_bdevs_operational": 2, 00:15:29.083 "base_bdevs_list": [ 00:15:29.083 { 00:15:29.083 "name": "pt1", 00:15:29.083 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:29.083 "is_configured": true, 00:15:29.083 "data_offset": 2048, 00:15:29.083 "data_size": 63488 00:15:29.083 }, 00:15:29.083 { 00:15:29.083 "name": "pt2", 00:15:29.083 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:29.083 "is_configured": true, 00:15:29.083 "data_offset": 2048, 00:15:29.083 "data_size": 63488 00:15:29.083 } 00:15:29.083 ] 00:15:29.083 }' 00:15:29.083 04:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:29.083 04:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.649 04:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:15:29.649 04:10:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:29.649 04:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:29.649 04:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:29.649 04:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:29.649 04:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:29.649 04:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:29.649 04:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:29.649 [2024-07-23 04:10:38.403631] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:29.649 04:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:29.649 "name": "raid_bdev1", 00:15:29.649 "aliases": [ 00:15:29.649 "e3d2e152-9fbe-4fe4-a492-66f1446e3375" 00:15:29.649 ], 00:15:29.649 "product_name": "Raid Volume", 00:15:29.649 "block_size": 512, 00:15:29.649 "num_blocks": 126976, 00:15:29.649 "uuid": "e3d2e152-9fbe-4fe4-a492-66f1446e3375", 00:15:29.649 "assigned_rate_limits": { 00:15:29.649 "rw_ios_per_sec": 0, 00:15:29.649 "rw_mbytes_per_sec": 0, 00:15:29.649 "r_mbytes_per_sec": 0, 00:15:29.649 "w_mbytes_per_sec": 0 00:15:29.649 }, 00:15:29.649 "claimed": false, 00:15:29.649 "zoned": false, 00:15:29.649 "supported_io_types": { 00:15:29.649 "read": true, 00:15:29.649 "write": true, 00:15:29.649 "unmap": true, 00:15:29.649 "flush": true, 00:15:29.649 "reset": true, 00:15:29.649 "nvme_admin": false, 00:15:29.649 "nvme_io": false, 00:15:29.649 "nvme_io_md": false, 00:15:29.649 "write_zeroes": true, 00:15:29.649 "zcopy": false, 00:15:29.649 "get_zone_info": false, 00:15:29.649 "zone_management": false, 00:15:29.649 "zone_append": false, 
00:15:29.649 "compare": false, 00:15:29.649 "compare_and_write": false, 00:15:29.649 "abort": false, 00:15:29.649 "seek_hole": false, 00:15:29.649 "seek_data": false, 00:15:29.649 "copy": false, 00:15:29.649 "nvme_iov_md": false 00:15:29.649 }, 00:15:29.649 "memory_domains": [ 00:15:29.649 { 00:15:29.649 "dma_device_id": "system", 00:15:29.649 "dma_device_type": 1 00:15:29.649 }, 00:15:29.649 { 00:15:29.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.649 "dma_device_type": 2 00:15:29.649 }, 00:15:29.649 { 00:15:29.649 "dma_device_id": "system", 00:15:29.649 "dma_device_type": 1 00:15:29.649 }, 00:15:29.649 { 00:15:29.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.649 "dma_device_type": 2 00:15:29.649 } 00:15:29.649 ], 00:15:29.649 "driver_specific": { 00:15:29.649 "raid": { 00:15:29.649 "uuid": "e3d2e152-9fbe-4fe4-a492-66f1446e3375", 00:15:29.649 "strip_size_kb": 64, 00:15:29.649 "state": "online", 00:15:29.650 "raid_level": "concat", 00:15:29.650 "superblock": true, 00:15:29.650 "num_base_bdevs": 2, 00:15:29.650 "num_base_bdevs_discovered": 2, 00:15:29.650 "num_base_bdevs_operational": 2, 00:15:29.650 "base_bdevs_list": [ 00:15:29.650 { 00:15:29.650 "name": "pt1", 00:15:29.650 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:29.650 "is_configured": true, 00:15:29.650 "data_offset": 2048, 00:15:29.650 "data_size": 63488 00:15:29.650 }, 00:15:29.650 { 00:15:29.650 "name": "pt2", 00:15:29.650 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:29.650 "is_configured": true, 00:15:29.650 "data_offset": 2048, 00:15:29.650 "data_size": 63488 00:15:29.650 } 00:15:29.650 ] 00:15:29.650 } 00:15:29.650 } 00:15:29.650 }' 00:15:29.650 04:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:29.908 04:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:29.908 pt2' 00:15:29.908 04:10:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:29.908 04:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:29.908 04:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:30.166 04:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:30.166 "name": "pt1", 00:15:30.166 "aliases": [ 00:15:30.166 "00000000-0000-0000-0000-000000000001" 00:15:30.166 ], 00:15:30.166 "product_name": "passthru", 00:15:30.166 "block_size": 512, 00:15:30.166 "num_blocks": 65536, 00:15:30.166 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:30.166 "assigned_rate_limits": { 00:15:30.166 "rw_ios_per_sec": 0, 00:15:30.166 "rw_mbytes_per_sec": 0, 00:15:30.166 "r_mbytes_per_sec": 0, 00:15:30.166 "w_mbytes_per_sec": 0 00:15:30.166 }, 00:15:30.166 "claimed": true, 00:15:30.166 "claim_type": "exclusive_write", 00:15:30.166 "zoned": false, 00:15:30.166 "supported_io_types": { 00:15:30.166 "read": true, 00:15:30.166 "write": true, 00:15:30.166 "unmap": true, 00:15:30.166 "flush": true, 00:15:30.166 "reset": true, 00:15:30.166 "nvme_admin": false, 00:15:30.166 "nvme_io": false, 00:15:30.166 "nvme_io_md": false, 00:15:30.166 "write_zeroes": true, 00:15:30.166 "zcopy": true, 00:15:30.166 "get_zone_info": false, 00:15:30.166 "zone_management": false, 00:15:30.166 "zone_append": false, 00:15:30.166 "compare": false, 00:15:30.166 "compare_and_write": false, 00:15:30.166 "abort": true, 00:15:30.166 "seek_hole": false, 00:15:30.166 "seek_data": false, 00:15:30.166 "copy": true, 00:15:30.166 "nvme_iov_md": false 00:15:30.166 }, 00:15:30.166 "memory_domains": [ 00:15:30.166 { 00:15:30.166 "dma_device_id": "system", 00:15:30.166 "dma_device_type": 1 00:15:30.166 }, 00:15:30.166 { 00:15:30.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.166 
"dma_device_type": 2 00:15:30.166 } 00:15:30.166 ], 00:15:30.166 "driver_specific": { 00:15:30.166 "passthru": { 00:15:30.166 "name": "pt1", 00:15:30.166 "base_bdev_name": "malloc1" 00:15:30.166 } 00:15:30.166 } 00:15:30.166 }' 00:15:30.166 04:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:30.166 04:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:30.166 04:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:30.166 04:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:30.166 04:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:30.166 04:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:30.166 04:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:30.166 04:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:30.425 04:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:30.425 04:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:30.425 04:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:30.425 04:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:30.425 04:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:30.425 04:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:30.425 04:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:30.684 04:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:30.684 "name": "pt2", 00:15:30.684 "aliases": [ 00:15:30.684 
"00000000-0000-0000-0000-000000000002" 00:15:30.684 ], 00:15:30.684 "product_name": "passthru", 00:15:30.684 "block_size": 512, 00:15:30.684 "num_blocks": 65536, 00:15:30.684 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:30.684 "assigned_rate_limits": { 00:15:30.684 "rw_ios_per_sec": 0, 00:15:30.684 "rw_mbytes_per_sec": 0, 00:15:30.684 "r_mbytes_per_sec": 0, 00:15:30.684 "w_mbytes_per_sec": 0 00:15:30.684 }, 00:15:30.684 "claimed": true, 00:15:30.684 "claim_type": "exclusive_write", 00:15:30.684 "zoned": false, 00:15:30.684 "supported_io_types": { 00:15:30.684 "read": true, 00:15:30.684 "write": true, 00:15:30.684 "unmap": true, 00:15:30.684 "flush": true, 00:15:30.684 "reset": true, 00:15:30.684 "nvme_admin": false, 00:15:30.684 "nvme_io": false, 00:15:30.684 "nvme_io_md": false, 00:15:30.684 "write_zeroes": true, 00:15:30.684 "zcopy": true, 00:15:30.684 "get_zone_info": false, 00:15:30.684 "zone_management": false, 00:15:30.684 "zone_append": false, 00:15:30.684 "compare": false, 00:15:30.684 "compare_and_write": false, 00:15:30.684 "abort": true, 00:15:30.684 "seek_hole": false, 00:15:30.684 "seek_data": false, 00:15:30.684 "copy": true, 00:15:30.684 "nvme_iov_md": false 00:15:30.684 }, 00:15:30.684 "memory_domains": [ 00:15:30.684 { 00:15:30.684 "dma_device_id": "system", 00:15:30.684 "dma_device_type": 1 00:15:30.684 }, 00:15:30.684 { 00:15:30.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.684 "dma_device_type": 2 00:15:30.684 } 00:15:30.684 ], 00:15:30.684 "driver_specific": { 00:15:30.684 "passthru": { 00:15:30.684 "name": "pt2", 00:15:30.684 "base_bdev_name": "malloc2" 00:15:30.684 } 00:15:30.684 } 00:15:30.684 }' 00:15:30.684 04:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:30.684 04:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:30.684 04:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:30.684 04:10:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:30.684 04:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:30.684 04:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:30.684 04:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:30.684 04:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:30.943 04:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:30.943 04:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:30.943 04:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:30.943 04:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:30.943 04:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:30.943 04:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:15:31.201 [2024-07-23 04:10:39.759512] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:31.201 04:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=e3d2e152-9fbe-4fe4-a492-66f1446e3375 00:15:31.201 04:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z e3d2e152-9fbe-4fe4-a492-66f1446e3375 ']' 00:15:31.201 04:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:31.493 [2024-07-23 04:10:39.987789] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:31.493 [2024-07-23 04:10:39.987821] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from 
online to offline 00:15:31.493 [2024-07-23 04:10:39.987909] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:31.493 [2024-07-23 04:10:39.987971] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:31.493 [2024-07-23 04:10:39.987995] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040880 name raid_bdev1, state offline 00:15:31.493 04:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:31.493 04:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:15:31.493 04:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:15:31.493 04:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:15:31.493 04:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:31.493 04:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:31.752 04:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:31.752 04:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:32.010 04:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:32.010 04:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:32.270 04:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:15:32.270 
04:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:32.270 04:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:15:32.270 04:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:32.270 04:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:32.270 04:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:32.270 04:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:32.270 04:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:32.270 04:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:32.270 04:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:32.270 04:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:32.270 04:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:15:32.270 04:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' 
-n raid_bdev1 00:15:32.529 [2024-07-23 04:10:41.106786] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:32.529 [2024-07-23 04:10:41.109114] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:32.529 [2024-07-23 04:10:41.109197] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:32.529 [2024-07-23 04:10:41.109253] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:32.529 [2024-07-23 04:10:41.109281] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:32.529 [2024-07-23 04:10:41.109298] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040e80 name raid_bdev1, state configuring 00:15:32.529 request: 00:15:32.529 { 00:15:32.529 "name": "raid_bdev1", 00:15:32.529 "raid_level": "concat", 00:15:32.529 "base_bdevs": [ 00:15:32.529 "malloc1", 00:15:32.529 "malloc2" 00:15:32.529 ], 00:15:32.529 "strip_size_kb": 64, 00:15:32.529 "superblock": false, 00:15:32.529 "method": "bdev_raid_create", 00:15:32.529 "req_id": 1 00:15:32.529 } 00:15:32.529 Got JSON-RPC error response 00:15:32.529 response: 00:15:32.529 { 00:15:32.529 "code": -17, 00:15:32.529 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:32.529 } 00:15:32.529 04:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:15:32.529 04:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:32.529 04:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:32.529 04:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:32.529 04:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:32.529 04:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:15:32.529 04:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:15:32.529 04:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:15:32.529 04:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:32.788 [2024-07-23 04:10:41.507775] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:32.788 [2024-07-23 04:10:41.507835] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:32.788 [2024-07-23 04:10:41.507862] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480 00:15:32.788 [2024-07-23 04:10:41.507879] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:32.788 [2024-07-23 04:10:41.510775] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:32.788 [2024-07-23 04:10:41.510815] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:32.788 [2024-07-23 04:10:41.510907] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:32.788 [2024-07-23 04:10:41.510998] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:32.788 pt1 00:15:32.788 04:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:15:32.788 04:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:32.788 04:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:32.788 04:10:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:32.788 04:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:32.788 04:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:32.788 04:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:32.788 04:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:32.788 04:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:32.788 04:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:32.788 04:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.788 04:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:33.047 04:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:33.047 "name": "raid_bdev1", 00:15:33.047 "uuid": "e3d2e152-9fbe-4fe4-a492-66f1446e3375", 00:15:33.047 "strip_size_kb": 64, 00:15:33.047 "state": "configuring", 00:15:33.047 "raid_level": "concat", 00:15:33.047 "superblock": true, 00:15:33.047 "num_base_bdevs": 2, 00:15:33.047 "num_base_bdevs_discovered": 1, 00:15:33.047 "num_base_bdevs_operational": 2, 00:15:33.047 "base_bdevs_list": [ 00:15:33.047 { 00:15:33.047 "name": "pt1", 00:15:33.047 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:33.047 "is_configured": true, 00:15:33.047 "data_offset": 2048, 00:15:33.047 "data_size": 63488 00:15:33.047 }, 00:15:33.047 { 00:15:33.047 "name": null, 00:15:33.047 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:33.047 "is_configured": false, 00:15:33.047 "data_offset": 2048, 00:15:33.047 "data_size": 63488 00:15:33.047 } 00:15:33.047 ] 00:15:33.047 }' 00:15:33.047 
04:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:33.048 04:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.615 04:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:15:33.615 04:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:15:33.615 04:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:33.616 04:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:33.875 [2024-07-23 04:10:42.510497] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:33.875 [2024-07-23 04:10:42.510566] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.875 [2024-07-23 04:10:42.510592] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041d80 00:15:33.875 [2024-07-23 04:10:42.510610] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.875 [2024-07-23 04:10:42.511222] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.875 [2024-07-23 04:10:42.511265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:33.875 [2024-07-23 04:10:42.511384] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:33.875 [2024-07-23 04:10:42.511425] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:33.875 [2024-07-23 04:10:42.511598] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041a80 00:15:33.875 [2024-07-23 04:10:42.511616] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:33.875 [2024-07-23 04:10:42.511914] 
bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:15:33.875 [2024-07-23 04:10:42.512137] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041a80 00:15:33.875 [2024-07-23 04:10:42.512160] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041a80 00:15:33.875 [2024-07-23 04:10:42.512364] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:33.875 pt2 00:15:33.875 04:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:15:33.875 04:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:33.875 04:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:33.875 04:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:33.875 04:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:33.875 04:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:33.875 04:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:33.875 04:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:33.875 04:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:33.875 04:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:33.875 04:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:33.875 04:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:33.875 04:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.875 04:10:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:34.134 04:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:34.134 "name": "raid_bdev1", 00:15:34.134 "uuid": "e3d2e152-9fbe-4fe4-a492-66f1446e3375", 00:15:34.134 "strip_size_kb": 64, 00:15:34.134 "state": "online", 00:15:34.134 "raid_level": "concat", 00:15:34.134 "superblock": true, 00:15:34.134 "num_base_bdevs": 2, 00:15:34.134 "num_base_bdevs_discovered": 2, 00:15:34.134 "num_base_bdevs_operational": 2, 00:15:34.134 "base_bdevs_list": [ 00:15:34.134 { 00:15:34.134 "name": "pt1", 00:15:34.134 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:34.134 "is_configured": true, 00:15:34.134 "data_offset": 2048, 00:15:34.134 "data_size": 63488 00:15:34.134 }, 00:15:34.134 { 00:15:34.134 "name": "pt2", 00:15:34.134 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:34.134 "is_configured": true, 00:15:34.134 "data_offset": 2048, 00:15:34.134 "data_size": 63488 00:15:34.134 } 00:15:34.134 ] 00:15:34.134 }' 00:15:34.134 04:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:34.134 04:10:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.737 04:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:15:34.737 04:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:34.737 04:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:34.737 04:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:34.738 04:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:34.738 04:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:34.738 04:10:43 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:34.738 04:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:34.738 [2024-07-23 04:10:43.517563] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:34.998 04:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:34.998 "name": "raid_bdev1", 00:15:34.998 "aliases": [ 00:15:34.998 "e3d2e152-9fbe-4fe4-a492-66f1446e3375" 00:15:34.998 ], 00:15:34.998 "product_name": "Raid Volume", 00:15:34.998 "block_size": 512, 00:15:34.998 "num_blocks": 126976, 00:15:34.998 "uuid": "e3d2e152-9fbe-4fe4-a492-66f1446e3375", 00:15:34.998 "assigned_rate_limits": { 00:15:34.998 "rw_ios_per_sec": 0, 00:15:34.998 "rw_mbytes_per_sec": 0, 00:15:34.998 "r_mbytes_per_sec": 0, 00:15:34.998 "w_mbytes_per_sec": 0 00:15:34.998 }, 00:15:34.998 "claimed": false, 00:15:34.998 "zoned": false, 00:15:34.998 "supported_io_types": { 00:15:34.998 "read": true, 00:15:34.998 "write": true, 00:15:34.998 "unmap": true, 00:15:34.998 "flush": true, 00:15:34.998 "reset": true, 00:15:34.998 "nvme_admin": false, 00:15:34.998 "nvme_io": false, 00:15:34.998 "nvme_io_md": false, 00:15:34.998 "write_zeroes": true, 00:15:34.998 "zcopy": false, 00:15:34.998 "get_zone_info": false, 00:15:34.998 "zone_management": false, 00:15:34.998 "zone_append": false, 00:15:34.998 "compare": false, 00:15:34.998 "compare_and_write": false, 00:15:34.998 "abort": false, 00:15:34.998 "seek_hole": false, 00:15:34.998 "seek_data": false, 00:15:34.998 "copy": false, 00:15:34.998 "nvme_iov_md": false 00:15:34.998 }, 00:15:34.998 "memory_domains": [ 00:15:34.998 { 00:15:34.998 "dma_device_id": "system", 00:15:34.998 "dma_device_type": 1 00:15:34.998 }, 00:15:34.998 { 00:15:34.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.998 "dma_device_type": 2 00:15:34.998 }, 00:15:34.998 { 
00:15:34.998 "dma_device_id": "system", 00:15:34.998 "dma_device_type": 1 00:15:34.998 }, 00:15:34.998 { 00:15:34.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.998 "dma_device_type": 2 00:15:34.998 } 00:15:34.998 ], 00:15:34.998 "driver_specific": { 00:15:34.998 "raid": { 00:15:34.998 "uuid": "e3d2e152-9fbe-4fe4-a492-66f1446e3375", 00:15:34.998 "strip_size_kb": 64, 00:15:34.998 "state": "online", 00:15:34.998 "raid_level": "concat", 00:15:34.998 "superblock": true, 00:15:34.998 "num_base_bdevs": 2, 00:15:34.998 "num_base_bdevs_discovered": 2, 00:15:34.998 "num_base_bdevs_operational": 2, 00:15:34.998 "base_bdevs_list": [ 00:15:34.998 { 00:15:34.998 "name": "pt1", 00:15:34.998 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:34.998 "is_configured": true, 00:15:34.998 "data_offset": 2048, 00:15:34.998 "data_size": 63488 00:15:34.998 }, 00:15:34.998 { 00:15:34.998 "name": "pt2", 00:15:34.998 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:34.998 "is_configured": true, 00:15:34.998 "data_offset": 2048, 00:15:34.998 "data_size": 63488 00:15:34.998 } 00:15:34.998 ] 00:15:34.998 } 00:15:34.998 } 00:15:34.998 }' 00:15:34.998 04:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:34.998 04:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:34.998 pt2' 00:15:34.998 04:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:34.998 04:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:34.998 04:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:35.257 04:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:35.257 "name": "pt1", 00:15:35.257 "aliases": [ 00:15:35.257 
"00000000-0000-0000-0000-000000000001" 00:15:35.257 ], 00:15:35.257 "product_name": "passthru", 00:15:35.257 "block_size": 512, 00:15:35.257 "num_blocks": 65536, 00:15:35.257 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:35.257 "assigned_rate_limits": { 00:15:35.257 "rw_ios_per_sec": 0, 00:15:35.257 "rw_mbytes_per_sec": 0, 00:15:35.257 "r_mbytes_per_sec": 0, 00:15:35.257 "w_mbytes_per_sec": 0 00:15:35.257 }, 00:15:35.257 "claimed": true, 00:15:35.257 "claim_type": "exclusive_write", 00:15:35.257 "zoned": false, 00:15:35.257 "supported_io_types": { 00:15:35.257 "read": true, 00:15:35.257 "write": true, 00:15:35.257 "unmap": true, 00:15:35.257 "flush": true, 00:15:35.257 "reset": true, 00:15:35.257 "nvme_admin": false, 00:15:35.257 "nvme_io": false, 00:15:35.257 "nvme_io_md": false, 00:15:35.257 "write_zeroes": true, 00:15:35.257 "zcopy": true, 00:15:35.257 "get_zone_info": false, 00:15:35.257 "zone_management": false, 00:15:35.257 "zone_append": false, 00:15:35.257 "compare": false, 00:15:35.257 "compare_and_write": false, 00:15:35.257 "abort": true, 00:15:35.257 "seek_hole": false, 00:15:35.257 "seek_data": false, 00:15:35.257 "copy": true, 00:15:35.257 "nvme_iov_md": false 00:15:35.257 }, 00:15:35.257 "memory_domains": [ 00:15:35.257 { 00:15:35.257 "dma_device_id": "system", 00:15:35.257 "dma_device_type": 1 00:15:35.257 }, 00:15:35.257 { 00:15:35.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.257 "dma_device_type": 2 00:15:35.257 } 00:15:35.257 ], 00:15:35.257 "driver_specific": { 00:15:35.257 "passthru": { 00:15:35.257 "name": "pt1", 00:15:35.257 "base_bdev_name": "malloc1" 00:15:35.257 } 00:15:35.257 } 00:15:35.257 }' 00:15:35.257 04:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:35.257 04:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:35.257 04:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:35.257 04:10:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:35.257 04:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:35.257 04:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:35.257 04:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:35.257 04:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:35.516 04:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:35.516 04:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:35.516 04:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:35.516 04:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:35.516 04:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:35.516 04:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:35.516 04:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:35.776 04:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:35.776 "name": "pt2", 00:15:35.776 "aliases": [ 00:15:35.776 "00000000-0000-0000-0000-000000000002" 00:15:35.776 ], 00:15:35.776 "product_name": "passthru", 00:15:35.776 "block_size": 512, 00:15:35.776 "num_blocks": 65536, 00:15:35.776 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:35.776 "assigned_rate_limits": { 00:15:35.776 "rw_ios_per_sec": 0, 00:15:35.776 "rw_mbytes_per_sec": 0, 00:15:35.776 "r_mbytes_per_sec": 0, 00:15:35.776 "w_mbytes_per_sec": 0 00:15:35.776 }, 00:15:35.776 "claimed": true, 00:15:35.776 "claim_type": "exclusive_write", 00:15:35.776 "zoned": false, 00:15:35.776 "supported_io_types": { 
00:15:35.776 "read": true, 00:15:35.776 "write": true, 00:15:35.776 "unmap": true, 00:15:35.776 "flush": true, 00:15:35.776 "reset": true, 00:15:35.776 "nvme_admin": false, 00:15:35.776 "nvme_io": false, 00:15:35.776 "nvme_io_md": false, 00:15:35.776 "write_zeroes": true, 00:15:35.776 "zcopy": true, 00:15:35.776 "get_zone_info": false, 00:15:35.776 "zone_management": false, 00:15:35.776 "zone_append": false, 00:15:35.776 "compare": false, 00:15:35.776 "compare_and_write": false, 00:15:35.776 "abort": true, 00:15:35.776 "seek_hole": false, 00:15:35.776 "seek_data": false, 00:15:35.776 "copy": true, 00:15:35.776 "nvme_iov_md": false 00:15:35.776 }, 00:15:35.776 "memory_domains": [ 00:15:35.776 { 00:15:35.776 "dma_device_id": "system", 00:15:35.776 "dma_device_type": 1 00:15:35.776 }, 00:15:35.776 { 00:15:35.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.776 "dma_device_type": 2 00:15:35.776 } 00:15:35.776 ], 00:15:35.776 "driver_specific": { 00:15:35.776 "passthru": { 00:15:35.776 "name": "pt2", 00:15:35.776 "base_bdev_name": "malloc2" 00:15:35.776 } 00:15:35.776 } 00:15:35.776 }' 00:15:35.776 04:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:35.776 04:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:35.776 04:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:35.776 04:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:35.776 04:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:35.776 04:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:35.776 04:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:36.035 04:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:36.035 04:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 
00:15:36.035 04:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:36.035 04:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:36.035 04:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:36.035 04:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:36.035 04:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:15:36.295 [2024-07-23 04:10:44.825103] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:36.295 04:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' e3d2e152-9fbe-4fe4-a492-66f1446e3375 '!=' e3d2e152-9fbe-4fe4-a492-66f1446e3375 ']' 00:15:36.295 04:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:15:36.295 04:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:36.295 04:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:36.295 04:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2627988 00:15:36.295 04:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 2627988 ']' 00:15:36.295 04:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 2627988 00:15:36.295 04:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:15:36.295 04:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:36.295 04:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2627988 00:15:36.295 04:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:36.295 04:10:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:36.295 04:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2627988' 00:15:36.295 killing process with pid 2627988 00:15:36.295 04:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 2627988 00:15:36.295 [2024-07-23 04:10:44.888313] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:36.295 [2024-07-23 04:10:44.888415] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:36.295 [2024-07-23 04:10:44.888474] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:36.295 04:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 2627988 00:15:36.295 [2024-07-23 04:10:44.888494] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041a80 name raid_bdev1, state offline 00:15:36.554 [2024-07-23 04:10:45.081060] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:38.461 04:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:15:38.461 00:15:38.461 real 0m11.707s 00:15:38.461 user 0m19.129s 00:15:38.461 sys 0m1.968s 00:15:38.461 04:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:38.461 04:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.461 ************************************ 00:15:38.461 END TEST raid_superblock_test 00:15:38.461 ************************************ 00:15:38.461 04:10:46 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:38.461 04:10:46 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:15:38.462 04:10:46 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:38.462 04:10:46 bdev_raid -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:15:38.462 04:10:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:38.462 ************************************ 00:15:38.462 START TEST raid_read_error_test 00:15:38.462 ************************************ 00:15:38.462 04:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 2 read 00:15:38.462 04:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:15:38.462 04:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:15:38.462 04:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:15:38.462 04:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:15:38.462 04:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:38.462 04:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:15:38.462 04:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:38.462 04:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:38.462 04:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:15:38.462 04:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:38.462 04:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:38.462 04:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:38.462 04:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:15:38.462 04:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:15:38.462 04:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:15:38.462 04:10:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:15:38.462 04:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:15:38.462 04:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:15:38.462 04:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:15:38.462 04:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:15:38.462 04:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:15:38.462 04:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:15:38.462 04:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.oufUkJlqMJ 00:15:38.462 04:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2630298 00:15:38.462 04:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2630298 /var/tmp/spdk-raid.sock 00:15:38.462 04:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:38.462 04:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 2630298 ']' 00:15:38.462 04:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:38.462 04:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:38.462 04:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:38.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
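The waitforlisten step above blocks until the backgrounded bdevperf process exposes its RPC socket at /var/tmp/spdk-raid.sock. A minimal sketch of that polling idea (an assumption, not the actual autotest_common.sh implementation; the retry count and sleep interval are illustrative):

```shell
# Hedged sketch: poll until a UNIX domain socket appears, then succeed.
# The real waitforlisten does more (RPC probing, pid liveness checks);
# this captures only the wait-for-socket loop.
waitforlisten_sketch() {
    local sock=$1
    local max_retries=${2:-100}
    local i=0
    while [ ! -S "$sock" ] && [ "$i" -lt "$max_retries" ]; do
        sleep 0.1           # brief back-off between polls
        i=$((i + 1))
    done
    [ -S "$sock" ]          # exit status reports whether the socket showed up
}
```

With bdevperf started in the background, something like `waitforlisten_sketch /var/tmp/spdk-raid.sock` would gate the first rpc.py call.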
00:15:38.462 04:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:38.462 04:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.462 [2024-07-23 04:10:46.976669] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:15:38.462 [2024-07-23 04:10:46.976796] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2630298 ] 00:15:38.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:38.462 EAL: Requested device 0000:3d:01.0 cannot be used
[identical qat_pci_device_allocate()/EAL "cannot be used" message pairs repeated for the remaining QAT VFs, 0000:3d:01.1 through 0000:3d:02.7 and 0000:3f:01.0 through 0000:3f:02.7]
00:15:38.462 [2024-07-23 04:10:47.189923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.722 [2024-07-23 04:10:47.472870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.290 [2024-07-23 04:10:47.811340] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:39.290 [2024-07-23 04:10:47.811377] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:39.290 04:10:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:39.290 04:10:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:15:39.290 04:10:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:39.290 04:10:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- #
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:39.549 BaseBdev1_malloc 00:15:39.549 04:10:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:39.808 true 00:15:39.808 04:10:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:40.068 [2024-07-23 04:10:48.712035] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:40.068 [2024-07-23 04:10:48.712093] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.068 [2024-07-23 04:10:48.712120] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:15:40.068 [2024-07-23 04:10:48.712149] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.068 [2024-07-23 04:10:48.714944] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.068 [2024-07-23 04:10:48.714983] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:40.068 BaseBdev1 00:15:40.068 04:10:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:40.068 04:10:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:40.328 BaseBdev2_malloc 00:15:40.328 04:10:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:40.587 true 00:15:40.587 04:10:49 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:40.847 [2024-07-23 04:10:49.432856] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:40.847 [2024-07-23 04:10:49.432916] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.847 [2024-07-23 04:10:49.432942] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:15:40.847 [2024-07-23 04:10:49.432963] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.847 [2024-07-23 04:10:49.435767] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.847 [2024-07-23 04:10:49.435806] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:40.847 BaseBdev2 00:15:40.847 04:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:15:41.106 [2024-07-23 04:10:49.657538] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:41.106 [2024-07-23 04:10:49.659910] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:41.106 [2024-07-23 04:10:49.660167] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000040e80 00:15:41.106 [2024-07-23 04:10:49.660191] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:41.106 [2024-07-23 04:10:49.660554] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:15:41.106 [2024-07-23 04:10:49.660808] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000040e80 00:15:41.106 [2024-07-23 
04:10:49.660825] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000040e80 00:15:41.106 [2024-07-23 04:10:49.661036] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:41.106 04:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:41.106 04:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:41.106 04:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:41.106 04:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:41.106 04:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:41.106 04:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:41.106 04:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:41.106 04:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:41.106 04:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:41.106 04:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:41.106 04:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.106 04:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:41.365 04:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:41.365 "name": "raid_bdev1", 00:15:41.365 "uuid": "03b8066b-315d-46a4-9d97-ce2d603265e7", 00:15:41.365 "strip_size_kb": 64, 00:15:41.365 "state": "online", 00:15:41.365 "raid_level": "concat", 00:15:41.365 
"superblock": true, 00:15:41.365 "num_base_bdevs": 2, 00:15:41.365 "num_base_bdevs_discovered": 2, 00:15:41.365 "num_base_bdevs_operational": 2, 00:15:41.365 "base_bdevs_list": [ 00:15:41.365 { 00:15:41.365 "name": "BaseBdev1", 00:15:41.365 "uuid": "dc16fb34-3945-5055-b3cb-6d8337ea28b2", 00:15:41.365 "is_configured": true, 00:15:41.365 "data_offset": 2048, 00:15:41.365 "data_size": 63488 00:15:41.365 }, 00:15:41.365 { 00:15:41.365 "name": "BaseBdev2", 00:15:41.365 "uuid": "b33c6a2b-e1a1-5eb8-aae9-1cda2afab3ec", 00:15:41.365 "is_configured": true, 00:15:41.365 "data_offset": 2048, 00:15:41.365 "data_size": 63488 00:15:41.365 } 00:15:41.365 ] 00:15:41.365 }' 00:15:41.365 04:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:41.365 04:10:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.934 04:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:41.934 04:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:15:41.934 [2024-07-23 04:10:50.509738] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:15:42.871 04:10:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:42.871 04:10:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:15:42.871 04:10:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:15:42.871 04:10:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:15:42.871 04:10:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:42.871 
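verify_raid_bdev_state above selects raid_bdev1 from the bdev_raid_get_bdevs output with jq and compares state, RAID level, strip size, and base-bdev counts against the expected values. A self-contained sketch of that check against the JSON captured in this log (python3 stands in for jq so the snippet needs no extra tooling):

```shell
# Hedged sketch of the state check; the JSON fields below are copied from
# the raid_bdev_info captured in this log.
raid_bdev_info='{"name": "raid_bdev1", "state": "online", "raid_level": "concat", "strip_size_kb": 64, "num_base_bdevs": 2, "num_base_bdevs_discovered": 2, "num_base_bdevs_operational": 2}'
state=$(printf '%s' "$raid_bdev_info" | python3 -c 'import json, sys; print(json.load(sys.stdin)["state"])')
num_discovered=$(printf '%s' "$raid_bdev_info" | python3 -c 'import json, sys; print(json.load(sys.stdin)["num_base_bdevs_discovered"])')
if [ "$state" = "online" ] && [ "$num_discovered" -eq 2 ]; then
    echo "raid_bdev1 state verified"
fi
```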
04:10:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:42.871 04:10:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:42.871 04:10:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:42.871 04:10:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:42.871 04:10:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:42.871 04:10:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:42.871 04:10:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:42.871 04:10:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:42.871 04:10:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:42.871 04:10:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:42.871 04:10:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.131 04:10:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:43.131 "name": "raid_bdev1", 00:15:43.131 "uuid": "03b8066b-315d-46a4-9d97-ce2d603265e7", 00:15:43.131 "strip_size_kb": 64, 00:15:43.131 "state": "online", 00:15:43.131 "raid_level": "concat", 00:15:43.131 "superblock": true, 00:15:43.131 "num_base_bdevs": 2, 00:15:43.131 "num_base_bdevs_discovered": 2, 00:15:43.131 "num_base_bdevs_operational": 2, 00:15:43.131 "base_bdevs_list": [ 00:15:43.131 { 00:15:43.131 "name": "BaseBdev1", 00:15:43.131 "uuid": "dc16fb34-3945-5055-b3cb-6d8337ea28b2", 00:15:43.131 "is_configured": true, 00:15:43.131 "data_offset": 2048, 00:15:43.131 "data_size": 63488 00:15:43.131 
}, 00:15:43.131 { 00:15:43.131 "name": "BaseBdev2", 00:15:43.131 "uuid": "b33c6a2b-e1a1-5eb8-aae9-1cda2afab3ec", 00:15:43.131 "is_configured": true, 00:15:43.131 "data_offset": 2048, 00:15:43.131 "data_size": 63488 00:15:43.131 } 00:15:43.131 ] 00:15:43.131 }' 00:15:43.131 04:10:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:43.131 04:10:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.069 04:10:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:44.069 [2024-07-23 04:10:52.692593] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:44.069 [2024-07-23 04:10:52.692636] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:44.069 [2024-07-23 04:10:52.695894] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:44.069 [2024-07-23 04:10:52.695945] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:44.069 [2024-07-23 04:10:52.695986] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:44.069 [2024-07-23 04:10:52.696012] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040e80 name raid_bdev1, state offline 00:15:44.069 0 00:15:44.069 04:10:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2630298 00:15:44.069 04:10:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 2630298 ']' 00:15:44.069 04:10:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 2630298 00:15:44.069 04:10:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:15:44.069 04:10:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:44.069 
04:10:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2630298 00:15:44.069 04:10:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:44.069 04:10:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:44.069 04:10:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2630298' 00:15:44.069 killing process with pid 2630298 00:15:44.069 04:10:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 2630298 00:15:44.069 [2024-07-23 04:10:52.767357] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:44.069 04:10:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 2630298 00:15:44.329 [2024-07-23 04:10:52.874677] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:46.234 04:10:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.oufUkJlqMJ 00:15:46.234 04:10:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:15:46.234 04:10:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:15:46.234 04:10:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.46 00:15:46.234 04:10:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:15:46.234 04:10:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:46.234 04:10:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:46.234 04:10:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.46 != \0\.\0\0 ]] 00:15:46.234 00:15:46.234 real 0m7.867s 00:15:46.234 user 0m10.982s 00:15:46.234 sys 0m1.120s 00:15:46.234 04:10:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:46.234 04:10:54 
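The fail_per_s extraction above greps the bdevperf log for the raid_bdev1 result row and takes field 6 with awk. A replay of that pipeline on a hypothetical log line (the real /raidtest/tmp.* column layout is assumed, not verified; only the field position matters here):

```shell
# Hypothetical bdevperf result line; field 6 plays the fail-per-second column.
log_line='raid_bdev1 1024.00 8.00 0.00 0.00 0.46'
fail_per_s=$(printf '%s\n' "$log_line" | grep raid_bdev1 | awk '{print $6}')
# The read-error test passes only when injected errors were actually counted:
[ "$fail_per_s" != "0.00" ] && echo "observed $fail_per_s failures/s"
```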
bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.234 ************************************ 00:15:46.234 END TEST raid_read_error_test 00:15:46.234 ************************************ 00:15:46.234 04:10:54 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:46.234 04:10:54 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:15:46.234 04:10:54 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:46.234 04:10:54 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:46.234 04:10:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:46.234 ************************************ 00:15:46.234 START TEST raid_write_error_test 00:15:46.234 ************************************ 00:15:46.234 04:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 2 write 00:15:46.234 04:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:15:46.234 04:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:15:46.234 04:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:15:46.234 04:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:15:46.234 04:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:46.234 04:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:15:46.234 04:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:46.234 04:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:46.234 04:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:15:46.234 04:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:46.234 04:10:54 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:46.234 04:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:46.234 04:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:15:46.234 04:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:15:46.234 04:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:15:46.234 04:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:15:46.234 04:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:15:46.234 04:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:15:46.234 04:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:15:46.234 04:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:15:46.234 04:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:15:46.234 04:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:15:46.234 04:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.goE9SQY7UR 00:15:46.234 04:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2631720 00:15:46.234 04:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2631720 /var/tmp/spdk-raid.sock 00:15:46.234 04:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:46.234 04:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 2631720 ']' 00:15:46.234 04:10:54 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:46.234 04:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:46.234 04:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:46.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:46.234 04:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:46.234 04:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.234 [2024-07-23 04:10:54.912965] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:15:46.234 [2024-07-23 04:10:54.913089] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2631720 ] 00:15:46.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:46.494 EAL: Requested device 0000:3d:01.0 cannot be used
[identical qat_pci_device_allocate()/EAL "cannot be used" message pairs repeated for the remaining QAT VFs, 0000:3d:01.1 through 0000:3d:02.7 and 0000:3f:01.0 through 0000:3f:02.7]
00:15:46.494 [2024-07-23 04:10:55.135387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.753 [2024-07-23 04:10:55.418649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.013 [2024-07-23 04:10:55.782327] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*:
raid_bdev_get_ctx_size 00:15:47.013 [2024-07-23 04:10:55.782362] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:47.272 04:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:47.272 04:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:15:47.272 04:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:47.272 04:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:47.530 BaseBdev1_malloc 00:15:47.530 04:10:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:47.789 true 00:15:47.789 04:10:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:47.789 [2024-07-23 04:10:56.522551] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:47.789 [2024-07-23 04:10:56.522616] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:47.789 [2024-07-23 04:10:56.522642] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:15:47.789 [2024-07-23 04:10:56.522664] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:47.789 [2024-07-23 04:10:56.525451] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:47.789 [2024-07-23 04:10:56.525488] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:47.789 BaseBdev1 00:15:47.789 04:10:56 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:47.789 04:10:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:48.049 BaseBdev2_malloc 00:15:48.049 04:10:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:48.308 true 00:15:48.308 04:10:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:48.567 [2024-07-23 04:10:57.104253] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:48.567 [2024-07-23 04:10:57.104312] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.567 [2024-07-23 04:10:57.104339] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:15:48.567 [2024-07-23 04:10:57.104360] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.567 [2024-07-23 04:10:57.107118] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.567 [2024-07-23 04:10:57.107163] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:48.567 BaseBdev2 00:15:48.567 04:10:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:15:48.567 [2024-07-23 04:10:57.316888] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:48.567 [2024-07-23 04:10:57.319286] bdev_raid.c:3288:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:15:48.567 [2024-07-23 04:10:57.319537] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000040e80 00:15:48.567 [2024-07-23 04:10:57.319559] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:48.567 [2024-07-23 04:10:57.319897] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:15:48.567 [2024-07-23 04:10:57.320174] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000040e80 00:15:48.567 [2024-07-23 04:10:57.320191] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000040e80 00:15:48.567 [2024-07-23 04:10:57.320402] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:48.567 04:10:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:48.567 04:10:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:48.567 04:10:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:48.567 04:10:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:48.567 04:10:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:48.567 04:10:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:48.567 04:10:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:48.567 04:10:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:48.567 04:10:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:48.567 04:10:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:48.567 04:10:57 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:48.567 04:10:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.827 04:10:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:48.827 "name": "raid_bdev1", 00:15:48.827 "uuid": "3a703e6e-483a-4224-af44-7cb7a9f814d1", 00:15:48.827 "strip_size_kb": 64, 00:15:48.827 "state": "online", 00:15:48.827 "raid_level": "concat", 00:15:48.827 "superblock": true, 00:15:48.827 "num_base_bdevs": 2, 00:15:48.827 "num_base_bdevs_discovered": 2, 00:15:48.827 "num_base_bdevs_operational": 2, 00:15:48.827 "base_bdevs_list": [ 00:15:48.827 { 00:15:48.827 "name": "BaseBdev1", 00:15:48.827 "uuid": "10f94150-344c-5784-ae5e-a4f14ab0b66a", 00:15:48.827 "is_configured": true, 00:15:48.827 "data_offset": 2048, 00:15:48.827 "data_size": 63488 00:15:48.827 }, 00:15:48.827 { 00:15:48.827 "name": "BaseBdev2", 00:15:48.827 "uuid": "73eb211b-cf4f-54fd-bd89-afdd2ac08839", 00:15:48.827 "is_configured": true, 00:15:48.827 "data_offset": 2048, 00:15:48.827 "data_size": 63488 00:15:48.827 } 00:15:48.827 ] 00:15:48.827 }' 00:15:48.827 04:10:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:48.827 04:10:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.395 04:10:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:15:49.395 04:10:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:49.654 [2024-07-23 04:10:58.201201] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:15:50.592 04:10:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:50.592 04:10:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:15:50.592 04:10:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:15:50.592 04:10:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:15:50.592 04:10:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:50.592 04:10:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:50.592 04:10:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:50.592 04:10:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:50.592 04:10:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:50.592 04:10:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:50.592 04:10:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:50.592 04:10:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:50.592 04:10:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:50.592 04:10:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:50.592 04:10:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:50.593 04:10:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.852 04:10:59 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:50.852 "name": "raid_bdev1", 00:15:50.852 "uuid": "3a703e6e-483a-4224-af44-7cb7a9f814d1", 00:15:50.852 "strip_size_kb": 64, 00:15:50.852 "state": "online", 00:15:50.852 "raid_level": "concat", 00:15:50.852 "superblock": true, 00:15:50.852 "num_base_bdevs": 2, 00:15:50.852 "num_base_bdevs_discovered": 2, 00:15:50.852 "num_base_bdevs_operational": 2, 00:15:50.852 "base_bdevs_list": [ 00:15:50.852 { 00:15:50.852 "name": "BaseBdev1", 00:15:50.852 "uuid": "10f94150-344c-5784-ae5e-a4f14ab0b66a", 00:15:50.852 "is_configured": true, 00:15:50.852 "data_offset": 2048, 00:15:50.852 "data_size": 63488 00:15:50.852 }, 00:15:50.852 { 00:15:50.852 "name": "BaseBdev2", 00:15:50.852 "uuid": "73eb211b-cf4f-54fd-bd89-afdd2ac08839", 00:15:50.852 "is_configured": true, 00:15:50.852 "data_offset": 2048, 00:15:50.852 "data_size": 63488 00:15:50.852 } 00:15:50.852 ] 00:15:50.852 }' 00:15:50.852 04:10:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:50.852 04:10:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.568 04:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:51.568 [2024-07-23 04:11:00.343428] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:51.568 [2024-07-23 04:11:00.343477] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:51.568 [2024-07-23 04:11:00.346752] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:51.568 [2024-07-23 04:11:00.346810] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.568 [2024-07-23 04:11:00.346850] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:51.568 [2024-07-23 04:11:00.346872] 
bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040e80 name raid_bdev1, state offline 00:15:51.568 0 00:15:51.827 04:11:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2631720 00:15:51.827 04:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 2631720 ']' 00:15:51.827 04:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 2631720 00:15:51.827 04:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:15:51.827 04:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:51.827 04:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2631720 00:15:51.827 04:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:51.827 04:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:51.827 04:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2631720' 00:15:51.827 killing process with pid 2631720 00:15:51.827 04:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 2631720 00:15:51.827 [2024-07-23 04:11:00.421454] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:51.827 04:11:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 2631720 00:15:51.827 [2024-07-23 04:11:00.525358] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:53.735 04:11:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:15:53.735 04:11:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.goE9SQY7UR 00:15:53.735 04:11:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:15:53.735 04:11:02 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@843 -- # fail_per_s=0.47 00:15:53.735 04:11:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:15:53.735 04:11:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:53.735 04:11:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:53.735 04:11:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.47 != \0\.\0\0 ]] 00:15:53.735 00:15:53.735 real 0m7.562s 00:15:53.735 user 0m10.367s 00:15:53.735 sys 0m1.120s 00:15:53.735 04:11:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:53.735 04:11:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.735 ************************************ 00:15:53.735 END TEST raid_write_error_test 00:15:53.735 ************************************ 00:15:53.735 04:11:02 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:53.735 04:11:02 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:15:53.735 04:11:02 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:15:53.735 04:11:02 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:53.735 04:11:02 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:53.735 04:11:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:53.735 ************************************ 00:15:53.735 START TEST raid_state_function_test 00:15:53.735 ************************************ 00:15:53.735 04:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 false 00:15:53.735 04:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:15:53.735 04:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:15:53.735 04:11:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:15:53.735 04:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:53.735 04:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:53.735 04:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:53.735 04:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:15:53.735 04:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:53.735 04:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:53.735 04:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:15:53.735 04:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:53.735 04:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:53.735 04:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:53.735 04:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:53.735 04:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:53.735 04:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:53.735 04:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:53.735 04:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:53.735 04:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:15:53.735 04:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:15:53.735 04:11:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:15:53.735 04:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:15:53.735 04:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2633133 00:15:53.735 04:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2633133' 00:15:53.735 Process raid pid: 2633133 00:15:53.735 04:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2633133 /var/tmp/spdk-raid.sock 00:15:53.735 04:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 2633133 ']' 00:15:53.735 04:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:53.735 04:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:53.735 04:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:53.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:53.735 04:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:53.735 04:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.735 04:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:53.995 [2024-07-23 04:11:02.643404] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:15:53.995 [2024-07-23 04:11:02.643646] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:54.255 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.255 EAL: Requested device 0000:3d:01.0 cannot be used [same qat_pci_device_allocate()/EAL message pair repeated for each remaining QAT device through 0000:3f:02.7] 00:15:54.255 [2024-07-23 04:11:03.010461] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.515 [2024-07-23 04:11:03.283765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.085 [2024-07-23 04:11:03.627362] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:55.085 [2024-07-23 04:11:03.627404] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:55.085 04:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:55.085 04:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:15:55.085 04:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:55.345 [2024-07-23 04:11:04.000967] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:55.345 [2024-07-23 04:11:04.001024] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:15:55.345 [2024-07-23 04:11:04.001039] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:55.345 [2024-07-23 04:11:04.001056] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:55.345 04:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:55.345 04:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:55.345 04:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:55.345 04:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:55.345 04:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:55.345 04:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:55.345 04:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:55.345 04:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:55.345 04:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:55.345 04:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:55.345 04:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:55.345 04:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.914 04:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:55.914 "name": "Existed_Raid", 00:15:55.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.914 "strip_size_kb": 0, 
00:15:55.914 "state": "configuring", 00:15:55.914 "raid_level": "raid1", 00:15:55.914 "superblock": false, 00:15:55.914 "num_base_bdevs": 2, 00:15:55.914 "num_base_bdevs_discovered": 0, 00:15:55.914 "num_base_bdevs_operational": 2, 00:15:55.914 "base_bdevs_list": [ 00:15:55.914 { 00:15:55.914 "name": "BaseBdev1", 00:15:55.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.914 "is_configured": false, 00:15:55.914 "data_offset": 0, 00:15:55.914 "data_size": 0 00:15:55.914 }, 00:15:55.914 { 00:15:55.914 "name": "BaseBdev2", 00:15:55.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.914 "is_configured": false, 00:15:55.914 "data_offset": 0, 00:15:55.914 "data_size": 0 00:15:55.914 } 00:15:55.914 ] 00:15:55.914 }' 00:15:55.914 04:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:55.914 04:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.484 04:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:56.743 [2024-07-23 04:11:05.268241] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:56.743 [2024-07-23 04:11:05.268283] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:15:56.743 04:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:56.743 [2024-07-23 04:11:05.484842] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:56.743 [2024-07-23 04:11:05.484890] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:56.743 [2024-07-23 04:11:05.484903] bdev.c:8190:bdev_open_ext: 
*NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:56.743 [2024-07-23 04:11:05.484920] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:56.743 04:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:57.312 [2024-07-23 04:11:06.042545] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:57.312 BaseBdev1 00:15:57.312 04:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:57.312 04:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:57.313 04:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:57.313 04:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:57.313 04:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:57.313 04:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:57.313 04:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:57.572 04:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:58.141 [ 00:15:58.141 { 00:15:58.141 "name": "BaseBdev1", 00:15:58.141 "aliases": [ 00:15:58.141 "0cb513ad-8799-4246-8539-bdb8e39fe4d2" 00:15:58.141 ], 00:15:58.141 "product_name": "Malloc disk", 00:15:58.141 "block_size": 512, 00:15:58.141 "num_blocks": 65536, 00:15:58.141 "uuid": "0cb513ad-8799-4246-8539-bdb8e39fe4d2", 00:15:58.141 
"assigned_rate_limits": { 00:15:58.141 "rw_ios_per_sec": 0, 00:15:58.141 "rw_mbytes_per_sec": 0, 00:15:58.141 "r_mbytes_per_sec": 0, 00:15:58.141 "w_mbytes_per_sec": 0 00:15:58.141 }, 00:15:58.141 "claimed": true, 00:15:58.141 "claim_type": "exclusive_write", 00:15:58.141 "zoned": false, 00:15:58.141 "supported_io_types": { 00:15:58.141 "read": true, 00:15:58.141 "write": true, 00:15:58.141 "unmap": true, 00:15:58.141 "flush": true, 00:15:58.141 "reset": true, 00:15:58.141 "nvme_admin": false, 00:15:58.141 "nvme_io": false, 00:15:58.141 "nvme_io_md": false, 00:15:58.141 "write_zeroes": true, 00:15:58.141 "zcopy": true, 00:15:58.141 "get_zone_info": false, 00:15:58.141 "zone_management": false, 00:15:58.141 "zone_append": false, 00:15:58.141 "compare": false, 00:15:58.141 "compare_and_write": false, 00:15:58.141 "abort": true, 00:15:58.141 "seek_hole": false, 00:15:58.141 "seek_data": false, 00:15:58.141 "copy": true, 00:15:58.141 "nvme_iov_md": false 00:15:58.141 }, 00:15:58.141 "memory_domains": [ 00:15:58.141 { 00:15:58.141 "dma_device_id": "system", 00:15:58.141 "dma_device_type": 1 00:15:58.141 }, 00:15:58.141 { 00:15:58.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.141 "dma_device_type": 2 00:15:58.141 } 00:15:58.141 ], 00:15:58.141 "driver_specific": {} 00:15:58.141 } 00:15:58.141 ] 00:15:58.141 04:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:58.141 04:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:58.141 04:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:58.141 04:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:58.141 04:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:58.141 04:11:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:58.141 04:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:58.141 04:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:58.141 04:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:58.141 04:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:58.141 04:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:58.141 04:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:58.141 04:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:58.401 04:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:58.401 "name": "Existed_Raid", 00:15:58.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.401 "strip_size_kb": 0, 00:15:58.401 "state": "configuring", 00:15:58.401 "raid_level": "raid1", 00:15:58.401 "superblock": false, 00:15:58.401 "num_base_bdevs": 2, 00:15:58.401 "num_base_bdevs_discovered": 1, 00:15:58.401 "num_base_bdevs_operational": 2, 00:15:58.401 "base_bdevs_list": [ 00:15:58.401 { 00:15:58.401 "name": "BaseBdev1", 00:15:58.401 "uuid": "0cb513ad-8799-4246-8539-bdb8e39fe4d2", 00:15:58.401 "is_configured": true, 00:15:58.401 "data_offset": 0, 00:15:58.401 "data_size": 65536 00:15:58.401 }, 00:15:58.401 { 00:15:58.401 "name": "BaseBdev2", 00:15:58.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.401 "is_configured": false, 00:15:58.401 "data_offset": 0, 00:15:58.401 "data_size": 0 00:15:58.401 } 00:15:58.401 ] 00:15:58.401 }' 00:15:58.401 04:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:15:58.401 04:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.338 04:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:59.338 [2024-07-23 04:11:08.080089] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:59.338 [2024-07-23 04:11:08.080157] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:15:59.338 04:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:59.598 [2024-07-23 04:11:08.308790] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:59.598 [2024-07-23 04:11:08.311160] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:59.598 [2024-07-23 04:11:08.311205] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:59.598 04:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:59.598 04:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:59.598 04:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:59.598 04:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:59.598 04:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:59.598 04:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:59.598 04:11:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:59.598 04:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:59.598 04:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:59.598 04:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:59.598 04:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:59.598 04:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:59.598 04:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:59.598 04:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.858 04:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:59.858 "name": "Existed_Raid", 00:15:59.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.858 "strip_size_kb": 0, 00:15:59.858 "state": "configuring", 00:15:59.858 "raid_level": "raid1", 00:15:59.858 "superblock": false, 00:15:59.858 "num_base_bdevs": 2, 00:15:59.858 "num_base_bdevs_discovered": 1, 00:15:59.858 "num_base_bdevs_operational": 2, 00:15:59.858 "base_bdevs_list": [ 00:15:59.858 { 00:15:59.858 "name": "BaseBdev1", 00:15:59.858 "uuid": "0cb513ad-8799-4246-8539-bdb8e39fe4d2", 00:15:59.858 "is_configured": true, 00:15:59.858 "data_offset": 0, 00:15:59.858 "data_size": 65536 00:15:59.858 }, 00:15:59.858 { 00:15:59.858 "name": "BaseBdev2", 00:15:59.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.858 "is_configured": false, 00:15:59.858 "data_offset": 0, 00:15:59.858 "data_size": 0 00:15:59.858 } 00:15:59.858 ] 00:15:59.858 }' 00:15:59.858 04:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:15:59.858 04:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.426 04:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:00.686 [2024-07-23 04:11:09.295137] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:00.686 [2024-07-23 04:11:09.295207] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:16:00.686 [2024-07-23 04:11:09.295224] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:00.686 [2024-07-23 04:11:09.295549] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:16:00.686 [2024-07-23 04:11:09.295772] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:16:00.686 [2024-07-23 04:11:09.295790] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:16:00.686 [2024-07-23 04:11:09.296116] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:00.686 BaseBdev2 00:16:00.686 04:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:00.686 04:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:00.686 04:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:00.686 04:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:00.686 04:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:00.686 04:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:00.686 04:11:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:00.945 04:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:01.514 [ 00:16:01.514 { 00:16:01.514 "name": "BaseBdev2", 00:16:01.514 "aliases": [ 00:16:01.514 "1c3b6a66-82c2-4014-9062-dd569d3474c8" 00:16:01.514 ], 00:16:01.514 "product_name": "Malloc disk", 00:16:01.514 "block_size": 512, 00:16:01.514 "num_blocks": 65536, 00:16:01.514 "uuid": "1c3b6a66-82c2-4014-9062-dd569d3474c8", 00:16:01.514 "assigned_rate_limits": { 00:16:01.514 "rw_ios_per_sec": 0, 00:16:01.514 "rw_mbytes_per_sec": 0, 00:16:01.514 "r_mbytes_per_sec": 0, 00:16:01.514 "w_mbytes_per_sec": 0 00:16:01.514 }, 00:16:01.514 "claimed": true, 00:16:01.514 "claim_type": "exclusive_write", 00:16:01.514 "zoned": false, 00:16:01.514 "supported_io_types": { 00:16:01.514 "read": true, 00:16:01.514 "write": true, 00:16:01.514 "unmap": true, 00:16:01.514 "flush": true, 00:16:01.514 "reset": true, 00:16:01.514 "nvme_admin": false, 00:16:01.514 "nvme_io": false, 00:16:01.514 "nvme_io_md": false, 00:16:01.514 "write_zeroes": true, 00:16:01.514 "zcopy": true, 00:16:01.514 "get_zone_info": false, 00:16:01.514 "zone_management": false, 00:16:01.514 "zone_append": false, 00:16:01.514 "compare": false, 00:16:01.514 "compare_and_write": false, 00:16:01.514 "abort": true, 00:16:01.514 "seek_hole": false, 00:16:01.514 "seek_data": false, 00:16:01.514 "copy": true, 00:16:01.514 "nvme_iov_md": false 00:16:01.514 }, 00:16:01.514 "memory_domains": [ 00:16:01.514 { 00:16:01.514 "dma_device_id": "system", 00:16:01.514 "dma_device_type": 1 00:16:01.514 }, 00:16:01.514 { 00:16:01.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:01.514 "dma_device_type": 2 00:16:01.514 } 00:16:01.514 ], 00:16:01.514 "driver_specific": {} 
00:16:01.514 } 00:16:01.514 ] 00:16:01.514 04:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:01.514 04:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:01.514 04:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:01.514 04:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:01.514 04:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:01.514 04:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:01.514 04:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:01.514 04:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:01.514 04:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:01.514 04:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:01.514 04:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:01.514 04:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:01.514 04:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:01.514 04:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:01.514 04:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:01.514 04:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:01.514 "name": "Existed_Raid", 00:16:01.514 "uuid": 
"e616270c-6d99-4d5e-8d13-b36c8fe62dd8", 00:16:01.514 "strip_size_kb": 0, 00:16:01.514 "state": "online", 00:16:01.514 "raid_level": "raid1", 00:16:01.514 "superblock": false, 00:16:01.515 "num_base_bdevs": 2, 00:16:01.515 "num_base_bdevs_discovered": 2, 00:16:01.515 "num_base_bdevs_operational": 2, 00:16:01.515 "base_bdevs_list": [ 00:16:01.515 { 00:16:01.515 "name": "BaseBdev1", 00:16:01.515 "uuid": "0cb513ad-8799-4246-8539-bdb8e39fe4d2", 00:16:01.515 "is_configured": true, 00:16:01.515 "data_offset": 0, 00:16:01.515 "data_size": 65536 00:16:01.515 }, 00:16:01.515 { 00:16:01.515 "name": "BaseBdev2", 00:16:01.515 "uuid": "1c3b6a66-82c2-4014-9062-dd569d3474c8", 00:16:01.515 "is_configured": true, 00:16:01.515 "data_offset": 0, 00:16:01.515 "data_size": 65536 00:16:01.515 } 00:16:01.515 ] 00:16:01.515 }' 00:16:01.515 04:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:01.515 04:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.453 04:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:02.453 04:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:02.453 04:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:02.453 04:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:02.453 04:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:02.453 04:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:02.453 04:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:02.453 04:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 
00:16:02.713 [2024-07-23 04:11:11.325050] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:02.713 04:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:02.713 "name": "Existed_Raid", 00:16:02.713 "aliases": [ 00:16:02.713 "e616270c-6d99-4d5e-8d13-b36c8fe62dd8" 00:16:02.713 ], 00:16:02.713 "product_name": "Raid Volume", 00:16:02.713 "block_size": 512, 00:16:02.713 "num_blocks": 65536, 00:16:02.713 "uuid": "e616270c-6d99-4d5e-8d13-b36c8fe62dd8", 00:16:02.713 "assigned_rate_limits": { 00:16:02.713 "rw_ios_per_sec": 0, 00:16:02.713 "rw_mbytes_per_sec": 0, 00:16:02.713 "r_mbytes_per_sec": 0, 00:16:02.713 "w_mbytes_per_sec": 0 00:16:02.713 }, 00:16:02.713 "claimed": false, 00:16:02.713 "zoned": false, 00:16:02.713 "supported_io_types": { 00:16:02.713 "read": true, 00:16:02.713 "write": true, 00:16:02.713 "unmap": false, 00:16:02.713 "flush": false, 00:16:02.713 "reset": true, 00:16:02.713 "nvme_admin": false, 00:16:02.713 "nvme_io": false, 00:16:02.713 "nvme_io_md": false, 00:16:02.713 "write_zeroes": true, 00:16:02.713 "zcopy": false, 00:16:02.713 "get_zone_info": false, 00:16:02.713 "zone_management": false, 00:16:02.713 "zone_append": false, 00:16:02.713 "compare": false, 00:16:02.713 "compare_and_write": false, 00:16:02.713 "abort": false, 00:16:02.713 "seek_hole": false, 00:16:02.713 "seek_data": false, 00:16:02.713 "copy": false, 00:16:02.713 "nvme_iov_md": false 00:16:02.713 }, 00:16:02.713 "memory_domains": [ 00:16:02.713 { 00:16:02.713 "dma_device_id": "system", 00:16:02.713 "dma_device_type": 1 00:16:02.713 }, 00:16:02.713 { 00:16:02.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:02.713 "dma_device_type": 2 00:16:02.713 }, 00:16:02.713 { 00:16:02.713 "dma_device_id": "system", 00:16:02.713 "dma_device_type": 1 00:16:02.713 }, 00:16:02.713 { 00:16:02.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:02.713 "dma_device_type": 2 00:16:02.713 } 00:16:02.713 ], 00:16:02.713 
"driver_specific": { 00:16:02.713 "raid": { 00:16:02.713 "uuid": "e616270c-6d99-4d5e-8d13-b36c8fe62dd8", 00:16:02.713 "strip_size_kb": 0, 00:16:02.713 "state": "online", 00:16:02.713 "raid_level": "raid1", 00:16:02.713 "superblock": false, 00:16:02.713 "num_base_bdevs": 2, 00:16:02.713 "num_base_bdevs_discovered": 2, 00:16:02.713 "num_base_bdevs_operational": 2, 00:16:02.713 "base_bdevs_list": [ 00:16:02.713 { 00:16:02.713 "name": "BaseBdev1", 00:16:02.713 "uuid": "0cb513ad-8799-4246-8539-bdb8e39fe4d2", 00:16:02.713 "is_configured": true, 00:16:02.713 "data_offset": 0, 00:16:02.713 "data_size": 65536 00:16:02.713 }, 00:16:02.713 { 00:16:02.713 "name": "BaseBdev2", 00:16:02.713 "uuid": "1c3b6a66-82c2-4014-9062-dd569d3474c8", 00:16:02.713 "is_configured": true, 00:16:02.713 "data_offset": 0, 00:16:02.713 "data_size": 65536 00:16:02.713 } 00:16:02.713 ] 00:16:02.713 } 00:16:02.713 } 00:16:02.713 }' 00:16:02.713 04:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:02.713 04:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:02.713 BaseBdev2' 00:16:02.713 04:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:02.713 04:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:02.713 04:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:02.972 04:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:02.972 "name": "BaseBdev1", 00:16:02.972 "aliases": [ 00:16:02.972 "0cb513ad-8799-4246-8539-bdb8e39fe4d2" 00:16:02.972 ], 00:16:02.972 "product_name": "Malloc disk", 00:16:02.972 "block_size": 512, 00:16:02.972 "num_blocks": 65536, 00:16:02.972 
"uuid": "0cb513ad-8799-4246-8539-bdb8e39fe4d2", 00:16:02.972 "assigned_rate_limits": { 00:16:02.972 "rw_ios_per_sec": 0, 00:16:02.972 "rw_mbytes_per_sec": 0, 00:16:02.972 "r_mbytes_per_sec": 0, 00:16:02.972 "w_mbytes_per_sec": 0 00:16:02.972 }, 00:16:02.972 "claimed": true, 00:16:02.972 "claim_type": "exclusive_write", 00:16:02.972 "zoned": false, 00:16:02.972 "supported_io_types": { 00:16:02.972 "read": true, 00:16:02.972 "write": true, 00:16:02.972 "unmap": true, 00:16:02.972 "flush": true, 00:16:02.972 "reset": true, 00:16:02.972 "nvme_admin": false, 00:16:02.972 "nvme_io": false, 00:16:02.972 "nvme_io_md": false, 00:16:02.972 "write_zeroes": true, 00:16:02.972 "zcopy": true, 00:16:02.972 "get_zone_info": false, 00:16:02.972 "zone_management": false, 00:16:02.972 "zone_append": false, 00:16:02.972 "compare": false, 00:16:02.972 "compare_and_write": false, 00:16:02.972 "abort": true, 00:16:02.972 "seek_hole": false, 00:16:02.972 "seek_data": false, 00:16:02.972 "copy": true, 00:16:02.972 "nvme_iov_md": false 00:16:02.972 }, 00:16:02.972 "memory_domains": [ 00:16:02.972 { 00:16:02.972 "dma_device_id": "system", 00:16:02.972 "dma_device_type": 1 00:16:02.972 }, 00:16:02.972 { 00:16:02.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:02.972 "dma_device_type": 2 00:16:02.972 } 00:16:02.972 ], 00:16:02.972 "driver_specific": {} 00:16:02.972 }' 00:16:02.972 04:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:02.972 04:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:02.972 04:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:02.972 04:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:02.972 04:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:03.231 04:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:03.231 04:11:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:03.231 04:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:03.231 04:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:03.231 04:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:03.231 04:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:03.232 04:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:03.232 04:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:03.232 04:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:03.232 04:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:03.490 04:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:03.490 "name": "BaseBdev2", 00:16:03.490 "aliases": [ 00:16:03.490 "1c3b6a66-82c2-4014-9062-dd569d3474c8" 00:16:03.490 ], 00:16:03.490 "product_name": "Malloc disk", 00:16:03.490 "block_size": 512, 00:16:03.490 "num_blocks": 65536, 00:16:03.490 "uuid": "1c3b6a66-82c2-4014-9062-dd569d3474c8", 00:16:03.490 "assigned_rate_limits": { 00:16:03.490 "rw_ios_per_sec": 0, 00:16:03.490 "rw_mbytes_per_sec": 0, 00:16:03.490 "r_mbytes_per_sec": 0, 00:16:03.490 "w_mbytes_per_sec": 0 00:16:03.490 }, 00:16:03.490 "claimed": true, 00:16:03.490 "claim_type": "exclusive_write", 00:16:03.490 "zoned": false, 00:16:03.490 "supported_io_types": { 00:16:03.490 "read": true, 00:16:03.490 "write": true, 00:16:03.490 "unmap": true, 00:16:03.490 "flush": true, 00:16:03.490 "reset": true, 00:16:03.490 "nvme_admin": false, 00:16:03.490 "nvme_io": false, 00:16:03.490 "nvme_io_md": false, 
00:16:03.490 "write_zeroes": true, 00:16:03.490 "zcopy": true, 00:16:03.490 "get_zone_info": false, 00:16:03.490 "zone_management": false, 00:16:03.490 "zone_append": false, 00:16:03.490 "compare": false, 00:16:03.490 "compare_and_write": false, 00:16:03.490 "abort": true, 00:16:03.490 "seek_hole": false, 00:16:03.490 "seek_data": false, 00:16:03.490 "copy": true, 00:16:03.490 "nvme_iov_md": false 00:16:03.490 }, 00:16:03.490 "memory_domains": [ 00:16:03.490 { 00:16:03.490 "dma_device_id": "system", 00:16:03.490 "dma_device_type": 1 00:16:03.490 }, 00:16:03.490 { 00:16:03.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.490 "dma_device_type": 2 00:16:03.490 } 00:16:03.490 ], 00:16:03.490 "driver_specific": {} 00:16:03.490 }' 00:16:03.490 04:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:03.490 04:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:03.748 04:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:03.748 04:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:03.748 04:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:03.748 04:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:03.748 04:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:03.748 04:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:03.748 04:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:03.748 04:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:03.748 04:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:03.748 04:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:03.748 04:11:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:04.007 [2024-07-23 04:11:12.660365] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:04.007 04:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:04.007 04:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:16:04.007 04:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:04.007 04:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:04.007 04:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:16:04.007 04:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:04.007 04:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:04.008 04:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:04.008 04:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:04.008 04:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:04.008 04:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:04.008 04:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:04.008 04:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:04.008 04:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:04.008 04:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:04.008 04:11:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:04.008 04:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.267 04:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:04.267 "name": "Existed_Raid", 00:16:04.267 "uuid": "e616270c-6d99-4d5e-8d13-b36c8fe62dd8", 00:16:04.267 "strip_size_kb": 0, 00:16:04.267 "state": "online", 00:16:04.267 "raid_level": "raid1", 00:16:04.267 "superblock": false, 00:16:04.267 "num_base_bdevs": 2, 00:16:04.267 "num_base_bdevs_discovered": 1, 00:16:04.267 "num_base_bdevs_operational": 1, 00:16:04.267 "base_bdevs_list": [ 00:16:04.267 { 00:16:04.267 "name": null, 00:16:04.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.267 "is_configured": false, 00:16:04.267 "data_offset": 0, 00:16:04.267 "data_size": 65536 00:16:04.267 }, 00:16:04.267 { 00:16:04.267 "name": "BaseBdev2", 00:16:04.267 "uuid": "1c3b6a66-82c2-4014-9062-dd569d3474c8", 00:16:04.267 "is_configured": true, 00:16:04.267 "data_offset": 0, 00:16:04.267 "data_size": 65536 00:16:04.267 } 00:16:04.267 ] 00:16:04.267 }' 00:16:04.267 04:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:04.267 04:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.832 04:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:04.833 04:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:04.833 04:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:04.833 04:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r 
'.[0]["name"]' 00:16:05.091 04:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:05.091 04:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:05.091 04:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:05.091 [2024-07-23 04:11:13.851145] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:05.091 [2024-07-23 04:11:13.851254] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:05.352 [2024-07-23 04:11:13.981825] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:05.352 [2024-07-23 04:11:13.981876] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:05.352 [2024-07-23 04:11:13.981895] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:16:05.352 04:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:05.352 04:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:05.352 04:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:05.352 04:11:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:05.611 04:11:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:05.611 04:11:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:05.611 04:11:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:16:05.611 04:11:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 2633133 00:16:05.611 04:11:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 2633133 ']' 00:16:05.611 04:11:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 2633133 00:16:05.611 04:11:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:16:05.611 04:11:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:05.611 04:11:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2633133 00:16:05.611 04:11:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:05.611 04:11:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:05.611 04:11:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2633133' 00:16:05.611 killing process with pid 2633133 00:16:05.611 04:11:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 2633133 00:16:05.611 [2024-07-23 04:11:14.285920] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:05.611 04:11:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 2633133 00:16:05.611 [2024-07-23 04:11:14.310671] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:07.517 04:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:16:07.517 00:16:07.517 real 0m13.620s 00:16:07.517 user 0m22.205s 00:16:07.517 sys 0m2.333s 00:16:07.517 04:11:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:07.517 04:11:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.517 ************************************ 00:16:07.517 END TEST 
raid_state_function_test 00:16:07.517 ************************************ 00:16:07.517 04:11:16 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:07.517 04:11:16 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:16:07.517 04:11:16 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:07.517 04:11:16 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:07.517 04:11:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:07.517 ************************************ 00:16:07.517 START TEST raid_state_function_test_sb 00:16:07.517 ************************************ 00:16:07.517 04:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:16:07.517 04:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:16:07.517 04:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:16:07.517 04:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:16:07.517 04:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:07.517 04:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:07.517 04:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:07.517 04:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:16:07.517 04:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:07.517 04:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:07.517 04:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:16:07.517 04:11:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:07.517 04:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:07.517 04:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:07.517 04:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:07.517 04:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:07.517 04:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:07.517 04:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:07.517 04:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:07.517 04:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:16:07.517 04:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:16:07.517 04:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:16:07.517 04:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:16:07.517 04:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2635680 00:16:07.517 04:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2635680' 00:16:07.517 Process raid pid: 2635680 00:16:07.517 04:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:07.517 04:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2635680 /var/tmp/spdk-raid.sock 00:16:07.517 04:11:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@829 -- # '[' -z 2635680 ']' 00:16:07.517 04:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:07.517 04:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:07.517 04:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:07.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:07.518 04:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:07.518 04:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.518 [2024-07-23 04:11:16.255360] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:16:07.518 [2024-07-23 04:11:16.255477] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:07.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:07.777 EAL: Requested device 0000:3d:01.0 cannot be used 00:16:07.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:07.777 EAL: Requested device 0000:3d:01.1 cannot be used 00:16:07.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:07.777 EAL: Requested device 0000:3d:01.2 cannot be used 00:16:07.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:07.777 EAL: Requested device 0000:3d:01.3 cannot be used 00:16:07.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:07.777 EAL: Requested device 0000:3d:01.4 cannot be used 00:16:07.777 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:16:07.777 EAL: Requested device 0000:3d:01.5 cannot be used 00:16:07.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:07.777 EAL: Requested device 0000:3d:01.6 cannot be used 00:16:07.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:07.777 EAL: Requested device 0000:3d:01.7 cannot be used 00:16:07.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:07.777 EAL: Requested device 0000:3d:02.0 cannot be used 00:16:07.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:07.777 EAL: Requested device 0000:3d:02.1 cannot be used 00:16:07.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:07.777 EAL: Requested device 0000:3d:02.2 cannot be used 00:16:07.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:07.777 EAL: Requested device 0000:3d:02.3 cannot be used 00:16:07.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:07.777 EAL: Requested device 0000:3d:02.4 cannot be used 00:16:07.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:07.777 EAL: Requested device 0000:3d:02.5 cannot be used 00:16:07.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:07.777 EAL: Requested device 0000:3d:02.6 cannot be used 00:16:07.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:07.777 EAL: Requested device 0000:3d:02.7 cannot be used 00:16:07.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:07.777 EAL: Requested device 0000:3f:01.0 cannot be used 00:16:07.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:07.777 EAL: Requested device 0000:3f:01.1 cannot be used 00:16:07.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:07.777 EAL: Requested device 0000:3f:01.2 cannot be used 00:16:07.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:07.777 
EAL: Requested device 0000:3f:01.3 cannot be used 00:16:07.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:07.777 EAL: Requested device 0000:3f:01.4 cannot be used 00:16:07.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:07.777 EAL: Requested device 0000:3f:01.5 cannot be used 00:16:07.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:07.777 EAL: Requested device 0000:3f:01.6 cannot be used 00:16:07.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:07.777 EAL: Requested device 0000:3f:01.7 cannot be used 00:16:07.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:07.777 EAL: Requested device 0000:3f:02.0 cannot be used 00:16:07.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:07.777 EAL: Requested device 0000:3f:02.1 cannot be used 00:16:07.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:07.777 EAL: Requested device 0000:3f:02.2 cannot be used 00:16:07.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:07.777 EAL: Requested device 0000:3f:02.3 cannot be used 00:16:07.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:07.777 EAL: Requested device 0000:3f:02.4 cannot be used 00:16:07.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:07.777 EAL: Requested device 0000:3f:02.5 cannot be used 00:16:07.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:07.777 EAL: Requested device 0000:3f:02.6 cannot be used 00:16:07.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:07.777 EAL: Requested device 0000:3f:02.7 cannot be used 00:16:07.777 [2024-07-23 04:11:16.484927] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.036 [2024-07-23 04:11:16.770012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.635 [2024-07-23 04:11:17.123757] 
bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:08.635 [2024-07-23 04:11:17.123798] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:08.635 04:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:08.635 04:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:16:08.635 04:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:08.892 [2024-07-23 04:11:17.520069] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:08.892 [2024-07-23 04:11:17.520126] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:08.892 [2024-07-23 04:11:17.520148] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:08.892 [2024-07-23 04:11:17.520166] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:08.892 04:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:08.892 04:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:08.892 04:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:08.892 04:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:08.892 04:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:08.892 04:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:08.892 04:11:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:08.892 04:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:08.892 04:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:08.892 04:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:08.892 04:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:08.892 04:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.151 04:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:09.151 "name": "Existed_Raid", 00:16:09.151 "uuid": "d798688f-e351-4f64-97a1-32d9151a9284", 00:16:09.151 "strip_size_kb": 0, 00:16:09.151 "state": "configuring", 00:16:09.151 "raid_level": "raid1", 00:16:09.151 "superblock": true, 00:16:09.151 "num_base_bdevs": 2, 00:16:09.151 "num_base_bdevs_discovered": 0, 00:16:09.151 "num_base_bdevs_operational": 2, 00:16:09.151 "base_bdevs_list": [ 00:16:09.151 { 00:16:09.151 "name": "BaseBdev1", 00:16:09.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.151 "is_configured": false, 00:16:09.151 "data_offset": 0, 00:16:09.151 "data_size": 0 00:16:09.151 }, 00:16:09.151 { 00:16:09.151 "name": "BaseBdev2", 00:16:09.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.151 "is_configured": false, 00:16:09.151 "data_offset": 0, 00:16:09.151 "data_size": 0 00:16:09.151 } 00:16:09.151 ] 00:16:09.151 }' 00:16:09.151 04:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:09.151 04:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.086 04:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:10.087 [2024-07-23 04:11:18.811358] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:10.087 [2024-07-23 04:11:18.811402] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:16:10.087 04:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:10.345 [2024-07-23 04:11:19.032016] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:10.345 [2024-07-23 04:11:19.032061] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:10.345 [2024-07-23 04:11:19.032075] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:10.345 [2024-07-23 04:11:19.032092] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:10.345 04:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:10.603 [2024-07-23 04:11:19.310044] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:10.603 BaseBdev1 00:16:10.603 04:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:10.603 04:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:10.603 04:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:10.603 04:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 
00:16:10.603 04:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:10.603 04:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:10.603 04:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:10.861 04:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:11.120 [ 00:16:11.120 { 00:16:11.120 "name": "BaseBdev1", 00:16:11.120 "aliases": [ 00:16:11.120 "04a9b9bd-0463-420d-991f-a7cb34a1e799" 00:16:11.120 ], 00:16:11.120 "product_name": "Malloc disk", 00:16:11.120 "block_size": 512, 00:16:11.120 "num_blocks": 65536, 00:16:11.120 "uuid": "04a9b9bd-0463-420d-991f-a7cb34a1e799", 00:16:11.120 "assigned_rate_limits": { 00:16:11.120 "rw_ios_per_sec": 0, 00:16:11.120 "rw_mbytes_per_sec": 0, 00:16:11.120 "r_mbytes_per_sec": 0, 00:16:11.120 "w_mbytes_per_sec": 0 00:16:11.120 }, 00:16:11.120 "claimed": true, 00:16:11.120 "claim_type": "exclusive_write", 00:16:11.120 "zoned": false, 00:16:11.120 "supported_io_types": { 00:16:11.120 "read": true, 00:16:11.120 "write": true, 00:16:11.120 "unmap": true, 00:16:11.120 "flush": true, 00:16:11.120 "reset": true, 00:16:11.120 "nvme_admin": false, 00:16:11.120 "nvme_io": false, 00:16:11.120 "nvme_io_md": false, 00:16:11.120 "write_zeroes": true, 00:16:11.120 "zcopy": true, 00:16:11.120 "get_zone_info": false, 00:16:11.120 "zone_management": false, 00:16:11.120 "zone_append": false, 00:16:11.120 "compare": false, 00:16:11.120 "compare_and_write": false, 00:16:11.120 "abort": true, 00:16:11.120 "seek_hole": false, 00:16:11.120 "seek_data": false, 00:16:11.120 "copy": true, 00:16:11.120 "nvme_iov_md": false 00:16:11.120 }, 00:16:11.120 
"memory_domains": [ 00:16:11.120 { 00:16:11.120 "dma_device_id": "system", 00:16:11.120 "dma_device_type": 1 00:16:11.120 }, 00:16:11.120 { 00:16:11.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.120 "dma_device_type": 2 00:16:11.120 } 00:16:11.120 ], 00:16:11.120 "driver_specific": {} 00:16:11.120 } 00:16:11.120 ] 00:16:11.120 04:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:11.120 04:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:11.120 04:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:11.120 04:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:11.120 04:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:11.120 04:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:11.120 04:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:11.120 04:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:11.120 04:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:11.120 04:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:11.120 04:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:11.120 04:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:11.120 04:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:11.379 04:11:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:11.379 "name": "Existed_Raid", 00:16:11.379 "uuid": "8740051f-082d-42ae-9073-a6186bcca873", 00:16:11.379 "strip_size_kb": 0, 00:16:11.379 "state": "configuring", 00:16:11.379 "raid_level": "raid1", 00:16:11.379 "superblock": true, 00:16:11.379 "num_base_bdevs": 2, 00:16:11.379 "num_base_bdevs_discovered": 1, 00:16:11.379 "num_base_bdevs_operational": 2, 00:16:11.379 "base_bdevs_list": [ 00:16:11.379 { 00:16:11.379 "name": "BaseBdev1", 00:16:11.379 "uuid": "04a9b9bd-0463-420d-991f-a7cb34a1e799", 00:16:11.379 "is_configured": true, 00:16:11.379 "data_offset": 2048, 00:16:11.379 "data_size": 63488 00:16:11.379 }, 00:16:11.379 { 00:16:11.379 "name": "BaseBdev2", 00:16:11.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.379 "is_configured": false, 00:16:11.379 "data_offset": 0, 00:16:11.379 "data_size": 0 00:16:11.379 } 00:16:11.379 ] 00:16:11.379 }' 00:16:11.379 04:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:11.379 04:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.947 04:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:12.205 [2024-07-23 04:11:20.786077] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:12.205 [2024-07-23 04:11:20.786149] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:16:12.205 04:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:12.464 [2024-07-23 04:11:20.998755] bdev_raid.c:3288:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:16:12.464 [2024-07-23 04:11:21.001084] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:12.464 [2024-07-23 04:11:21.001131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:12.464 04:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:12.464 04:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:12.464 04:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:12.464 04:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:12.464 04:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:12.464 04:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:12.464 04:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:12.464 04:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:12.464 04:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:12.464 04:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:12.464 04:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:12.464 04:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:12.464 04:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:12.464 04:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:16:12.723 04:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:12.723 "name": "Existed_Raid", 00:16:12.723 "uuid": "1d539d05-eb10-4a1d-b4c2-50e3c479fbf1", 00:16:12.723 "strip_size_kb": 0, 00:16:12.723 "state": "configuring", 00:16:12.723 "raid_level": "raid1", 00:16:12.723 "superblock": true, 00:16:12.723 "num_base_bdevs": 2, 00:16:12.723 "num_base_bdevs_discovered": 1, 00:16:12.723 "num_base_bdevs_operational": 2, 00:16:12.723 "base_bdevs_list": [ 00:16:12.723 { 00:16:12.723 "name": "BaseBdev1", 00:16:12.723 "uuid": "04a9b9bd-0463-420d-991f-a7cb34a1e799", 00:16:12.723 "is_configured": true, 00:16:12.723 "data_offset": 2048, 00:16:12.723 "data_size": 63488 00:16:12.723 }, 00:16:12.723 { 00:16:12.723 "name": "BaseBdev2", 00:16:12.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.723 "is_configured": false, 00:16:12.723 "data_offset": 0, 00:16:12.723 "data_size": 0 00:16:12.723 } 00:16:12.723 ] 00:16:12.723 }' 00:16:12.723 04:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:12.723 04:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.290 04:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:13.550 [2024-07-23 04:11:22.088216] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:13.550 [2024-07-23 04:11:22.088504] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:16:13.550 [2024-07-23 04:11:22.088529] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:13.550 [2024-07-23 04:11:22.088867] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:16:13.550 [2024-07-23 04:11:22.089094] 
bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:16:13.550 [2024-07-23 04:11:22.089112] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:16:13.550 [2024-07-23 04:11:22.089305] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:13.550 BaseBdev2 00:16:13.550 04:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:13.550 04:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:13.550 04:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:13.550 04:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:13.550 04:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:13.550 04:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:13.550 04:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:13.550 04:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:13.809 [ 00:16:13.809 { 00:16:13.809 "name": "BaseBdev2", 00:16:13.809 "aliases": [ 00:16:13.809 "f2cd953e-107f-4e7d-87e0-a4d6fdb34007" 00:16:13.810 ], 00:16:13.810 "product_name": "Malloc disk", 00:16:13.810 "block_size": 512, 00:16:13.810 "num_blocks": 65536, 00:16:13.810 "uuid": "f2cd953e-107f-4e7d-87e0-a4d6fdb34007", 00:16:13.810 "assigned_rate_limits": { 00:16:13.810 "rw_ios_per_sec": 0, 00:16:13.810 "rw_mbytes_per_sec": 0, 00:16:13.810 "r_mbytes_per_sec": 0, 00:16:13.810 
"w_mbytes_per_sec": 0 00:16:13.810 }, 00:16:13.810 "claimed": true, 00:16:13.810 "claim_type": "exclusive_write", 00:16:13.810 "zoned": false, 00:16:13.810 "supported_io_types": { 00:16:13.810 "read": true, 00:16:13.810 "write": true, 00:16:13.810 "unmap": true, 00:16:13.810 "flush": true, 00:16:13.810 "reset": true, 00:16:13.810 "nvme_admin": false, 00:16:13.810 "nvme_io": false, 00:16:13.810 "nvme_io_md": false, 00:16:13.810 "write_zeroes": true, 00:16:13.810 "zcopy": true, 00:16:13.810 "get_zone_info": false, 00:16:13.810 "zone_management": false, 00:16:13.810 "zone_append": false, 00:16:13.810 "compare": false, 00:16:13.810 "compare_and_write": false, 00:16:13.810 "abort": true, 00:16:13.810 "seek_hole": false, 00:16:13.810 "seek_data": false, 00:16:13.810 "copy": true, 00:16:13.810 "nvme_iov_md": false 00:16:13.810 }, 00:16:13.810 "memory_domains": [ 00:16:13.810 { 00:16:13.810 "dma_device_id": "system", 00:16:13.810 "dma_device_type": 1 00:16:13.810 }, 00:16:13.810 { 00:16:13.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:13.810 "dma_device_type": 2 00:16:13.810 } 00:16:13.810 ], 00:16:13.810 "driver_specific": {} 00:16:13.810 } 00:16:13.810 ] 00:16:13.810 04:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:13.810 04:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:13.810 04:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:13.810 04:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:13.810 04:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:13.810 04:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:13.810 04:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
00:16:13.810 04:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:13.810 04:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:13.810 04:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:13.810 04:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:13.810 04:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:13.810 04:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:13.810 04:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:13.810 04:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:14.069 04:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:14.069 "name": "Existed_Raid", 00:16:14.069 "uuid": "1d539d05-eb10-4a1d-b4c2-50e3c479fbf1", 00:16:14.069 "strip_size_kb": 0, 00:16:14.069 "state": "online", 00:16:14.069 "raid_level": "raid1", 00:16:14.069 "superblock": true, 00:16:14.069 "num_base_bdevs": 2, 00:16:14.069 "num_base_bdevs_discovered": 2, 00:16:14.069 "num_base_bdevs_operational": 2, 00:16:14.069 "base_bdevs_list": [ 00:16:14.069 { 00:16:14.069 "name": "BaseBdev1", 00:16:14.069 "uuid": "04a9b9bd-0463-420d-991f-a7cb34a1e799", 00:16:14.069 "is_configured": true, 00:16:14.069 "data_offset": 2048, 00:16:14.069 "data_size": 63488 00:16:14.069 }, 00:16:14.069 { 00:16:14.069 "name": "BaseBdev2", 00:16:14.069 "uuid": "f2cd953e-107f-4e7d-87e0-a4d6fdb34007", 00:16:14.069 "is_configured": true, 00:16:14.069 "data_offset": 2048, 00:16:14.069 "data_size": 63488 00:16:14.069 } 00:16:14.069 ] 00:16:14.069 
}' 00:16:14.069 04:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:14.069 04:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.638 04:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:14.638 04:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:14.638 04:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:14.638 04:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:14.638 04:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:14.638 04:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:16:14.638 04:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:14.638 04:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:14.897 [2024-07-23 04:11:23.584658] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:14.897 04:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:14.897 "name": "Existed_Raid", 00:16:14.897 "aliases": [ 00:16:14.897 "1d539d05-eb10-4a1d-b4c2-50e3c479fbf1" 00:16:14.897 ], 00:16:14.897 "product_name": "Raid Volume", 00:16:14.897 "block_size": 512, 00:16:14.897 "num_blocks": 63488, 00:16:14.897 "uuid": "1d539d05-eb10-4a1d-b4c2-50e3c479fbf1", 00:16:14.897 "assigned_rate_limits": { 00:16:14.897 "rw_ios_per_sec": 0, 00:16:14.897 "rw_mbytes_per_sec": 0, 00:16:14.897 "r_mbytes_per_sec": 0, 00:16:14.897 "w_mbytes_per_sec": 0 00:16:14.897 }, 00:16:14.897 "claimed": false, 00:16:14.897 "zoned": false, 
00:16:14.897 "supported_io_types": { 00:16:14.897 "read": true, 00:16:14.897 "write": true, 00:16:14.897 "unmap": false, 00:16:14.897 "flush": false, 00:16:14.897 "reset": true, 00:16:14.897 "nvme_admin": false, 00:16:14.897 "nvme_io": false, 00:16:14.897 "nvme_io_md": false, 00:16:14.897 "write_zeroes": true, 00:16:14.897 "zcopy": false, 00:16:14.897 "get_zone_info": false, 00:16:14.897 "zone_management": false, 00:16:14.897 "zone_append": false, 00:16:14.897 "compare": false, 00:16:14.897 "compare_and_write": false, 00:16:14.897 "abort": false, 00:16:14.897 "seek_hole": false, 00:16:14.897 "seek_data": false, 00:16:14.897 "copy": false, 00:16:14.897 "nvme_iov_md": false 00:16:14.897 }, 00:16:14.897 "memory_domains": [ 00:16:14.897 { 00:16:14.897 "dma_device_id": "system", 00:16:14.897 "dma_device_type": 1 00:16:14.897 }, 00:16:14.897 { 00:16:14.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.897 "dma_device_type": 2 00:16:14.897 }, 00:16:14.897 { 00:16:14.897 "dma_device_id": "system", 00:16:14.897 "dma_device_type": 1 00:16:14.897 }, 00:16:14.897 { 00:16:14.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.897 "dma_device_type": 2 00:16:14.897 } 00:16:14.897 ], 00:16:14.897 "driver_specific": { 00:16:14.897 "raid": { 00:16:14.897 "uuid": "1d539d05-eb10-4a1d-b4c2-50e3c479fbf1", 00:16:14.897 "strip_size_kb": 0, 00:16:14.897 "state": "online", 00:16:14.897 "raid_level": "raid1", 00:16:14.897 "superblock": true, 00:16:14.897 "num_base_bdevs": 2, 00:16:14.897 "num_base_bdevs_discovered": 2, 00:16:14.897 "num_base_bdevs_operational": 2, 00:16:14.897 "base_bdevs_list": [ 00:16:14.897 { 00:16:14.897 "name": "BaseBdev1", 00:16:14.897 "uuid": "04a9b9bd-0463-420d-991f-a7cb34a1e799", 00:16:14.897 "is_configured": true, 00:16:14.897 "data_offset": 2048, 00:16:14.897 "data_size": 63488 00:16:14.897 }, 00:16:14.897 { 00:16:14.897 "name": "BaseBdev2", 00:16:14.897 "uuid": "f2cd953e-107f-4e7d-87e0-a4d6fdb34007", 00:16:14.897 "is_configured": true, 00:16:14.897 
"data_offset": 2048, 00:16:14.897 "data_size": 63488 00:16:14.897 } 00:16:14.897 ] 00:16:14.897 } 00:16:14.897 } 00:16:14.897 }' 00:16:14.897 04:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:14.897 04:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:14.897 BaseBdev2' 00:16:14.897 04:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:14.897 04:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:14.897 04:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:15.157 04:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:15.157 "name": "BaseBdev1", 00:16:15.157 "aliases": [ 00:16:15.157 "04a9b9bd-0463-420d-991f-a7cb34a1e799" 00:16:15.157 ], 00:16:15.157 "product_name": "Malloc disk", 00:16:15.157 "block_size": 512, 00:16:15.157 "num_blocks": 65536, 00:16:15.157 "uuid": "04a9b9bd-0463-420d-991f-a7cb34a1e799", 00:16:15.157 "assigned_rate_limits": { 00:16:15.157 "rw_ios_per_sec": 0, 00:16:15.157 "rw_mbytes_per_sec": 0, 00:16:15.157 "r_mbytes_per_sec": 0, 00:16:15.157 "w_mbytes_per_sec": 0 00:16:15.157 }, 00:16:15.157 "claimed": true, 00:16:15.157 "claim_type": "exclusive_write", 00:16:15.157 "zoned": false, 00:16:15.157 "supported_io_types": { 00:16:15.157 "read": true, 00:16:15.157 "write": true, 00:16:15.157 "unmap": true, 00:16:15.157 "flush": true, 00:16:15.157 "reset": true, 00:16:15.157 "nvme_admin": false, 00:16:15.157 "nvme_io": false, 00:16:15.157 "nvme_io_md": false, 00:16:15.157 "write_zeroes": true, 00:16:15.157 "zcopy": true, 00:16:15.157 "get_zone_info": false, 00:16:15.157 "zone_management": false, 
00:16:15.157 "zone_append": false, 00:16:15.157 "compare": false, 00:16:15.157 "compare_and_write": false, 00:16:15.157 "abort": true, 00:16:15.157 "seek_hole": false, 00:16:15.157 "seek_data": false, 00:16:15.157 "copy": true, 00:16:15.157 "nvme_iov_md": false 00:16:15.157 }, 00:16:15.157 "memory_domains": [ 00:16:15.157 { 00:16:15.157 "dma_device_id": "system", 00:16:15.157 "dma_device_type": 1 00:16:15.157 }, 00:16:15.157 { 00:16:15.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:15.157 "dma_device_type": 2 00:16:15.157 } 00:16:15.157 ], 00:16:15.157 "driver_specific": {} 00:16:15.157 }' 00:16:15.157 04:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:15.157 04:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:15.416 04:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:15.416 04:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:15.416 04:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:15.416 04:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:15.416 04:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:15.416 04:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:15.416 04:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:15.416 04:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:15.416 04:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:15.676 04:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:15.676 04:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 
00:16:15.676 04:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:15.676 04:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:15.676 04:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:15.676 "name": "BaseBdev2", 00:16:15.676 "aliases": [ 00:16:15.676 "f2cd953e-107f-4e7d-87e0-a4d6fdb34007" 00:16:15.676 ], 00:16:15.676 "product_name": "Malloc disk", 00:16:15.676 "block_size": 512, 00:16:15.676 "num_blocks": 65536, 00:16:15.676 "uuid": "f2cd953e-107f-4e7d-87e0-a4d6fdb34007", 00:16:15.676 "assigned_rate_limits": { 00:16:15.676 "rw_ios_per_sec": 0, 00:16:15.676 "rw_mbytes_per_sec": 0, 00:16:15.676 "r_mbytes_per_sec": 0, 00:16:15.676 "w_mbytes_per_sec": 0 00:16:15.676 }, 00:16:15.676 "claimed": true, 00:16:15.676 "claim_type": "exclusive_write", 00:16:15.676 "zoned": false, 00:16:15.676 "supported_io_types": { 00:16:15.676 "read": true, 00:16:15.676 "write": true, 00:16:15.676 "unmap": true, 00:16:15.676 "flush": true, 00:16:15.676 "reset": true, 00:16:15.676 "nvme_admin": false, 00:16:15.676 "nvme_io": false, 00:16:15.676 "nvme_io_md": false, 00:16:15.676 "write_zeroes": true, 00:16:15.676 "zcopy": true, 00:16:15.676 "get_zone_info": false, 00:16:15.676 "zone_management": false, 00:16:15.676 "zone_append": false, 00:16:15.676 "compare": false, 00:16:15.676 "compare_and_write": false, 00:16:15.676 "abort": true, 00:16:15.676 "seek_hole": false, 00:16:15.676 "seek_data": false, 00:16:15.676 "copy": true, 00:16:15.676 "nvme_iov_md": false 00:16:15.676 }, 00:16:15.676 "memory_domains": [ 00:16:15.676 { 00:16:15.676 "dma_device_id": "system", 00:16:15.676 "dma_device_type": 1 00:16:15.676 }, 00:16:15.676 { 00:16:15.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:15.676 "dma_device_type": 2 00:16:15.676 } 00:16:15.676 ], 00:16:15.676 
"driver_specific": {} 00:16:15.676 }' 00:16:15.676 04:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:15.935 04:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:15.935 04:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:15.935 04:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:15.935 04:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:15.935 04:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:15.935 04:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:15.935 04:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:15.935 04:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:15.935 04:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:16.195 04:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:16.195 04:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:16.195 04:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:16.454 [2024-07-23 04:11:24.988184] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:16.454 04:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:16.454 04:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:16:16.454 04:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:16.454 04:11:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:16:16.454 04:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:16:16.454 04:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:16.454 04:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:16.454 04:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:16.454 04:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:16.454 04:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:16.454 04:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:16.454 04:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:16.454 04:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:16.454 04:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:16.454 04:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:16.454 04:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:16.454 04:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:16.714 04:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:16.714 "name": "Existed_Raid", 00:16:16.714 "uuid": "1d539d05-eb10-4a1d-b4c2-50e3c479fbf1", 00:16:16.714 "strip_size_kb": 0, 00:16:16.714 "state": "online", 00:16:16.714 
"raid_level": "raid1", 00:16:16.714 "superblock": true, 00:16:16.714 "num_base_bdevs": 2, 00:16:16.714 "num_base_bdevs_discovered": 1, 00:16:16.714 "num_base_bdevs_operational": 1, 00:16:16.714 "base_bdevs_list": [ 00:16:16.714 { 00:16:16.714 "name": null, 00:16:16.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.714 "is_configured": false, 00:16:16.714 "data_offset": 2048, 00:16:16.714 "data_size": 63488 00:16:16.714 }, 00:16:16.714 { 00:16:16.714 "name": "BaseBdev2", 00:16:16.714 "uuid": "f2cd953e-107f-4e7d-87e0-a4d6fdb34007", 00:16:16.714 "is_configured": true, 00:16:16.714 "data_offset": 2048, 00:16:16.714 "data_size": 63488 00:16:16.714 } 00:16:16.714 ] 00:16:16.714 }' 00:16:16.714 04:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:16.714 04:11:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.282 04:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:17.282 04:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:17.282 04:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:17.282 04:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:17.541 04:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:17.541 04:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:17.541 04:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:17.541 [2024-07-23 04:11:26.280035] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:16:17.541 [2024-07-23 04:11:26.280175] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:17.800 [2024-07-23 04:11:26.404398] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:17.800 [2024-07-23 04:11:26.404460] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:17.800 [2024-07-23 04:11:26.404480] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:16:17.800 04:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:17.800 04:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:17.800 04:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:17.800 04:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:18.059 04:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:18.060 04:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:18.060 04:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:16:18.060 04:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 2635680 00:16:18.060 04:11:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 2635680 ']' 00:16:18.060 04:11:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 2635680 00:16:18.060 04:11:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:16:18.060 04:11:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:18.060 
04:11:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2635680 00:16:18.060 04:11:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:18.060 04:11:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:18.060 04:11:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2635680' 00:16:18.060 killing process with pid 2635680 00:16:18.060 04:11:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 2635680 00:16:18.060 [2024-07-23 04:11:26.707168] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:18.060 04:11:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 2635680 00:16:18.060 [2024-07-23 04:11:26.729683] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:19.966 04:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:16:19.966 00:16:19.966 real 0m12.222s 00:16:19.966 user 0m20.005s 00:16:19.966 sys 0m2.159s 00:16:19.966 04:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:19.966 04:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.966 ************************************ 00:16:19.966 END TEST raid_state_function_test_sb 00:16:19.966 ************************************ 00:16:19.966 04:11:28 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:19.966 04:11:28 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:16:19.966 04:11:28 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:16:19.966 04:11:28 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:19.966 04:11:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:16:19.966 ************************************ 00:16:19.966 START TEST raid_superblock_test 00:16:19.966 ************************************ 00:16:19.966 04:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:16:19.966 04:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:16:19.966 04:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:16:19.966 04:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:16:19.966 04:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:16:19.966 04:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:16:19.966 04:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:16:19.966 04:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:16:19.966 04:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:16:19.966 04:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:16:19.967 04:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:16:19.967 04:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:16:19.967 04:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:16:19.967 04:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:16:19.967 04:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:16:19.967 04:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:16:19.967 04:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=2637923 00:16:19.967 04:11:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@412 -- # waitforlisten 2637923 /var/tmp/spdk-raid.sock 00:16:19.967 04:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:19.967 04:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 2637923 ']' 00:16:19.967 04:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:19.967 04:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:19.967 04:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:19.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:19.967 04:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:19.967 04:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.967 [2024-07-23 04:11:28.557624] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:16:19.967 [2024-07-23 04:11:28.557743] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2637923 ] 00:16:19.967 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.967 EAL: Requested device 0000:3d:01.0 cannot be used 00:16:19.967 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.967 EAL: Requested device 0000:3d:01.1 cannot be used 00:16:19.967 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.967 EAL: Requested device 0000:3d:01.2 cannot be used 00:16:19.967 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.967 EAL: Requested device 0000:3d:01.3 cannot be used 00:16:19.967 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.967 EAL: Requested device 0000:3d:01.4 cannot be used 00:16:19.967 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.967 EAL: Requested device 0000:3d:01.5 cannot be used 00:16:19.967 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.967 EAL: Requested device 0000:3d:01.6 cannot be used 00:16:19.967 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.967 EAL: Requested device 0000:3d:01.7 cannot be used 00:16:19.967 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.967 EAL: Requested device 0000:3d:02.0 cannot be used 00:16:19.967 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.967 EAL: Requested device 0000:3d:02.1 cannot be used 00:16:19.967 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.967 EAL: Requested device 0000:3d:02.2 cannot be used 00:16:19.967 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.967 EAL: Requested device 0000:3d:02.3 cannot be used 
00:16:19.967 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.967 EAL: Requested device 0000:3d:02.4 cannot be used 00:16:19.967 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.967 EAL: Requested device 0000:3d:02.5 cannot be used 00:16:19.967 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.967 EAL: Requested device 0000:3d:02.6 cannot be used 00:16:19.967 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.967 EAL: Requested device 0000:3d:02.7 cannot be used 00:16:19.967 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.967 EAL: Requested device 0000:3f:01.0 cannot be used 00:16:19.967 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.967 EAL: Requested device 0000:3f:01.1 cannot be used 00:16:19.967 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.967 EAL: Requested device 0000:3f:01.2 cannot be used 00:16:19.967 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.967 EAL: Requested device 0000:3f:01.3 cannot be used 00:16:19.967 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.967 EAL: Requested device 0000:3f:01.4 cannot be used 00:16:19.967 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.967 EAL: Requested device 0000:3f:01.5 cannot be used 00:16:19.967 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.967 EAL: Requested device 0000:3f:01.6 cannot be used 00:16:19.967 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.967 EAL: Requested device 0000:3f:01.7 cannot be used 00:16:19.967 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.967 EAL: Requested device 0000:3f:02.0 cannot be used 00:16:19.967 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.967 EAL: Requested device 0000:3f:02.1 cannot be used 00:16:19.967 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.967 EAL: Requested device 0000:3f:02.2 cannot be used 00:16:19.967 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.967 EAL: Requested device 0000:3f:02.3 cannot be used 00:16:19.967 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.967 EAL: Requested device 0000:3f:02.4 cannot be used 00:16:19.967 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.967 EAL: Requested device 0000:3f:02.5 cannot be used 00:16:19.967 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.967 EAL: Requested device 0000:3f:02.6 cannot be used 00:16:19.967 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:19.967 EAL: Requested device 0000:3f:02.7 cannot be used 00:16:20.226 [2024-07-23 04:11:28.785386] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.486 [2024-07-23 04:11:29.046084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.745 [2024-07-23 04:11:29.375416] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:20.745 [2024-07-23 04:11:29.375448] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:21.005 04:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:21.005 04:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:16:21.005 04:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:16:21.005 04:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:21.005 04:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:16:21.005 04:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:16:21.005 04:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:21.005 04:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:21.005 04:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:21.005 04:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:21.005 04:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:21.264 malloc1 00:16:21.264 04:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:21.523 [2024-07-23 04:11:30.057417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:21.523 [2024-07-23 04:11:30.057482] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.523 [2024-07-23 04:11:30.057512] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:16:21.523 [2024-07-23 04:11:30.057529] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.523 [2024-07-23 04:11:30.060310] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.523 [2024-07-23 04:11:30.060345] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:21.523 pt1 00:16:21.523 04:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:21.523 04:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:21.523 04:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:16:21.523 04:11:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:16:21.523 04:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:21.523 04:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:21.523 04:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:21.523 04:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:21.523 04:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:21.782 malloc2 00:16:21.783 04:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:22.042 [2024-07-23 04:11:30.572122] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:22.042 [2024-07-23 04:11:30.572188] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.042 [2024-07-23 04:11:30.572215] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:16:22.042 [2024-07-23 04:11:30.572230] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.042 [2024-07-23 04:11:30.575003] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.042 [2024-07-23 04:11:30.575043] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:22.042 pt2 00:16:22.042 04:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:22.042 04:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:22.042 04:11:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:16:22.042 [2024-07-23 04:11:30.800762] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:22.042 [2024-07-23 04:11:30.803103] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:22.042 [2024-07-23 04:11:30.803345] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000040880 00:16:22.042 [2024-07-23 04:11:30.803369] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:22.042 [2024-07-23 04:11:30.803707] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:16:22.042 [2024-07-23 04:11:30.803957] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000040880 00:16:22.042 [2024-07-23 04:11:30.803976] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000040880 00:16:22.042 [2024-07-23 04:11:30.804201] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:22.042 04:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:22.042 04:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:22.042 04:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:22.042 04:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:22.042 04:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:22.042 04:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:22.042 04:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:16:22.042 04:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:22.042 04:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:22.042 04:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:22.042 04:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:22.042 04:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.302 04:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:22.302 "name": "raid_bdev1", 00:16:22.302 "uuid": "ecea8324-040b-4ad0-9951-e25cfe9f9967", 00:16:22.302 "strip_size_kb": 0, 00:16:22.302 "state": "online", 00:16:22.302 "raid_level": "raid1", 00:16:22.302 "superblock": true, 00:16:22.302 "num_base_bdevs": 2, 00:16:22.302 "num_base_bdevs_discovered": 2, 00:16:22.302 "num_base_bdevs_operational": 2, 00:16:22.302 "base_bdevs_list": [ 00:16:22.302 { 00:16:22.302 "name": "pt1", 00:16:22.302 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:22.302 "is_configured": true, 00:16:22.302 "data_offset": 2048, 00:16:22.302 "data_size": 63488 00:16:22.302 }, 00:16:22.302 { 00:16:22.302 "name": "pt2", 00:16:22.302 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:22.302 "is_configured": true, 00:16:22.302 "data_offset": 2048, 00:16:22.302 "data_size": 63488 00:16:22.302 } 00:16:22.302 ] 00:16:22.302 }' 00:16:22.302 04:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:22.302 04:11:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.870 04:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:16:22.870 04:11:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:22.870 04:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:22.870 04:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:22.870 04:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:22.870 04:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:22.870 04:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:22.870 04:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:23.129 [2024-07-23 04:11:31.839856] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:23.129 04:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:23.129 "name": "raid_bdev1", 00:16:23.129 "aliases": [ 00:16:23.129 "ecea8324-040b-4ad0-9951-e25cfe9f9967" 00:16:23.129 ], 00:16:23.129 "product_name": "Raid Volume", 00:16:23.129 "block_size": 512, 00:16:23.129 "num_blocks": 63488, 00:16:23.129 "uuid": "ecea8324-040b-4ad0-9951-e25cfe9f9967", 00:16:23.129 "assigned_rate_limits": { 00:16:23.129 "rw_ios_per_sec": 0, 00:16:23.129 "rw_mbytes_per_sec": 0, 00:16:23.129 "r_mbytes_per_sec": 0, 00:16:23.129 "w_mbytes_per_sec": 0 00:16:23.129 }, 00:16:23.129 "claimed": false, 00:16:23.129 "zoned": false, 00:16:23.129 "supported_io_types": { 00:16:23.129 "read": true, 00:16:23.129 "write": true, 00:16:23.129 "unmap": false, 00:16:23.129 "flush": false, 00:16:23.129 "reset": true, 00:16:23.129 "nvme_admin": false, 00:16:23.129 "nvme_io": false, 00:16:23.129 "nvme_io_md": false, 00:16:23.129 "write_zeroes": true, 00:16:23.129 "zcopy": false, 00:16:23.129 "get_zone_info": false, 00:16:23.129 "zone_management": false, 00:16:23.129 "zone_append": false, 
00:16:23.129 "compare": false, 00:16:23.129 "compare_and_write": false, 00:16:23.129 "abort": false, 00:16:23.129 "seek_hole": false, 00:16:23.129 "seek_data": false, 00:16:23.129 "copy": false, 00:16:23.129 "nvme_iov_md": false 00:16:23.129 }, 00:16:23.129 "memory_domains": [ 00:16:23.129 { 00:16:23.129 "dma_device_id": "system", 00:16:23.129 "dma_device_type": 1 00:16:23.129 }, 00:16:23.129 { 00:16:23.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.129 "dma_device_type": 2 00:16:23.129 }, 00:16:23.129 { 00:16:23.129 "dma_device_id": "system", 00:16:23.129 "dma_device_type": 1 00:16:23.129 }, 00:16:23.129 { 00:16:23.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.129 "dma_device_type": 2 00:16:23.129 } 00:16:23.129 ], 00:16:23.129 "driver_specific": { 00:16:23.129 "raid": { 00:16:23.129 "uuid": "ecea8324-040b-4ad0-9951-e25cfe9f9967", 00:16:23.129 "strip_size_kb": 0, 00:16:23.129 "state": "online", 00:16:23.129 "raid_level": "raid1", 00:16:23.129 "superblock": true, 00:16:23.129 "num_base_bdevs": 2, 00:16:23.129 "num_base_bdevs_discovered": 2, 00:16:23.129 "num_base_bdevs_operational": 2, 00:16:23.129 "base_bdevs_list": [ 00:16:23.129 { 00:16:23.129 "name": "pt1", 00:16:23.129 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:23.129 "is_configured": true, 00:16:23.129 "data_offset": 2048, 00:16:23.129 "data_size": 63488 00:16:23.129 }, 00:16:23.129 { 00:16:23.129 "name": "pt2", 00:16:23.129 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:23.129 "is_configured": true, 00:16:23.129 "data_offset": 2048, 00:16:23.129 "data_size": 63488 00:16:23.129 } 00:16:23.129 ] 00:16:23.129 } 00:16:23.129 } 00:16:23.129 }' 00:16:23.129 04:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:23.129 04:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:23.129 pt2' 00:16:23.129 04:11:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:23.129 04:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:23.129 04:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:23.389 04:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:23.389 "name": "pt1", 00:16:23.389 "aliases": [ 00:16:23.389 "00000000-0000-0000-0000-000000000001" 00:16:23.389 ], 00:16:23.389 "product_name": "passthru", 00:16:23.389 "block_size": 512, 00:16:23.389 "num_blocks": 65536, 00:16:23.389 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:23.389 "assigned_rate_limits": { 00:16:23.389 "rw_ios_per_sec": 0, 00:16:23.389 "rw_mbytes_per_sec": 0, 00:16:23.389 "r_mbytes_per_sec": 0, 00:16:23.389 "w_mbytes_per_sec": 0 00:16:23.389 }, 00:16:23.389 "claimed": true, 00:16:23.389 "claim_type": "exclusive_write", 00:16:23.389 "zoned": false, 00:16:23.389 "supported_io_types": { 00:16:23.389 "read": true, 00:16:23.389 "write": true, 00:16:23.389 "unmap": true, 00:16:23.389 "flush": true, 00:16:23.389 "reset": true, 00:16:23.389 "nvme_admin": false, 00:16:23.389 "nvme_io": false, 00:16:23.389 "nvme_io_md": false, 00:16:23.389 "write_zeroes": true, 00:16:23.389 "zcopy": true, 00:16:23.389 "get_zone_info": false, 00:16:23.389 "zone_management": false, 00:16:23.389 "zone_append": false, 00:16:23.389 "compare": false, 00:16:23.389 "compare_and_write": false, 00:16:23.389 "abort": true, 00:16:23.389 "seek_hole": false, 00:16:23.389 "seek_data": false, 00:16:23.389 "copy": true, 00:16:23.389 "nvme_iov_md": false 00:16:23.389 }, 00:16:23.389 "memory_domains": [ 00:16:23.389 { 00:16:23.389 "dma_device_id": "system", 00:16:23.389 "dma_device_type": 1 00:16:23.389 }, 00:16:23.389 { 00:16:23.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.389 
"dma_device_type": 2 00:16:23.389 } 00:16:23.389 ], 00:16:23.389 "driver_specific": { 00:16:23.389 "passthru": { 00:16:23.389 "name": "pt1", 00:16:23.389 "base_bdev_name": "malloc1" 00:16:23.389 } 00:16:23.389 } 00:16:23.389 }' 00:16:23.389 04:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:23.389 04:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:23.648 04:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:23.648 04:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:23.648 04:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:23.648 04:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:23.648 04:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:23.648 04:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:23.648 04:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:23.648 04:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:23.648 04:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:23.908 04:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:23.908 04:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:23.908 04:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:23.908 04:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:23.908 04:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:23.908 "name": "pt2", 00:16:23.908 "aliases": [ 00:16:23.908 
"00000000-0000-0000-0000-000000000002" 00:16:23.908 ], 00:16:23.908 "product_name": "passthru", 00:16:23.908 "block_size": 512, 00:16:23.908 "num_blocks": 65536, 00:16:23.908 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:23.908 "assigned_rate_limits": { 00:16:23.908 "rw_ios_per_sec": 0, 00:16:23.908 "rw_mbytes_per_sec": 0, 00:16:23.908 "r_mbytes_per_sec": 0, 00:16:23.908 "w_mbytes_per_sec": 0 00:16:23.908 }, 00:16:23.908 "claimed": true, 00:16:23.908 "claim_type": "exclusive_write", 00:16:23.908 "zoned": false, 00:16:23.908 "supported_io_types": { 00:16:23.908 "read": true, 00:16:23.908 "write": true, 00:16:23.908 "unmap": true, 00:16:23.908 "flush": true, 00:16:23.908 "reset": true, 00:16:23.908 "nvme_admin": false, 00:16:23.908 "nvme_io": false, 00:16:23.908 "nvme_io_md": false, 00:16:23.908 "write_zeroes": true, 00:16:23.908 "zcopy": true, 00:16:23.908 "get_zone_info": false, 00:16:23.908 "zone_management": false, 00:16:23.908 "zone_append": false, 00:16:23.908 "compare": false, 00:16:23.908 "compare_and_write": false, 00:16:23.908 "abort": true, 00:16:23.908 "seek_hole": false, 00:16:23.908 "seek_data": false, 00:16:23.908 "copy": true, 00:16:23.908 "nvme_iov_md": false 00:16:23.908 }, 00:16:23.908 "memory_domains": [ 00:16:23.908 { 00:16:23.908 "dma_device_id": "system", 00:16:23.908 "dma_device_type": 1 00:16:23.908 }, 00:16:23.908 { 00:16:23.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.908 "dma_device_type": 2 00:16:23.908 } 00:16:23.908 ], 00:16:23.908 "driver_specific": { 00:16:23.908 "passthru": { 00:16:23.908 "name": "pt2", 00:16:23.908 "base_bdev_name": "malloc2" 00:16:23.908 } 00:16:23.908 } 00:16:23.908 }' 00:16:23.908 04:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:24.167 04:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:24.167 04:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:24.167 04:11:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:24.167 04:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:24.167 04:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:24.167 04:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:24.167 04:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:24.167 04:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:24.167 04:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:24.448 04:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:24.448 04:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:24.448 04:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:24.448 04:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:16:24.716 [2024-07-23 04:11:33.227621] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:24.716 04:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=ecea8324-040b-4ad0-9951-e25cfe9f9967 00:16:24.716 04:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z ecea8324-040b-4ad0-9951-e25cfe9f9967 ']' 00:16:24.716 04:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:24.716 [2024-07-23 04:11:33.451933] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:24.716 [2024-07-23 04:11:33.451965] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from 
online to offline 00:16:24.716 [2024-07-23 04:11:33.452052] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:24.716 [2024-07-23 04:11:33.452123] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:24.716 [2024-07-23 04:11:33.452161] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040880 name raid_bdev1, state offline 00:16:24.716 04:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:24.716 04:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:16:24.974 04:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:16:24.974 04:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:16:24.974 04:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:24.974 04:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:25.237 04:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:25.237 04:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:25.496 04:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:25.496 04:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:25.755 04:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:16:25.755 
04:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:16:25.755 04:11:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:16:25.755 04:11:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:16:25.755 04:11:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:16:25.755 04:11:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:25.755 04:11:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:16:25.755 04:11:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:25.755 04:11:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:16:25.755 04:11:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:25.755 04:11:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:16:25.755 04:11:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:16:25.755 04:11:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 
00:16:26.019 [2024-07-23 04:11:34.631061] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:26.019 [2024-07-23 04:11:34.633408] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:26.019 [2024-07-23 04:11:34.633482] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:26.019 [2024-07-23 04:11:34.633538] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:26.019 [2024-07-23 04:11:34.633562] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:26.019 [2024-07-23 04:11:34.633579] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040e80 name raid_bdev1, state configuring 00:16:26.019 request: 00:16:26.019 { 00:16:26.019 "name": "raid_bdev1", 00:16:26.019 "raid_level": "raid1", 00:16:26.019 "base_bdevs": [ 00:16:26.019 "malloc1", 00:16:26.019 "malloc2" 00:16:26.019 ], 00:16:26.019 "superblock": false, 00:16:26.019 "method": "bdev_raid_create", 00:16:26.019 "req_id": 1 00:16:26.019 } 00:16:26.019 Got JSON-RPC error response 00:16:26.019 response: 00:16:26.019 { 00:16:26.019 "code": -17, 00:16:26.019 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:26.019 } 00:16:26.019 04:11:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:16:26.019 04:11:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:26.019 04:11:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:26.019 04:11:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:26.019 04:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:26.019 04:11:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:16:26.279 04:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:16:26.279 04:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:16:26.279 04:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:26.539 [2024-07-23 04:11:35.096235] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:26.539 [2024-07-23 04:11:35.096302] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:26.539 [2024-07-23 04:11:35.096326] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480 00:16:26.539 [2024-07-23 04:11:35.096344] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.539 [2024-07-23 04:11:35.099170] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.539 [2024-07-23 04:11:35.099207] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:26.539 [2024-07-23 04:11:35.099298] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:26.539 [2024-07-23 04:11:35.099384] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:26.539 pt1 00:16:26.539 04:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:26.539 04:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:26.539 04:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:26.539 04:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:26.539 04:11:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:26.539 04:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:26.539 04:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:26.539 04:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:26.539 04:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:26.539 04:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:26.539 04:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.539 04:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:26.798 04:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:26.798 "name": "raid_bdev1", 00:16:26.798 "uuid": "ecea8324-040b-4ad0-9951-e25cfe9f9967", 00:16:26.798 "strip_size_kb": 0, 00:16:26.798 "state": "configuring", 00:16:26.798 "raid_level": "raid1", 00:16:26.798 "superblock": true, 00:16:26.798 "num_base_bdevs": 2, 00:16:26.798 "num_base_bdevs_discovered": 1, 00:16:26.798 "num_base_bdevs_operational": 2, 00:16:26.798 "base_bdevs_list": [ 00:16:26.798 { 00:16:26.798 "name": "pt1", 00:16:26.798 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:26.798 "is_configured": true, 00:16:26.798 "data_offset": 2048, 00:16:26.798 "data_size": 63488 00:16:26.798 }, 00:16:26.798 { 00:16:26.798 "name": null, 00:16:26.798 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:26.798 "is_configured": false, 00:16:26.798 "data_offset": 2048, 00:16:26.798 "data_size": 63488 00:16:26.798 } 00:16:26.798 ] 00:16:26.798 }' 00:16:26.798 04:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:16:26.798 04:11:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.367 04:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:16:27.367 04:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:16:27.367 04:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:27.367 04:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:27.367 [2024-07-23 04:11:36.139207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:27.367 [2024-07-23 04:11:36.139278] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:27.367 [2024-07-23 04:11:36.139303] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041d80 00:16:27.367 [2024-07-23 04:11:36.139322] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:27.367 [2024-07-23 04:11:36.139908] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:27.367 [2024-07-23 04:11:36.139936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:27.367 [2024-07-23 04:11:36.140033] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:27.367 [2024-07-23 04:11:36.140071] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:27.367 [2024-07-23 04:11:36.140261] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041a80 00:16:27.367 [2024-07-23 04:11:36.140285] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:27.367 [2024-07-23 04:11:36.140580] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000010640 00:16:27.367 [2024-07-23 04:11:36.140802] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041a80 00:16:27.367 [2024-07-23 04:11:36.140816] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041a80 00:16:27.367 [2024-07-23 04:11:36.141005] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:27.367 pt2 00:16:27.627 04:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:16:27.627 04:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:27.627 04:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:27.627 04:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:27.627 04:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:27.627 04:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:27.627 04:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:27.627 04:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:27.627 04:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:27.627 04:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:27.627 04:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:27.627 04:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:27.627 04:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:27.627 04:11:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.627 04:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:27.627 "name": "raid_bdev1", 00:16:27.627 "uuid": "ecea8324-040b-4ad0-9951-e25cfe9f9967", 00:16:27.627 "strip_size_kb": 0, 00:16:27.627 "state": "online", 00:16:27.627 "raid_level": "raid1", 00:16:27.627 "superblock": true, 00:16:27.627 "num_base_bdevs": 2, 00:16:27.627 "num_base_bdevs_discovered": 2, 00:16:27.627 "num_base_bdevs_operational": 2, 00:16:27.627 "base_bdevs_list": [ 00:16:27.627 { 00:16:27.627 "name": "pt1", 00:16:27.627 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:27.627 "is_configured": true, 00:16:27.627 "data_offset": 2048, 00:16:27.627 "data_size": 63488 00:16:27.627 }, 00:16:27.627 { 00:16:27.627 "name": "pt2", 00:16:27.627 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:27.627 "is_configured": true, 00:16:27.627 "data_offset": 2048, 00:16:27.627 "data_size": 63488 00:16:27.627 } 00:16:27.627 ] 00:16:27.627 }' 00:16:27.627 04:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:27.627 04:11:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.196 04:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:16:28.196 04:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:28.196 04:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:28.196 04:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:28.196 04:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:28.196 04:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:28.196 04:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:28.196 04:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:28.455 [2024-07-23 04:11:37.170336] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:28.455 04:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:28.455 "name": "raid_bdev1", 00:16:28.455 "aliases": [ 00:16:28.455 "ecea8324-040b-4ad0-9951-e25cfe9f9967" 00:16:28.455 ], 00:16:28.455 "product_name": "Raid Volume", 00:16:28.455 "block_size": 512, 00:16:28.455 "num_blocks": 63488, 00:16:28.455 "uuid": "ecea8324-040b-4ad0-9951-e25cfe9f9967", 00:16:28.455 "assigned_rate_limits": { 00:16:28.455 "rw_ios_per_sec": 0, 00:16:28.455 "rw_mbytes_per_sec": 0, 00:16:28.455 "r_mbytes_per_sec": 0, 00:16:28.455 "w_mbytes_per_sec": 0 00:16:28.455 }, 00:16:28.455 "claimed": false, 00:16:28.455 "zoned": false, 00:16:28.455 "supported_io_types": { 00:16:28.455 "read": true, 00:16:28.455 "write": true, 00:16:28.455 "unmap": false, 00:16:28.455 "flush": false, 00:16:28.455 "reset": true, 00:16:28.456 "nvme_admin": false, 00:16:28.456 "nvme_io": false, 00:16:28.456 "nvme_io_md": false, 00:16:28.456 "write_zeroes": true, 00:16:28.456 "zcopy": false, 00:16:28.456 "get_zone_info": false, 00:16:28.456 "zone_management": false, 00:16:28.456 "zone_append": false, 00:16:28.456 "compare": false, 00:16:28.456 "compare_and_write": false, 00:16:28.456 "abort": false, 00:16:28.456 "seek_hole": false, 00:16:28.456 "seek_data": false, 00:16:28.456 "copy": false, 00:16:28.456 "nvme_iov_md": false 00:16:28.456 }, 00:16:28.456 "memory_domains": [ 00:16:28.456 { 00:16:28.456 "dma_device_id": "system", 00:16:28.456 "dma_device_type": 1 00:16:28.456 }, 00:16:28.456 { 00:16:28.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.456 "dma_device_type": 2 00:16:28.456 }, 00:16:28.456 { 00:16:28.456 "dma_device_id": "system", 
00:16:28.456 "dma_device_type": 1 00:16:28.456 }, 00:16:28.456 { 00:16:28.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.456 "dma_device_type": 2 00:16:28.456 } 00:16:28.456 ], 00:16:28.456 "driver_specific": { 00:16:28.456 "raid": { 00:16:28.456 "uuid": "ecea8324-040b-4ad0-9951-e25cfe9f9967", 00:16:28.456 "strip_size_kb": 0, 00:16:28.456 "state": "online", 00:16:28.456 "raid_level": "raid1", 00:16:28.456 "superblock": true, 00:16:28.456 "num_base_bdevs": 2, 00:16:28.456 "num_base_bdevs_discovered": 2, 00:16:28.456 "num_base_bdevs_operational": 2, 00:16:28.456 "base_bdevs_list": [ 00:16:28.456 { 00:16:28.456 "name": "pt1", 00:16:28.456 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:28.456 "is_configured": true, 00:16:28.456 "data_offset": 2048, 00:16:28.456 "data_size": 63488 00:16:28.456 }, 00:16:28.456 { 00:16:28.456 "name": "pt2", 00:16:28.456 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:28.456 "is_configured": true, 00:16:28.456 "data_offset": 2048, 00:16:28.456 "data_size": 63488 00:16:28.456 } 00:16:28.456 ] 00:16:28.456 } 00:16:28.456 } 00:16:28.456 }' 00:16:28.456 04:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:28.456 04:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:28.456 pt2' 00:16:28.456 04:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:28.715 04:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:28.715 04:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:28.715 04:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:28.715 "name": "pt1", 00:16:28.715 "aliases": [ 00:16:28.715 "00000000-0000-0000-0000-000000000001" 
00:16:28.715 ], 00:16:28.715 "product_name": "passthru", 00:16:28.715 "block_size": 512, 00:16:28.715 "num_blocks": 65536, 00:16:28.715 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:28.715 "assigned_rate_limits": { 00:16:28.715 "rw_ios_per_sec": 0, 00:16:28.715 "rw_mbytes_per_sec": 0, 00:16:28.715 "r_mbytes_per_sec": 0, 00:16:28.716 "w_mbytes_per_sec": 0 00:16:28.716 }, 00:16:28.716 "claimed": true, 00:16:28.716 "claim_type": "exclusive_write", 00:16:28.716 "zoned": false, 00:16:28.716 "supported_io_types": { 00:16:28.716 "read": true, 00:16:28.716 "write": true, 00:16:28.716 "unmap": true, 00:16:28.716 "flush": true, 00:16:28.716 "reset": true, 00:16:28.716 "nvme_admin": false, 00:16:28.716 "nvme_io": false, 00:16:28.716 "nvme_io_md": false, 00:16:28.716 "write_zeroes": true, 00:16:28.716 "zcopy": true, 00:16:28.716 "get_zone_info": false, 00:16:28.716 "zone_management": false, 00:16:28.716 "zone_append": false, 00:16:28.716 "compare": false, 00:16:28.716 "compare_and_write": false, 00:16:28.716 "abort": true, 00:16:28.716 "seek_hole": false, 00:16:28.716 "seek_data": false, 00:16:28.716 "copy": true, 00:16:28.716 "nvme_iov_md": false 00:16:28.716 }, 00:16:28.716 "memory_domains": [ 00:16:28.716 { 00:16:28.716 "dma_device_id": "system", 00:16:28.716 "dma_device_type": 1 00:16:28.716 }, 00:16:28.716 { 00:16:28.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.716 "dma_device_type": 2 00:16:28.716 } 00:16:28.716 ], 00:16:28.716 "driver_specific": { 00:16:28.716 "passthru": { 00:16:28.716 "name": "pt1", 00:16:28.716 "base_bdev_name": "malloc1" 00:16:28.716 } 00:16:28.716 } 00:16:28.716 }' 00:16:28.716 04:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:28.975 04:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:28.975 04:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:28.975 04:11:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:28.975 04:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:28.975 04:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:28.975 04:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:28.975 04:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:28.975 04:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:28.975 04:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:29.234 04:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:29.234 04:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:29.234 04:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:29.234 04:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:29.234 04:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:29.495 04:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:29.495 "name": "pt2", 00:16:29.495 "aliases": [ 00:16:29.495 "00000000-0000-0000-0000-000000000002" 00:16:29.495 ], 00:16:29.495 "product_name": "passthru", 00:16:29.495 "block_size": 512, 00:16:29.495 "num_blocks": 65536, 00:16:29.495 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:29.495 "assigned_rate_limits": { 00:16:29.495 "rw_ios_per_sec": 0, 00:16:29.495 "rw_mbytes_per_sec": 0, 00:16:29.495 "r_mbytes_per_sec": 0, 00:16:29.495 "w_mbytes_per_sec": 0 00:16:29.495 }, 00:16:29.495 "claimed": true, 00:16:29.495 "claim_type": "exclusive_write", 00:16:29.495 "zoned": false, 00:16:29.495 "supported_io_types": { 00:16:29.495 "read": true, 
00:16:29.495 "write": true, 00:16:29.495 "unmap": true, 00:16:29.495 "flush": true, 00:16:29.495 "reset": true, 00:16:29.495 "nvme_admin": false, 00:16:29.495 "nvme_io": false, 00:16:29.495 "nvme_io_md": false, 00:16:29.495 "write_zeroes": true, 00:16:29.495 "zcopy": true, 00:16:29.495 "get_zone_info": false, 00:16:29.495 "zone_management": false, 00:16:29.495 "zone_append": false, 00:16:29.495 "compare": false, 00:16:29.495 "compare_and_write": false, 00:16:29.495 "abort": true, 00:16:29.495 "seek_hole": false, 00:16:29.495 "seek_data": false, 00:16:29.495 "copy": true, 00:16:29.495 "nvme_iov_md": false 00:16:29.495 }, 00:16:29.495 "memory_domains": [ 00:16:29.495 { 00:16:29.495 "dma_device_id": "system", 00:16:29.495 "dma_device_type": 1 00:16:29.495 }, 00:16:29.495 { 00:16:29.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.495 "dma_device_type": 2 00:16:29.495 } 00:16:29.495 ], 00:16:29.495 "driver_specific": { 00:16:29.495 "passthru": { 00:16:29.495 "name": "pt2", 00:16:29.495 "base_bdev_name": "malloc2" 00:16:29.495 } 00:16:29.495 } 00:16:29.495 }' 00:16:29.495 04:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:29.495 04:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:29.495 04:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:29.495 04:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:29.495 04:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:29.495 04:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:29.495 04:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:29.495 04:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:29.754 04:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:29.754 04:11:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:29.754 04:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:29.754 04:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:29.754 04:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:29.754 04:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:16:30.014 [2024-07-23 04:11:38.586114] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:30.014 04:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' ecea8324-040b-4ad0-9951-e25cfe9f9967 '!=' ecea8324-040b-4ad0-9951-e25cfe9f9967 ']' 00:16:30.014 04:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:16:30.014 04:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:30.014 04:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:30.014 04:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:30.273 [2024-07-23 04:11:38.814428] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:30.273 04:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:30.273 04:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:30.273 04:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:30.273 04:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:30.273 04:11:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:30.273 04:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:30.273 04:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:30.273 04:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:30.273 04:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:30.273 04:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:30.273 04:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:30.273 04:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.532 04:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:30.532 "name": "raid_bdev1", 00:16:30.532 "uuid": "ecea8324-040b-4ad0-9951-e25cfe9f9967", 00:16:30.532 "strip_size_kb": 0, 00:16:30.532 "state": "online", 00:16:30.532 "raid_level": "raid1", 00:16:30.532 "superblock": true, 00:16:30.532 "num_base_bdevs": 2, 00:16:30.532 "num_base_bdevs_discovered": 1, 00:16:30.532 "num_base_bdevs_operational": 1, 00:16:30.532 "base_bdevs_list": [ 00:16:30.532 { 00:16:30.532 "name": null, 00:16:30.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.532 "is_configured": false, 00:16:30.532 "data_offset": 2048, 00:16:30.532 "data_size": 63488 00:16:30.532 }, 00:16:30.532 { 00:16:30.532 "name": "pt2", 00:16:30.532 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:30.532 "is_configured": true, 00:16:30.532 "data_offset": 2048, 00:16:30.532 "data_size": 63488 00:16:30.532 } 00:16:30.532 ] 00:16:30.532 }' 00:16:30.532 04:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:30.532 04:11:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.101 04:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:31.101 [2024-07-23 04:11:39.845229] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:31.101 [2024-07-23 04:11:39.845260] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:31.101 [2024-07-23 04:11:39.845342] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:31.101 [2024-07-23 04:11:39.845399] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:31.101 [2024-07-23 04:11:39.845418] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041a80 name raid_bdev1, state offline 00:16:31.101 04:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:31.101 04:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:16:31.359 04:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:16:31.359 04:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:16:31.359 04:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:16:31.359 04:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:16:31.359 04:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:31.623 04:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:16:31.623 04:11:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:16:31.623 04:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:16:31.623 04:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:16:31.623 04:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=1 00:16:31.623 04:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:31.882 [2024-07-23 04:11:40.527021] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:31.882 [2024-07-23 04:11:40.527096] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:31.882 [2024-07-23 04:11:40.527120] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042080 00:16:31.882 [2024-07-23 04:11:40.527137] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:31.882 [2024-07-23 04:11:40.529891] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:31.882 [2024-07-23 04:11:40.529928] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:31.882 [2024-07-23 04:11:40.530019] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:31.882 [2024-07-23 04:11:40.530085] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:31.882 [2024-07-23 04:11:40.530264] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042680 00:16:31.882 [2024-07-23 04:11:40.530282] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:31.882 [2024-07-23 04:11:40.530578] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:16:31.882 [2024-07-23 04:11:40.530796] 
bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042680 00:16:31.882 [2024-07-23 04:11:40.530811] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000042680 00:16:31.882 [2024-07-23 04:11:40.531014] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:31.882 pt2 00:16:31.882 04:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:31.882 04:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:31.882 04:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:31.882 04:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:31.882 04:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:31.882 04:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:31.882 04:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:31.882 04:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:31.882 04:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:31.882 04:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:31.882 04:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:31.882 04:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.141 04:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:32.141 "name": "raid_bdev1", 00:16:32.141 "uuid": "ecea8324-040b-4ad0-9951-e25cfe9f9967", 
00:16:32.141 "strip_size_kb": 0, 00:16:32.141 "state": "online", 00:16:32.141 "raid_level": "raid1", 00:16:32.141 "superblock": true, 00:16:32.141 "num_base_bdevs": 2, 00:16:32.141 "num_base_bdevs_discovered": 1, 00:16:32.141 "num_base_bdevs_operational": 1, 00:16:32.141 "base_bdevs_list": [ 00:16:32.141 { 00:16:32.141 "name": null, 00:16:32.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.141 "is_configured": false, 00:16:32.141 "data_offset": 2048, 00:16:32.141 "data_size": 63488 00:16:32.141 }, 00:16:32.141 { 00:16:32.141 "name": "pt2", 00:16:32.141 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:32.141 "is_configured": true, 00:16:32.141 "data_offset": 2048, 00:16:32.141 "data_size": 63488 00:16:32.141 } 00:16:32.141 ] 00:16:32.141 }' 00:16:32.141 04:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:32.141 04:11:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.710 04:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:32.968 [2024-07-23 04:11:41.557843] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:32.968 [2024-07-23 04:11:41.557878] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:32.968 [2024-07-23 04:11:41.557956] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:32.968 [2024-07-23 04:11:41.558028] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:32.968 [2024-07-23 04:11:41.558044] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042680 name raid_bdev1, state offline 00:16:32.969 04:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:32.969 04:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:16:33.227 04:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:16:33.227 04:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:16:33.227 04:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:16:33.227 04:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:33.227 [2024-07-23 04:11:42.011054] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:33.227 [2024-07-23 04:11:42.011115] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:33.227 [2024-07-23 04:11:42.011147] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042980 00:16:33.227 [2024-07-23 04:11:42.011163] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:33.486 [2024-07-23 04:11:42.013949] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:33.486 [2024-07-23 04:11:42.013984] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:33.486 [2024-07-23 04:11:42.014081] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:33.486 [2024-07-23 04:11:42.014186] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:33.486 [2024-07-23 04:11:42.014374] bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:33.486 [2024-07-23 04:11:42.014392] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:33.486 [2024-07-23 04:11:42.014419] 
bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042f80 name raid_bdev1, state configuring 00:16:33.486 [2024-07-23 04:11:42.014508] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:33.486 [2024-07-23 04:11:42.014599] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000043280 00:16:33.486 [2024-07-23 04:11:42.014613] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:33.486 [2024-07-23 04:11:42.014919] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:16:33.486 [2024-07-23 04:11:42.015160] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000043280 00:16:33.486 [2024-07-23 04:11:42.015177] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000043280 00:16:33.487 [2024-07-23 04:11:42.015400] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:33.487 pt1 00:16:33.487 04:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:16:33.487 04:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:33.487 04:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:33.487 04:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:33.487 04:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:33.487 04:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:33.487 04:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:33.487 04:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:33.487 04:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # 
local num_base_bdevs 00:16:33.487 04:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:33.487 04:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:33.487 04:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.487 04:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.487 04:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:33.487 "name": "raid_bdev1", 00:16:33.487 "uuid": "ecea8324-040b-4ad0-9951-e25cfe9f9967", 00:16:33.487 "strip_size_kb": 0, 00:16:33.487 "state": "online", 00:16:33.487 "raid_level": "raid1", 00:16:33.487 "superblock": true, 00:16:33.487 "num_base_bdevs": 2, 00:16:33.487 "num_base_bdevs_discovered": 1, 00:16:33.487 "num_base_bdevs_operational": 1, 00:16:33.487 "base_bdevs_list": [ 00:16:33.487 { 00:16:33.487 "name": null, 00:16:33.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.487 "is_configured": false, 00:16:33.487 "data_offset": 2048, 00:16:33.487 "data_size": 63488 00:16:33.487 }, 00:16:33.487 { 00:16:33.487 "name": "pt2", 00:16:33.487 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:33.487 "is_configured": true, 00:16:33.487 "data_offset": 2048, 00:16:33.487 "data_size": 63488 00:16:33.487 } 00:16:33.487 ] 00:16:33.487 }' 00:16:33.487 04:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:33.487 04:11:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.424 04:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:16:34.424 04:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq 
-r '.[].base_bdevs_list[0].is_configured' 00:16:34.424 04:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:16:34.424 04:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:34.424 04:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:16:34.682 [2024-07-23 04:11:43.287094] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:34.682 04:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' ecea8324-040b-4ad0-9951-e25cfe9f9967 '!=' ecea8324-040b-4ad0-9951-e25cfe9f9967 ']' 00:16:34.682 04:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2637923 00:16:34.682 04:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 2637923 ']' 00:16:34.682 04:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 2637923 00:16:34.682 04:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:16:34.682 04:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:34.682 04:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2637923 00:16:34.682 04:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:34.682 04:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:34.682 04:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2637923' 00:16:34.682 killing process with pid 2637923 00:16:34.682 04:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 2637923 00:16:34.682 [2024-07-23 04:11:43.364433] bdev_raid.c:1373:raid_bdev_fini_start: 
*DEBUG*: raid_bdev_fini_start 00:16:34.682 [2024-07-23 04:11:43.364531] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:34.682 [2024-07-23 04:11:43.364587] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:34.682 04:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 2637923 00:16:34.682 [2024-07-23 04:11:43.364606] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000043280 name raid_bdev1, state offline 00:16:34.941 [2024-07-23 04:11:43.560740] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:36.847 04:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:16:36.847 00:16:36.847 real 0m16.758s 00:16:36.847 user 0m28.706s 00:16:36.847 sys 0m2.912s 00:16:36.847 04:11:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:36.847 04:11:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.847 ************************************ 00:16:36.847 END TEST raid_superblock_test 00:16:36.847 ************************************ 00:16:36.847 04:11:45 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:36.847 04:11:45 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:16:36.847 04:11:45 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:36.847 04:11:45 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:36.847 04:11:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:36.847 ************************************ 00:16:36.847 START TEST raid_read_error_test 00:16:36.847 ************************************ 00:16:36.847 04:11:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 2 read 00:16:36.847 04:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local 
raid_level=raid1 00:16:36.847 04:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:16:36.847 04:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:16:36.847 04:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:16:36.847 04:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:36.847 04:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:16:36.847 04:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:36.847 04:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:36.847 04:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:16:36.847 04:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:36.847 04:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:36.847 04:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:36.847 04:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:16:36.847 04:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:16:36.847 04:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:16:36.847 04:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:16:36.847 04:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:16:36.847 04:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:16:36.847 04:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:16:36.847 04:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:16:36.847 04:11:45 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:16:36.847 04:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.9N8cIlK5zP 00:16:36.847 04:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:36.847 04:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2641087 00:16:36.848 04:11:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2641087 /var/tmp/spdk-raid.sock 00:16:36.848 04:11:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 2641087 ']' 00:16:36.848 04:11:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:36.848 04:11:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:36.848 04:11:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:36.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:36.848 04:11:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:36.848 04:11:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.848 [2024-07-23 04:11:45.393701] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:16:36.848 [2024-07-23 04:11:45.393822] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2641087 ] 00:16:36.848 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:36.848 EAL: Requested device 0000:3d:01.0 cannot be used 00:16:36.848 [identical "Reached maximum number of QAT devices" / "cannot be used" pairs repeated for devices 0000:3d:01.1 through 0000:3f:02.7] 00:16:36.848 [2024-07-23 04:11:45.621211] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:37.417 [2024-07-23 04:11:45.906843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.676 [2024-07-23 04:11:46.249379] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:37.676 [2024-07-23 04:11:46.249415] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:37.676 04:11:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:37.676 04:11:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:16:37.676 04:11:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:37.676 04:11:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:37.935 BaseBdev1_malloc 00:16:37.936 04:11:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:38.195 true 00:16:38.195 04:11:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:38.454 [2024-07-23 04:11:47.139656] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:38.454 [2024-07-23 04:11:47.139717] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.454 [2024-07-23 04:11:47.139745] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:16:38.454 [2024-07-23 04:11:47.139767] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.454 [2024-07-23 04:11:47.142556] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.454 [2024-07-23 04:11:47.142595] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:38.454 BaseBdev1 00:16:38.454 04:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:38.454 04:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:38.713 BaseBdev2_malloc 00:16:38.713 04:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:38.972 true 00:16:38.972 04:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:39.231 [2024-07-23 04:11:47.859336] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:16:39.231 [2024-07-23 04:11:47.859392] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:39.231 [2024-07-23 04:11:47.859419] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:16:39.231 [2024-07-23 04:11:47.859441] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:39.231 [2024-07-23 04:11:47.862190] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:39.231 [2024-07-23 04:11:47.862227] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:39.231 BaseBdev2 00:16:39.231 04:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:16:39.490 [2024-07-23 04:11:48.083990] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:39.490 [2024-07-23 04:11:48.086307] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:39.490 [2024-07-23 04:11:48.086560] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000040e80 00:16:39.490 [2024-07-23 04:11:48.086585] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:39.490 [2024-07-23 04:11:48.086897] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:16:39.490 [2024-07-23 04:11:48.087147] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000040e80 00:16:39.490 [2024-07-23 04:11:48.087163] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000040e80 00:16:39.490 [2024-07-23 04:11:48.087355] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:39.490 04:11:48 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:39.490 04:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:39.490 04:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:39.490 04:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:39.490 04:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:39.490 04:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:39.490 04:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:39.490 04:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:39.490 04:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:39.490 04:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:39.491 04:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:39.491 04:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.750 04:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:39.750 "name": "raid_bdev1", 00:16:39.750 "uuid": "219c7a2d-9e2c-4368-967a-efaba7934ae9", 00:16:39.750 "strip_size_kb": 0, 00:16:39.750 "state": "online", 00:16:39.750 "raid_level": "raid1", 00:16:39.750 "superblock": true, 00:16:39.750 "num_base_bdevs": 2, 00:16:39.750 "num_base_bdevs_discovered": 2, 00:16:39.750 "num_base_bdevs_operational": 2, 00:16:39.750 "base_bdevs_list": [ 00:16:39.750 { 00:16:39.750 "name": "BaseBdev1", 00:16:39.750 "uuid": "391d0b12-115c-5aae-9a8e-a41d4151f977", 00:16:39.750 "is_configured": 
true, 00:16:39.750 "data_offset": 2048, 00:16:39.750 "data_size": 63488 00:16:39.750 }, 00:16:39.750 { 00:16:39.750 "name": "BaseBdev2", 00:16:39.750 "uuid": "7f551203-3071-5073-9952-64bab4442025", 00:16:39.750 "is_configured": true, 00:16:39.750 "data_offset": 2048, 00:16:39.750 "data_size": 63488 00:16:39.750 } 00:16:39.750 ] 00:16:39.750 }' 00:16:39.750 04:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:39.750 04:11:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.318 04:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:16:40.318 04:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:40.318 [2024-07-23 04:11:49.008471] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:16:41.256 04:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:16:41.515 04:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:16:41.515 04:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:16:41.515 04:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:16:41.515 04:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:16:41.515 04:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:41.515 04:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:41.515 04:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 
00:16:41.515 04:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:41.515 04:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:41.515 04:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:41.515 04:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:41.516 04:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:41.516 04:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:41.516 04:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:41.516 04:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.516 04:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.775 04:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:41.775 "name": "raid_bdev1", 00:16:41.775 "uuid": "219c7a2d-9e2c-4368-967a-efaba7934ae9", 00:16:41.775 "strip_size_kb": 0, 00:16:41.775 "state": "online", 00:16:41.775 "raid_level": "raid1", 00:16:41.775 "superblock": true, 00:16:41.775 "num_base_bdevs": 2, 00:16:41.775 "num_base_bdevs_discovered": 2, 00:16:41.775 "num_base_bdevs_operational": 2, 00:16:41.775 "base_bdevs_list": [ 00:16:41.775 { 00:16:41.775 "name": "BaseBdev1", 00:16:41.775 "uuid": "391d0b12-115c-5aae-9a8e-a41d4151f977", 00:16:41.775 "is_configured": true, 00:16:41.775 "data_offset": 2048, 00:16:41.775 "data_size": 63488 00:16:41.775 }, 00:16:41.775 { 00:16:41.775 "name": "BaseBdev2", 00:16:41.775 "uuid": "7f551203-3071-5073-9952-64bab4442025", 00:16:41.775 "is_configured": true, 00:16:41.775 "data_offset": 2048, 00:16:41.775 "data_size": 63488 
00:16:41.775 } 00:16:41.775 ] 00:16:41.775 }' 00:16:41.775 04:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:41.775 04:11:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.344 04:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:42.603 [2024-07-23 04:11:51.161351] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:42.604 [2024-07-23 04:11:51.161393] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:42.604 [2024-07-23 04:11:51.164618] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:42.604 [2024-07-23 04:11:51.164672] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:42.604 [2024-07-23 04:11:51.164769] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:42.604 [2024-07-23 04:11:51.164797] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040e80 name raid_bdev1, state offline 00:16:42.604 0 00:16:42.604 04:11:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2641087 00:16:42.604 04:11:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 2641087 ']' 00:16:42.604 04:11:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 2641087 00:16:42.604 04:11:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:16:42.604 04:11:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:42.604 04:11:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2641087 00:16:42.604 04:11:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:16:42.604 04:11:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:42.604 04:11:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2641087' 00:16:42.604 killing process with pid 2641087 00:16:42.604 04:11:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 2641087 00:16:42.604 [2024-07-23 04:11:51.240814] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:42.604 04:11:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 2641087 00:16:42.604 [2024-07-23 04:11:51.343842] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:44.552 04:11:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.9N8cIlK5zP 00:16:44.552 04:11:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:16:44.552 04:11:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:16:44.552 04:11:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:16:44.552 04:11:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:16:44.552 04:11:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:44.552 04:11:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:44.552 04:11:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:16:44.552 00:16:44.552 real 0m7.877s 00:16:44.552 user 0m10.988s 00:16:44.552 sys 0m1.176s 00:16:44.552 04:11:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:44.552 04:11:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.552 ************************************ 00:16:44.552 END TEST raid_read_error_test 00:16:44.552 ************************************ 
00:16:44.552 04:11:53 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:44.552 04:11:53 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:16:44.552 04:11:53 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:44.552 04:11:53 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:44.552 04:11:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:44.552 ************************************ 00:16:44.552 START TEST raid_write_error_test 00:16:44.552 ************************************ 00:16:44.552 04:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 2 write 00:16:44.552 04:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:16:44.552 04:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:16:44.552 04:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:16:44.552 04:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:16:44.552 04:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:44.552 04:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:16:44.552 04:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:44.552 04:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:44.552 04:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:16:44.552 04:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:44.552 04:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:44.552 04:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:44.552 
04:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:16:44.552 04:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:16:44.552 04:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:16:44.552 04:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:16:44.552 04:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:16:44.552 04:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:16:44.552 04:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:16:44.552 04:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:16:44.552 04:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:16:44.552 04:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.y2fqDYTHhS 00:16:44.552 04:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:44.552 04:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2642508 00:16:44.552 04:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2642508 /var/tmp/spdk-raid.sock 00:16:44.552 04:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 2642508 ']' 00:16:44.552 04:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:44.552 04:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:44.552 04:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:44.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:44.552 04:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:44.552 04:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.811 [2024-07-23 04:11:53.357532] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:16:44.811 [2024-07-23 04:11:53.357655] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2642508 ] 00:16:44.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:44.811 EAL: Requested device 0000:3d:01.0 cannot be used 00:16:44.811 [identical "Reached maximum number of QAT devices" / "cannot be used" pairs repeated for devices 0000:3d:01.1 through 0000:3f:02.7] 00:16:44.812 [2024-07-23 04:11:53.581723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.379 [2024-07-23 04:11:53.866122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.637 [2024-07-23 04:11:54.214957] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:45.637 [2024-07-23 04:11:54.214994] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:45.637 04:11:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:45.637 04:11:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:16:45.637 04:11:54 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:45.638 04:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:45.897 BaseBdev1_malloc 00:16:45.897 04:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:46.156 true 00:16:46.156 04:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:46.415 [2024-07-23 04:11:55.099938] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:46.415 [2024-07-23 04:11:55.099999] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:46.415 [2024-07-23 04:11:55.100027] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:16:46.415 [2024-07-23 04:11:55.100048] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:46.415 [2024-07-23 04:11:55.102850] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:46.415 [2024-07-23 04:11:55.102890] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:46.415 BaseBdev1 00:16:46.415 04:11:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:46.415 04:11:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:46.675 BaseBdev2_malloc 00:16:46.675 04:11:55 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:46.934 true 00:16:46.934 04:11:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:47.193 [2024-07-23 04:11:55.806040] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:47.193 [2024-07-23 04:11:55.806097] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:47.193 [2024-07-23 04:11:55.806124] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:16:47.193 [2024-07-23 04:11:55.806151] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:47.193 [2024-07-23 04:11:55.808871] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:47.193 [2024-07-23 04:11:55.808908] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:47.193 BaseBdev2 00:16:47.193 04:11:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:16:47.453 [2024-07-23 04:11:56.018680] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:47.453 [2024-07-23 04:11:56.021040] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:47.453 [2024-07-23 04:11:56.021302] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000040e80 00:16:47.453 [2024-07-23 04:11:56.021328] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:47.453 [2024-07-23 04:11:56.021658] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000010570 00:16:47.453 [2024-07-23 04:11:56.021913] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000040e80 00:16:47.453 [2024-07-23 04:11:56.021928] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000040e80 00:16:47.453 [2024-07-23 04:11:56.022133] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:47.453 04:11:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:47.453 04:11:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:47.453 04:11:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:47.453 04:11:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:47.453 04:11:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:47.453 04:11:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:47.453 04:11:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:47.453 04:11:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:47.453 04:11:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:47.453 04:11:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:47.453 04:11:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.453 04:11:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.712 04:11:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:47.712 "name": 
"raid_bdev1", 00:16:47.712 "uuid": "f71d958b-8823-47e6-aa89-89739066c9d5", 00:16:47.712 "strip_size_kb": 0, 00:16:47.712 "state": "online", 00:16:47.712 "raid_level": "raid1", 00:16:47.712 "superblock": true, 00:16:47.712 "num_base_bdevs": 2, 00:16:47.712 "num_base_bdevs_discovered": 2, 00:16:47.712 "num_base_bdevs_operational": 2, 00:16:47.712 "base_bdevs_list": [ 00:16:47.712 { 00:16:47.712 "name": "BaseBdev1", 00:16:47.712 "uuid": "05afdf23-068d-5864-8190-38b2b145f0ed", 00:16:47.712 "is_configured": true, 00:16:47.712 "data_offset": 2048, 00:16:47.712 "data_size": 63488 00:16:47.712 }, 00:16:47.712 { 00:16:47.712 "name": "BaseBdev2", 00:16:47.712 "uuid": "e3ff049d-9850-5190-a7e9-1c57c8be332e", 00:16:47.712 "is_configured": true, 00:16:47.712 "data_offset": 2048, 00:16:47.712 "data_size": 63488 00:16:47.712 } 00:16:47.712 ] 00:16:47.712 }' 00:16:47.712 04:11:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:47.712 04:11:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.281 04:11:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:16:48.281 04:11:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:48.281 [2024-07-23 04:11:56.935086] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:16:49.218 04:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:16:49.478 [2024-07-23 04:11:58.049422] bdev_raid.c:2247:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:16:49.478 [2024-07-23 04:11:58.049483] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:49.478 
[2024-07-23 04:11:58.049691] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000010710 00:16:49.478 04:11:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:16:49.478 04:11:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:16:49.478 04:11:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:16:49.478 04:11:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=1 00:16:49.478 04:11:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:49.478 04:11:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:49.478 04:11:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:49.478 04:11:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:49.478 04:11:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:49.478 04:11:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:49.478 04:11:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:49.478 04:11:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:49.478 04:11:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:49.478 04:11:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:49.478 04:11:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:49.478 04:11:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:16:49.738 04:11:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:49.738 "name": "raid_bdev1", 00:16:49.738 "uuid": "f71d958b-8823-47e6-aa89-89739066c9d5", 00:16:49.738 "strip_size_kb": 0, 00:16:49.738 "state": "online", 00:16:49.738 "raid_level": "raid1", 00:16:49.738 "superblock": true, 00:16:49.738 "num_base_bdevs": 2, 00:16:49.738 "num_base_bdevs_discovered": 1, 00:16:49.738 "num_base_bdevs_operational": 1, 00:16:49.738 "base_bdevs_list": [ 00:16:49.738 { 00:16:49.738 "name": null, 00:16:49.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.738 "is_configured": false, 00:16:49.738 "data_offset": 2048, 00:16:49.738 "data_size": 63488 00:16:49.738 }, 00:16:49.738 { 00:16:49.738 "name": "BaseBdev2", 00:16:49.738 "uuid": "e3ff049d-9850-5190-a7e9-1c57c8be332e", 00:16:49.738 "is_configured": true, 00:16:49.738 "data_offset": 2048, 00:16:49.738 "data_size": 63488 00:16:49.738 } 00:16:49.738 ] 00:16:49.738 }' 00:16:49.738 04:11:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:49.738 04:11:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.306 04:11:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:50.306 [2024-07-23 04:11:59.069070] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:50.306 [2024-07-23 04:11:59.069113] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:50.306 [2024-07-23 04:11:59.072353] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:50.306 [2024-07-23 04:11:59.072401] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:50.306 [2024-07-23 04:11:59.072471] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:16:50.306 [2024-07-23 04:11:59.072487] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040e80 name raid_bdev1, state offline 00:16:50.306 0 00:16:50.306 04:11:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2642508 00:16:50.306 04:11:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 2642508 ']' 00:16:50.565 04:11:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 2642508 00:16:50.565 04:11:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:16:50.565 04:11:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:50.565 04:11:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2642508 00:16:50.565 04:11:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:50.565 04:11:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:50.565 04:11:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2642508' 00:16:50.565 killing process with pid 2642508 00:16:50.565 04:11:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 2642508 00:16:50.565 [2024-07-23 04:11:59.145855] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:50.565 04:11:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 2642508 00:16:50.565 [2024-07-23 04:11:59.244638] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:52.470 04:12:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:16:52.470 04:12:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.y2fqDYTHhS 00:16:52.470 04:12:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 
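The `grep raid_bdev1 | grep -v Job | awk '{print $6}'` pipeline at `bdev_raid.sh@843` above extracts the failures-per-second column from the bdevperf output file (`/raidtest/tmp.y2fqDYTHhS`) to confirm no write failures leaked through the raid1 redundancy. A self-contained reconstruction of that extraction, using a fabricated two-line stand-in for the bdevperf output (the column layout of the sample line is an assumption for illustration; only the pipeline itself is taken from the log):

```shell
# Simulated bdevperf output: a "Job:" header plus a per-bdev stats line.
# Field 6 of the stats line stands in for the failures/sec column.
perf_log=$(mktemp)
printf '%s\n' \
  'Job: raid_bdev1 (Core Mask 0x1)' \
  'raid_bdev1 250880 980.00 0.00 0.00 0.00' > "$perf_log"

# Same pipeline as bdev_raid.sh@843: keep the bdev's stats line,
# drop the Job header, print the 6th whitespace-separated field.
fail_per_s=$(grep raid_bdev1 "$perf_log" | grep -v Job | awk '{print $6}')
[ "$fail_per_s" = "0.00" ] && echo "no write failures propagated"
rm -f "$perf_log"
```

The `grep -v Job` step matters because the job header also contains the bdev name, as in the sample above.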
00:16:52.470 04:12:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:16:52.470 04:12:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:16:52.470 04:12:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:52.470 04:12:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:52.470 04:12:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:16:52.470 00:16:52.470 real 0m7.738s 00:16:52.470 user 0m10.743s 00:16:52.470 sys 0m1.175s 00:16:52.470 04:12:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:52.470 04:12:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.470 ************************************ 00:16:52.470 END TEST raid_write_error_test 00:16:52.470 ************************************ 00:16:52.470 04:12:01 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:52.470 04:12:01 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:16:52.470 04:12:01 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:16:52.470 04:12:01 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:16:52.470 04:12:01 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:52.470 04:12:01 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:52.470 04:12:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:52.470 ************************************ 00:16:52.470 START TEST raid_state_function_test 00:16:52.470 ************************************ 00:16:52.470 04:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 3 false 00:16:52.470 04:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:16:52.471 04:12:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:16:52.471 04:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:16:52.471 04:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:52.471 04:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:52.471 04:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:52.471 04:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:16:52.471 04:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:52.471 04:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:52.471 04:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:16:52.471 04:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:52.471 04:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:52.471 04:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:16:52.471 04:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:52.471 04:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:52.471 04:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:52.471 04:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:52.471 04:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:52.471 04:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:52.471 04:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- 
# local strip_size_create_arg 00:16:52.471 04:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:52.471 04:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:16:52.471 04:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:16:52.471 04:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:16:52.471 04:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:16:52.471 04:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:16:52.471 04:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2644000 00:16:52.471 04:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2644000' 00:16:52.471 Process raid pid: 2644000 00:16:52.471 04:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:52.471 04:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2644000 /var/tmp/spdk-raid.sock 00:16:52.471 04:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 2644000 ']' 00:16:52.471 04:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:52.471 04:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:52.471 04:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:52.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
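`waitforlisten 2644000 /var/tmp/spdk-raid.sock` above blocks until the freshly launched `bdev_svc` process is up and serving RPCs on its UNIX domain socket, retrying up to `max_retries=100` times. A simplified, self-contained sketch of that polling pattern — this version only watches for the socket path to appear (the real helper additionally verifies the process is alive and the RPC endpoint answers), and the path and timings here are illustrative:

```shell
# Stand-in for /var/tmp/spdk-raid.sock; a background job simulates the
# app creating its socket shortly after startup.
sock=/tmp/fake-spdk.sock
rm -f "$sock"
( sleep 0.2; : > "$sock" ) &

# Poll with a retry cap, mirroring waitforlisten's max_retries loop.
max_retries=100
i=0
while [ ! -e "$sock" ] && [ "$i" -lt "$max_retries" ]; do
  sleep 0.1
  i=$((i + 1))
done
[ -e "$sock" ] && echo "socket ready after $i retries"
wait
rm -f "$sock"
```

If the cap is hit without the socket appearing, the real helper kills the process and fails the test.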
00:16:52.471 04:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:52.471 04:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.471 [2024-07-23 04:12:01.185271] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:16:52.471 [2024-07-23 04:12:01.185384] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:52.730 EAL: Requested device 0000:3d:01.0 cannot be used 00:16:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:52.730 EAL: Requested device 0000:3d:01.1 cannot be used 00:16:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:52.730 EAL: Requested device 0000:3d:01.2 cannot be used 00:16:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:52.730 EAL: Requested device 0000:3d:01.3 cannot be used 00:16:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:52.730 EAL: Requested device 0000:3d:01.4 cannot be used 00:16:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:52.730 EAL: Requested device 0000:3d:01.5 cannot be used 00:16:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:52.730 EAL: Requested device 0000:3d:01.6 cannot be used 00:16:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:52.730 EAL: Requested device 0000:3d:01.7 cannot be used 00:16:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:52.730 EAL: Requested device 0000:3d:02.0 cannot be used 00:16:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:52.730 EAL: Requested device 
0000:3d:02.1 cannot be used 00:16:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:52.730 EAL: Requested device 0000:3d:02.2 cannot be used 00:16:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:52.730 EAL: Requested device 0000:3d:02.3 cannot be used 00:16:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:52.730 EAL: Requested device 0000:3d:02.4 cannot be used 00:16:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:52.730 EAL: Requested device 0000:3d:02.5 cannot be used 00:16:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:52.730 EAL: Requested device 0000:3d:02.6 cannot be used 00:16:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:52.730 EAL: Requested device 0000:3d:02.7 cannot be used 00:16:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:52.730 EAL: Requested device 0000:3f:01.0 cannot be used 00:16:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:52.730 EAL: Requested device 0000:3f:01.1 cannot be used 00:16:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:52.730 EAL: Requested device 0000:3f:01.2 cannot be used 00:16:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:52.730 EAL: Requested device 0000:3f:01.3 cannot be used 00:16:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:52.730 EAL: Requested device 0000:3f:01.4 cannot be used 00:16:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:52.730 EAL: Requested device 0000:3f:01.5 cannot be used 00:16:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:52.730 EAL: Requested device 0000:3f:01.6 cannot be used 00:16:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:52.730 EAL: Requested device 0000:3f:01.7 cannot be 
used 00:16:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:52.730 EAL: Requested device 0000:3f:02.0 cannot be used 00:16:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:52.730 EAL: Requested device 0000:3f:02.1 cannot be used 00:16:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:52.730 EAL: Requested device 0000:3f:02.2 cannot be used 00:16:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:52.730 EAL: Requested device 0000:3f:02.3 cannot be used 00:16:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:52.730 EAL: Requested device 0000:3f:02.4 cannot be used 00:16:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:52.730 EAL: Requested device 0000:3f:02.5 cannot be used 00:16:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:52.730 EAL: Requested device 0000:3f:02.6 cannot be used 00:16:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:52.730 EAL: Requested device 0000:3f:02.7 cannot be used 00:16:52.730 [2024-07-23 04:12:01.412427] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.988 [2024-07-23 04:12:01.702109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.560 [2024-07-23 04:12:02.055302] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:53.560 [2024-07-23 04:12:02.055344] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:53.560 04:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:53.560 04:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:16:53.560 04:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 
'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:53.820 [2024-07-23 04:12:02.393206] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:53.820 [2024-07-23 04:12:02.393257] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:53.820 [2024-07-23 04:12:02.393271] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:53.820 [2024-07-23 04:12:02.393288] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:53.820 [2024-07-23 04:12:02.393299] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:53.820 [2024-07-23 04:12:02.393314] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:53.820 04:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:53.820 04:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:53.820 04:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:53.820 04:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:53.820 04:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:53.820 04:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:53.820 04:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:53.820 04:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:53.820 04:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:53.820 04:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 
00:16:53.820 04:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:53.820 04:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.079 04:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:54.079 "name": "Existed_Raid", 00:16:54.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.079 "strip_size_kb": 64, 00:16:54.079 "state": "configuring", 00:16:54.079 "raid_level": "raid0", 00:16:54.079 "superblock": false, 00:16:54.079 "num_base_bdevs": 3, 00:16:54.079 "num_base_bdevs_discovered": 0, 00:16:54.079 "num_base_bdevs_operational": 3, 00:16:54.079 "base_bdevs_list": [ 00:16:54.079 { 00:16:54.079 "name": "BaseBdev1", 00:16:54.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.079 "is_configured": false, 00:16:54.079 "data_offset": 0, 00:16:54.079 "data_size": 0 00:16:54.079 }, 00:16:54.079 { 00:16:54.079 "name": "BaseBdev2", 00:16:54.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.079 "is_configured": false, 00:16:54.079 "data_offset": 0, 00:16:54.079 "data_size": 0 00:16:54.079 }, 00:16:54.079 { 00:16:54.079 "name": "BaseBdev3", 00:16:54.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.079 "is_configured": false, 00:16:54.079 "data_offset": 0, 00:16:54.079 "data_size": 0 00:16:54.079 } 00:16:54.079 ] 00:16:54.079 }' 00:16:54.079 04:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:54.079 04:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.645 04:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:54.645 [2024-07-23 04:12:03.411808] 
bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:54.645 [2024-07-23 04:12:03.411850] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:16:54.645 04:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:54.904 [2024-07-23 04:12:03.576315] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:54.904 [2024-07-23 04:12:03.576357] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:54.904 [2024-07-23 04:12:03.576374] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:54.904 [2024-07-23 04:12:03.576393] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:54.904 [2024-07-23 04:12:03.576405] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:54.904 [2024-07-23 04:12:03.576420] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:54.904 04:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:55.163 [2024-07-23 04:12:03.861923] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:55.163 BaseBdev1 00:16:55.163 04:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:55.163 04:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:55.163 04:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 
00:16:55.163 04:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:55.163 04:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:55.163 04:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:55.163 04:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:55.426 04:12:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:55.715 [ 00:16:55.715 { 00:16:55.715 "name": "BaseBdev1", 00:16:55.715 "aliases": [ 00:16:55.715 "1c5645c9-386e-4d36-806a-0a7bb480608b" 00:16:55.715 ], 00:16:55.715 "product_name": "Malloc disk", 00:16:55.715 "block_size": 512, 00:16:55.715 "num_blocks": 65536, 00:16:55.715 "uuid": "1c5645c9-386e-4d36-806a-0a7bb480608b", 00:16:55.715 "assigned_rate_limits": { 00:16:55.715 "rw_ios_per_sec": 0, 00:16:55.715 "rw_mbytes_per_sec": 0, 00:16:55.715 "r_mbytes_per_sec": 0, 00:16:55.715 "w_mbytes_per_sec": 0 00:16:55.715 }, 00:16:55.715 "claimed": true, 00:16:55.715 "claim_type": "exclusive_write", 00:16:55.715 "zoned": false, 00:16:55.715 "supported_io_types": { 00:16:55.715 "read": true, 00:16:55.715 "write": true, 00:16:55.715 "unmap": true, 00:16:55.715 "flush": true, 00:16:55.715 "reset": true, 00:16:55.715 "nvme_admin": false, 00:16:55.715 "nvme_io": false, 00:16:55.715 "nvme_io_md": false, 00:16:55.715 "write_zeroes": true, 00:16:55.715 "zcopy": true, 00:16:55.715 "get_zone_info": false, 00:16:55.715 "zone_management": false, 00:16:55.715 "zone_append": false, 00:16:55.715 "compare": false, 00:16:55.715 "compare_and_write": false, 00:16:55.715 "abort": true, 00:16:55.715 "seek_hole": false, 00:16:55.715 "seek_data": false, 
00:16:55.715 "copy": true, 00:16:55.715 "nvme_iov_md": false 00:16:55.715 }, 00:16:55.715 "memory_domains": [ 00:16:55.715 { 00:16:55.715 "dma_device_id": "system", 00:16:55.715 "dma_device_type": 1 00:16:55.715 }, 00:16:55.715 { 00:16:55.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.715 "dma_device_type": 2 00:16:55.715 } 00:16:55.715 ], 00:16:55.715 "driver_specific": {} 00:16:55.715 } 00:16:55.715 ] 00:16:55.715 04:12:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:55.715 04:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:55.715 04:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:55.715 04:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:55.715 04:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:55.715 04:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:55.715 04:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:55.715 04:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:55.715 04:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:55.715 04:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:55.715 04:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:55.715 04:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:55.715 04:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:16:55.974 04:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:55.974 "name": "Existed_Raid", 00:16:55.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.974 "strip_size_kb": 64, 00:16:55.974 "state": "configuring", 00:16:55.975 "raid_level": "raid0", 00:16:55.975 "superblock": false, 00:16:55.975 "num_base_bdevs": 3, 00:16:55.975 "num_base_bdevs_discovered": 1, 00:16:55.975 "num_base_bdevs_operational": 3, 00:16:55.975 "base_bdevs_list": [ 00:16:55.975 { 00:16:55.975 "name": "BaseBdev1", 00:16:55.975 "uuid": "1c5645c9-386e-4d36-806a-0a7bb480608b", 00:16:55.975 "is_configured": true, 00:16:55.975 "data_offset": 0, 00:16:55.975 "data_size": 65536 00:16:55.975 }, 00:16:55.975 { 00:16:55.975 "name": "BaseBdev2", 00:16:55.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.975 "is_configured": false, 00:16:55.975 "data_offset": 0, 00:16:55.975 "data_size": 0 00:16:55.975 }, 00:16:55.975 { 00:16:55.975 "name": "BaseBdev3", 00:16:55.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.975 "is_configured": false, 00:16:55.975 "data_offset": 0, 00:16:55.975 "data_size": 0 00:16:55.975 } 00:16:55.975 ] 00:16:55.975 }' 00:16:55.975 04:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:55.975 04:12:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.543 04:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:56.802 [2024-07-23 04:12:05.357996] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:56.802 [2024-07-23 04:12:05.358050] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:16:56.802 04:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:57.061 [2024-07-23 04:12:05.586695] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:57.061 [2024-07-23 04:12:05.588971] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:57.061 [2024-07-23 04:12:05.589011] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:57.061 [2024-07-23 04:12:05.589025] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:57.061 [2024-07-23 04:12:05.589042] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:57.061 04:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:57.061 04:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:57.061 04:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:57.061 04:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:57.062 04:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:57.062 04:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:57.062 04:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:57.062 04:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:57.062 04:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:57.062 04:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:57.062 04:12:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:57.062 04:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:57.062 04:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:57.062 04:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:57.062 04:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:57.062 "name": "Existed_Raid", 00:16:57.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.062 "strip_size_kb": 64, 00:16:57.062 "state": "configuring", 00:16:57.062 "raid_level": "raid0", 00:16:57.062 "superblock": false, 00:16:57.062 "num_base_bdevs": 3, 00:16:57.062 "num_base_bdevs_discovered": 1, 00:16:57.062 "num_base_bdevs_operational": 3, 00:16:57.062 "base_bdevs_list": [ 00:16:57.062 { 00:16:57.062 "name": "BaseBdev1", 00:16:57.062 "uuid": "1c5645c9-386e-4d36-806a-0a7bb480608b", 00:16:57.062 "is_configured": true, 00:16:57.062 "data_offset": 0, 00:16:57.062 "data_size": 65536 00:16:57.062 }, 00:16:57.062 { 00:16:57.062 "name": "BaseBdev2", 00:16:57.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.062 "is_configured": false, 00:16:57.062 "data_offset": 0, 00:16:57.062 "data_size": 0 00:16:57.062 }, 00:16:57.062 { 00:16:57.062 "name": "BaseBdev3", 00:16:57.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.062 "is_configured": false, 00:16:57.062 "data_offset": 0, 00:16:57.062 "data_size": 0 00:16:57.062 } 00:16:57.062 ] 00:16:57.062 }' 00:16:57.062 04:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:57.062 04:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.630 04:12:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:57.889 [2024-07-23 04:12:06.653520] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:57.889 BaseBdev2 00:16:58.147 04:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:58.147 04:12:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:58.147 04:12:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:58.147 04:12:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:58.147 04:12:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:58.147 04:12:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:58.147 04:12:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:58.147 04:12:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:58.406 [ 00:16:58.406 { 00:16:58.406 "name": "BaseBdev2", 00:16:58.406 "aliases": [ 00:16:58.406 "edb5c39c-b8ad-4759-9bcb-150ab5b78837" 00:16:58.406 ], 00:16:58.406 "product_name": "Malloc disk", 00:16:58.406 "block_size": 512, 00:16:58.406 "num_blocks": 65536, 00:16:58.406 "uuid": "edb5c39c-b8ad-4759-9bcb-150ab5b78837", 00:16:58.406 "assigned_rate_limits": { 00:16:58.406 "rw_ios_per_sec": 0, 00:16:58.406 "rw_mbytes_per_sec": 0, 00:16:58.406 "r_mbytes_per_sec": 0, 00:16:58.406 "w_mbytes_per_sec": 0 00:16:58.406 }, 00:16:58.406 "claimed": true, 00:16:58.406 "claim_type": 
"exclusive_write", 00:16:58.406 "zoned": false, 00:16:58.406 "supported_io_types": { 00:16:58.406 "read": true, 00:16:58.406 "write": true, 00:16:58.406 "unmap": true, 00:16:58.406 "flush": true, 00:16:58.406 "reset": true, 00:16:58.406 "nvme_admin": false, 00:16:58.406 "nvme_io": false, 00:16:58.406 "nvme_io_md": false, 00:16:58.406 "write_zeroes": true, 00:16:58.406 "zcopy": true, 00:16:58.406 "get_zone_info": false, 00:16:58.406 "zone_management": false, 00:16:58.406 "zone_append": false, 00:16:58.406 "compare": false, 00:16:58.406 "compare_and_write": false, 00:16:58.406 "abort": true, 00:16:58.406 "seek_hole": false, 00:16:58.406 "seek_data": false, 00:16:58.406 "copy": true, 00:16:58.406 "nvme_iov_md": false 00:16:58.406 }, 00:16:58.406 "memory_domains": [ 00:16:58.406 { 00:16:58.406 "dma_device_id": "system", 00:16:58.406 "dma_device_type": 1 00:16:58.406 }, 00:16:58.406 { 00:16:58.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.406 "dma_device_type": 2 00:16:58.406 } 00:16:58.406 ], 00:16:58.406 "driver_specific": {} 00:16:58.406 } 00:16:58.406 ] 00:16:58.406 04:12:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:58.406 04:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:58.406 04:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:58.406 04:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:58.406 04:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:58.406 04:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:58.406 04:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:58.406 04:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 
00:16:58.406 04:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:58.406 04:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:58.406 04:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:58.406 04:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:58.406 04:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:58.406 04:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:58.406 04:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.665 04:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:58.665 "name": "Existed_Raid", 00:16:58.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.665 "strip_size_kb": 64, 00:16:58.665 "state": "configuring", 00:16:58.665 "raid_level": "raid0", 00:16:58.665 "superblock": false, 00:16:58.665 "num_base_bdevs": 3, 00:16:58.665 "num_base_bdevs_discovered": 2, 00:16:58.665 "num_base_bdevs_operational": 3, 00:16:58.665 "base_bdevs_list": [ 00:16:58.665 { 00:16:58.665 "name": "BaseBdev1", 00:16:58.665 "uuid": "1c5645c9-386e-4d36-806a-0a7bb480608b", 00:16:58.665 "is_configured": true, 00:16:58.665 "data_offset": 0, 00:16:58.665 "data_size": 65536 00:16:58.665 }, 00:16:58.665 { 00:16:58.665 "name": "BaseBdev2", 00:16:58.665 "uuid": "edb5c39c-b8ad-4759-9bcb-150ab5b78837", 00:16:58.665 "is_configured": true, 00:16:58.665 "data_offset": 0, 00:16:58.665 "data_size": 65536 00:16:58.665 }, 00:16:58.665 { 00:16:58.665 "name": "BaseBdev3", 00:16:58.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.665 "is_configured": false, 
00:16:58.665 "data_offset": 0, 00:16:58.665 "data_size": 0 00:16:58.665 } 00:16:58.665 ] 00:16:58.665 }' 00:16:58.665 04:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:58.665 04:12:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.233 04:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:59.491 [2024-07-23 04:12:08.178193] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:59.491 [2024-07-23 04:12:08.178239] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:16:59.491 [2024-07-23 04:12:08.178259] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:59.491 [2024-07-23 04:12:08.178584] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:16:59.491 [2024-07-23 04:12:08.178823] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:16:59.491 [2024-07-23 04:12:08.178838] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:16:59.491 [2024-07-23 04:12:08.179162] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:59.491 BaseBdev3 00:16:59.491 04:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:16:59.491 04:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:59.491 04:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:59.491 04:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:59.491 04:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z 
'' ]] 00:16:59.491 04:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:59.491 04:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:59.750 04:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:00.008 [ 00:17:00.008 { 00:17:00.008 "name": "BaseBdev3", 00:17:00.008 "aliases": [ 00:17:00.008 "ae11385f-df5b-4ab6-bb4a-27e208d35366" 00:17:00.008 ], 00:17:00.008 "product_name": "Malloc disk", 00:17:00.008 "block_size": 512, 00:17:00.008 "num_blocks": 65536, 00:17:00.008 "uuid": "ae11385f-df5b-4ab6-bb4a-27e208d35366", 00:17:00.008 "assigned_rate_limits": { 00:17:00.008 "rw_ios_per_sec": 0, 00:17:00.008 "rw_mbytes_per_sec": 0, 00:17:00.008 "r_mbytes_per_sec": 0, 00:17:00.008 "w_mbytes_per_sec": 0 00:17:00.008 }, 00:17:00.008 "claimed": true, 00:17:00.008 "claim_type": "exclusive_write", 00:17:00.008 "zoned": false, 00:17:00.008 "supported_io_types": { 00:17:00.008 "read": true, 00:17:00.008 "write": true, 00:17:00.008 "unmap": true, 00:17:00.008 "flush": true, 00:17:00.008 "reset": true, 00:17:00.008 "nvme_admin": false, 00:17:00.008 "nvme_io": false, 00:17:00.008 "nvme_io_md": false, 00:17:00.008 "write_zeroes": true, 00:17:00.008 "zcopy": true, 00:17:00.008 "get_zone_info": false, 00:17:00.008 "zone_management": false, 00:17:00.008 "zone_append": false, 00:17:00.008 "compare": false, 00:17:00.008 "compare_and_write": false, 00:17:00.009 "abort": true, 00:17:00.009 "seek_hole": false, 00:17:00.009 "seek_data": false, 00:17:00.009 "copy": true, 00:17:00.009 "nvme_iov_md": false 00:17:00.009 }, 00:17:00.009 "memory_domains": [ 00:17:00.009 { 00:17:00.009 "dma_device_id": "system", 00:17:00.009 "dma_device_type": 1 
00:17:00.009 }, 00:17:00.009 { 00:17:00.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.009 "dma_device_type": 2 00:17:00.009 } 00:17:00.009 ], 00:17:00.009 "driver_specific": {} 00:17:00.009 } 00:17:00.009 ] 00:17:00.009 04:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:00.009 04:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:00.009 04:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:00.009 04:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:17:00.009 04:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:00.009 04:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:00.009 04:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:00.009 04:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:00.009 04:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:00.009 04:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:00.009 04:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:00.009 04:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:00.009 04:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:00.009 04:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:00.009 04:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:17:00.267 04:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:00.268 "name": "Existed_Raid", 00:17:00.268 "uuid": "4a9931f4-e968-4b5a-a4c4-a07b481efea0", 00:17:00.268 "strip_size_kb": 64, 00:17:00.268 "state": "online", 00:17:00.268 "raid_level": "raid0", 00:17:00.268 "superblock": false, 00:17:00.268 "num_base_bdevs": 3, 00:17:00.268 "num_base_bdevs_discovered": 3, 00:17:00.268 "num_base_bdevs_operational": 3, 00:17:00.268 "base_bdevs_list": [ 00:17:00.268 { 00:17:00.268 "name": "BaseBdev1", 00:17:00.268 "uuid": "1c5645c9-386e-4d36-806a-0a7bb480608b", 00:17:00.268 "is_configured": true, 00:17:00.268 "data_offset": 0, 00:17:00.268 "data_size": 65536 00:17:00.268 }, 00:17:00.268 { 00:17:00.268 "name": "BaseBdev2", 00:17:00.268 "uuid": "edb5c39c-b8ad-4759-9bcb-150ab5b78837", 00:17:00.268 "is_configured": true, 00:17:00.268 "data_offset": 0, 00:17:00.268 "data_size": 65536 00:17:00.268 }, 00:17:00.268 { 00:17:00.268 "name": "BaseBdev3", 00:17:00.268 "uuid": "ae11385f-df5b-4ab6-bb4a-27e208d35366", 00:17:00.268 "is_configured": true, 00:17:00.268 "data_offset": 0, 00:17:00.268 "data_size": 65536 00:17:00.268 } 00:17:00.268 ] 00:17:00.268 }' 00:17:00.268 04:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:00.268 04:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.835 04:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:00.835 04:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:00.835 04:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:00.835 04:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:00.835 04:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 
00:17:00.835 04:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:00.835 04:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:00.835 04:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:01.094 [2024-07-23 04:12:09.646594] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:01.094 04:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:01.094 "name": "Existed_Raid", 00:17:01.094 "aliases": [ 00:17:01.094 "4a9931f4-e968-4b5a-a4c4-a07b481efea0" 00:17:01.094 ], 00:17:01.094 "product_name": "Raid Volume", 00:17:01.094 "block_size": 512, 00:17:01.094 "num_blocks": 196608, 00:17:01.094 "uuid": "4a9931f4-e968-4b5a-a4c4-a07b481efea0", 00:17:01.094 "assigned_rate_limits": { 00:17:01.094 "rw_ios_per_sec": 0, 00:17:01.094 "rw_mbytes_per_sec": 0, 00:17:01.094 "r_mbytes_per_sec": 0, 00:17:01.094 "w_mbytes_per_sec": 0 00:17:01.094 }, 00:17:01.094 "claimed": false, 00:17:01.094 "zoned": false, 00:17:01.094 "supported_io_types": { 00:17:01.094 "read": true, 00:17:01.094 "write": true, 00:17:01.094 "unmap": true, 00:17:01.094 "flush": true, 00:17:01.094 "reset": true, 00:17:01.094 "nvme_admin": false, 00:17:01.094 "nvme_io": false, 00:17:01.094 "nvme_io_md": false, 00:17:01.094 "write_zeroes": true, 00:17:01.094 "zcopy": false, 00:17:01.094 "get_zone_info": false, 00:17:01.094 "zone_management": false, 00:17:01.094 "zone_append": false, 00:17:01.094 "compare": false, 00:17:01.094 "compare_and_write": false, 00:17:01.094 "abort": false, 00:17:01.094 "seek_hole": false, 00:17:01.094 "seek_data": false, 00:17:01.094 "copy": false, 00:17:01.094 "nvme_iov_md": false 00:17:01.094 }, 00:17:01.094 "memory_domains": [ 00:17:01.094 { 00:17:01.094 "dma_device_id": "system", 00:17:01.094 
"dma_device_type": 1 00:17:01.094 }, 00:17:01.094 { 00:17:01.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.094 "dma_device_type": 2 00:17:01.094 }, 00:17:01.094 { 00:17:01.094 "dma_device_id": "system", 00:17:01.094 "dma_device_type": 1 00:17:01.094 }, 00:17:01.094 { 00:17:01.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.094 "dma_device_type": 2 00:17:01.094 }, 00:17:01.094 { 00:17:01.094 "dma_device_id": "system", 00:17:01.094 "dma_device_type": 1 00:17:01.094 }, 00:17:01.094 { 00:17:01.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.094 "dma_device_type": 2 00:17:01.094 } 00:17:01.094 ], 00:17:01.094 "driver_specific": { 00:17:01.094 "raid": { 00:17:01.094 "uuid": "4a9931f4-e968-4b5a-a4c4-a07b481efea0", 00:17:01.094 "strip_size_kb": 64, 00:17:01.094 "state": "online", 00:17:01.094 "raid_level": "raid0", 00:17:01.094 "superblock": false, 00:17:01.094 "num_base_bdevs": 3, 00:17:01.094 "num_base_bdevs_discovered": 3, 00:17:01.094 "num_base_bdevs_operational": 3, 00:17:01.094 "base_bdevs_list": [ 00:17:01.094 { 00:17:01.094 "name": "BaseBdev1", 00:17:01.094 "uuid": "1c5645c9-386e-4d36-806a-0a7bb480608b", 00:17:01.094 "is_configured": true, 00:17:01.094 "data_offset": 0, 00:17:01.094 "data_size": 65536 00:17:01.094 }, 00:17:01.094 { 00:17:01.094 "name": "BaseBdev2", 00:17:01.094 "uuid": "edb5c39c-b8ad-4759-9bcb-150ab5b78837", 00:17:01.094 "is_configured": true, 00:17:01.094 "data_offset": 0, 00:17:01.094 "data_size": 65536 00:17:01.094 }, 00:17:01.094 { 00:17:01.094 "name": "BaseBdev3", 00:17:01.094 "uuid": "ae11385f-df5b-4ab6-bb4a-27e208d35366", 00:17:01.094 "is_configured": true, 00:17:01.094 "data_offset": 0, 00:17:01.094 "data_size": 65536 00:17:01.094 } 00:17:01.094 ] 00:17:01.094 } 00:17:01.094 } 00:17:01.094 }' 00:17:01.094 04:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:01.094 04:12:09 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:01.094 BaseBdev2 00:17:01.094 BaseBdev3' 00:17:01.094 04:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:01.094 04:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:01.094 04:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:01.352 04:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:01.352 "name": "BaseBdev1", 00:17:01.353 "aliases": [ 00:17:01.353 "1c5645c9-386e-4d36-806a-0a7bb480608b" 00:17:01.353 ], 00:17:01.353 "product_name": "Malloc disk", 00:17:01.353 "block_size": 512, 00:17:01.353 "num_blocks": 65536, 00:17:01.353 "uuid": "1c5645c9-386e-4d36-806a-0a7bb480608b", 00:17:01.353 "assigned_rate_limits": { 00:17:01.353 "rw_ios_per_sec": 0, 00:17:01.353 "rw_mbytes_per_sec": 0, 00:17:01.353 "r_mbytes_per_sec": 0, 00:17:01.353 "w_mbytes_per_sec": 0 00:17:01.353 }, 00:17:01.353 "claimed": true, 00:17:01.353 "claim_type": "exclusive_write", 00:17:01.353 "zoned": false, 00:17:01.353 "supported_io_types": { 00:17:01.353 "read": true, 00:17:01.353 "write": true, 00:17:01.353 "unmap": true, 00:17:01.353 "flush": true, 00:17:01.353 "reset": true, 00:17:01.353 "nvme_admin": false, 00:17:01.353 "nvme_io": false, 00:17:01.353 "nvme_io_md": false, 00:17:01.353 "write_zeroes": true, 00:17:01.353 "zcopy": true, 00:17:01.353 "get_zone_info": false, 00:17:01.353 "zone_management": false, 00:17:01.353 "zone_append": false, 00:17:01.353 "compare": false, 00:17:01.353 "compare_and_write": false, 00:17:01.353 "abort": true, 00:17:01.353 "seek_hole": false, 00:17:01.353 "seek_data": false, 00:17:01.353 "copy": true, 00:17:01.353 "nvme_iov_md": false 00:17:01.353 }, 00:17:01.353 "memory_domains": [ 00:17:01.353 { 00:17:01.353 "dma_device_id": 
"system", 00:17:01.353 "dma_device_type": 1 00:17:01.353 }, 00:17:01.353 { 00:17:01.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.353 "dma_device_type": 2 00:17:01.353 } 00:17:01.353 ], 00:17:01.353 "driver_specific": {} 00:17:01.353 }' 00:17:01.353 04:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:01.353 04:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:01.353 04:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:01.353 04:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:01.353 04:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:01.353 04:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:01.353 04:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:01.612 04:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:01.612 04:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:01.612 04:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:01.612 04:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:01.612 04:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:01.612 04:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:01.612 04:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:01.612 04:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:01.871 04:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # 
base_bdev_info='{ 00:17:01.871 "name": "BaseBdev2", 00:17:01.871 "aliases": [ 00:17:01.871 "edb5c39c-b8ad-4759-9bcb-150ab5b78837" 00:17:01.871 ], 00:17:01.871 "product_name": "Malloc disk", 00:17:01.871 "block_size": 512, 00:17:01.871 "num_blocks": 65536, 00:17:01.871 "uuid": "edb5c39c-b8ad-4759-9bcb-150ab5b78837", 00:17:01.871 "assigned_rate_limits": { 00:17:01.871 "rw_ios_per_sec": 0, 00:17:01.871 "rw_mbytes_per_sec": 0, 00:17:01.871 "r_mbytes_per_sec": 0, 00:17:01.871 "w_mbytes_per_sec": 0 00:17:01.871 }, 00:17:01.871 "claimed": true, 00:17:01.871 "claim_type": "exclusive_write", 00:17:01.871 "zoned": false, 00:17:01.871 "supported_io_types": { 00:17:01.871 "read": true, 00:17:01.871 "write": true, 00:17:01.871 "unmap": true, 00:17:01.871 "flush": true, 00:17:01.871 "reset": true, 00:17:01.871 "nvme_admin": false, 00:17:01.871 "nvme_io": false, 00:17:01.871 "nvme_io_md": false, 00:17:01.871 "write_zeroes": true, 00:17:01.871 "zcopy": true, 00:17:01.871 "get_zone_info": false, 00:17:01.871 "zone_management": false, 00:17:01.871 "zone_append": false, 00:17:01.871 "compare": false, 00:17:01.871 "compare_and_write": false, 00:17:01.871 "abort": true, 00:17:01.871 "seek_hole": false, 00:17:01.871 "seek_data": false, 00:17:01.871 "copy": true, 00:17:01.871 "nvme_iov_md": false 00:17:01.871 }, 00:17:01.871 "memory_domains": [ 00:17:01.871 { 00:17:01.871 "dma_device_id": "system", 00:17:01.871 "dma_device_type": 1 00:17:01.871 }, 00:17:01.871 { 00:17:01.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.871 "dma_device_type": 2 00:17:01.871 } 00:17:01.871 ], 00:17:01.871 "driver_specific": {} 00:17:01.871 }' 00:17:01.871 04:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:01.871 04:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:01.871 04:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:01.871 04:12:10 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:01.871 04:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:02.130 04:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:02.130 04:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:02.130 04:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:02.130 04:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:02.130 04:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:02.130 04:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:02.130 04:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:02.130 04:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:02.130 04:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:02.130 04:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:02.389 04:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:02.389 "name": "BaseBdev3", 00:17:02.389 "aliases": [ 00:17:02.389 "ae11385f-df5b-4ab6-bb4a-27e208d35366" 00:17:02.389 ], 00:17:02.389 "product_name": "Malloc disk", 00:17:02.389 "block_size": 512, 00:17:02.389 "num_blocks": 65536, 00:17:02.389 "uuid": "ae11385f-df5b-4ab6-bb4a-27e208d35366", 00:17:02.389 "assigned_rate_limits": { 00:17:02.389 "rw_ios_per_sec": 0, 00:17:02.389 "rw_mbytes_per_sec": 0, 00:17:02.389 "r_mbytes_per_sec": 0, 00:17:02.389 "w_mbytes_per_sec": 0 00:17:02.389 }, 00:17:02.389 "claimed": true, 00:17:02.389 "claim_type": "exclusive_write", 00:17:02.389 "zoned": false, 
00:17:02.389 "supported_io_types": { 00:17:02.389 "read": true, 00:17:02.389 "write": true, 00:17:02.389 "unmap": true, 00:17:02.389 "flush": true, 00:17:02.389 "reset": true, 00:17:02.389 "nvme_admin": false, 00:17:02.389 "nvme_io": false, 00:17:02.389 "nvme_io_md": false, 00:17:02.389 "write_zeroes": true, 00:17:02.389 "zcopy": true, 00:17:02.389 "get_zone_info": false, 00:17:02.389 "zone_management": false, 00:17:02.389 "zone_append": false, 00:17:02.389 "compare": false, 00:17:02.389 "compare_and_write": false, 00:17:02.389 "abort": true, 00:17:02.389 "seek_hole": false, 00:17:02.389 "seek_data": false, 00:17:02.389 "copy": true, 00:17:02.389 "nvme_iov_md": false 00:17:02.389 }, 00:17:02.389 "memory_domains": [ 00:17:02.389 { 00:17:02.389 "dma_device_id": "system", 00:17:02.389 "dma_device_type": 1 00:17:02.389 }, 00:17:02.389 { 00:17:02.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.389 "dma_device_type": 2 00:17:02.389 } 00:17:02.389 ], 00:17:02.389 "driver_specific": {} 00:17:02.389 }' 00:17:02.389 04:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:02.389 04:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:02.389 04:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:02.389 04:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:02.651 04:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:02.651 04:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:02.651 04:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:02.651 04:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:02.651 04:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:02.651 04:12:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:02.651 04:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:02.651 04:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:02.651 04:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:02.912 [2024-07-23 04:12:11.619729] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:02.912 [2024-07-23 04:12:11.619765] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:02.912 [2024-07-23 04:12:11.619826] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:02.912 04:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:02.912 04:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:17:02.912 04:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:02.912 04:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:02.912 04:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:17:02.912 04:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:17:02.912 04:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:02.912 04:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:17:02.912 04:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:02.912 04:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:02.912 04:12:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:02.912 04:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:02.912 04:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:02.912 04:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:02.912 04:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:02.912 04:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:02.912 04:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:03.171 04:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:03.171 "name": "Existed_Raid", 00:17:03.171 "uuid": "4a9931f4-e968-4b5a-a4c4-a07b481efea0", 00:17:03.171 "strip_size_kb": 64, 00:17:03.171 "state": "offline", 00:17:03.171 "raid_level": "raid0", 00:17:03.171 "superblock": false, 00:17:03.171 "num_base_bdevs": 3, 00:17:03.171 "num_base_bdevs_discovered": 2, 00:17:03.171 "num_base_bdevs_operational": 2, 00:17:03.171 "base_bdevs_list": [ 00:17:03.171 { 00:17:03.171 "name": null, 00:17:03.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.171 "is_configured": false, 00:17:03.171 "data_offset": 0, 00:17:03.171 "data_size": 65536 00:17:03.171 }, 00:17:03.171 { 00:17:03.171 "name": "BaseBdev2", 00:17:03.171 "uuid": "edb5c39c-b8ad-4759-9bcb-150ab5b78837", 00:17:03.171 "is_configured": true, 00:17:03.171 "data_offset": 0, 00:17:03.171 "data_size": 65536 00:17:03.171 }, 00:17:03.171 { 00:17:03.171 "name": "BaseBdev3", 00:17:03.171 "uuid": "ae11385f-df5b-4ab6-bb4a-27e208d35366", 00:17:03.171 "is_configured": true, 00:17:03.171 "data_offset": 0, 00:17:03.171 
"data_size": 65536 00:17:03.171 } 00:17:03.171 ] 00:17:03.171 }' 00:17:03.171 04:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:03.171 04:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.739 04:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:03.739 04:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:03.739 04:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:03.739 04:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:03.999 04:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:03.999 04:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:03.999 04:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:04.258 [2024-07-23 04:12:12.905168] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:04.517 04:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:04.517 04:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:04.517 04:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:04.517 04:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:04.517 04:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:04.517 04:12:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:04.517 04:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:04.776 [2024-07-23 04:12:13.488230] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:04.776 [2024-07-23 04:12:13.488290] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:17:05.035 04:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:05.035 04:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:05.035 04:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:05.035 04:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:05.299 04:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:05.299 04:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:05.299 04:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:17:05.299 04:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:17:05.299 04:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:05.299 04:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:05.558 BaseBdev2 00:17:05.558 04:12:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:17:05.558 04:12:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:05.558 04:12:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:05.558 04:12:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:05.558 04:12:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:05.558 04:12:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:05.558 04:12:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:05.818 04:12:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:05.818 [ 00:17:05.818 { 00:17:05.818 "name": "BaseBdev2", 00:17:05.818 "aliases": [ 00:17:05.818 "9d900f56-cbea-4db7-8162-d0f7361734e3" 00:17:05.818 ], 00:17:05.818 "product_name": "Malloc disk", 00:17:05.818 "block_size": 512, 00:17:05.818 "num_blocks": 65536, 00:17:05.818 "uuid": "9d900f56-cbea-4db7-8162-d0f7361734e3", 00:17:05.818 "assigned_rate_limits": { 00:17:05.818 "rw_ios_per_sec": 0, 00:17:05.818 "rw_mbytes_per_sec": 0, 00:17:05.818 "r_mbytes_per_sec": 0, 00:17:05.818 "w_mbytes_per_sec": 0 00:17:05.818 }, 00:17:05.818 "claimed": false, 00:17:05.818 "zoned": false, 00:17:05.818 "supported_io_types": { 00:17:05.818 "read": true, 00:17:05.818 "write": true, 00:17:05.818 "unmap": true, 00:17:05.818 "flush": true, 00:17:05.818 "reset": true, 00:17:05.818 "nvme_admin": false, 00:17:05.818 "nvme_io": false, 00:17:05.818 "nvme_io_md": false, 00:17:05.818 "write_zeroes": true, 00:17:05.818 "zcopy": true, 00:17:05.818 "get_zone_info": false, 00:17:05.818 "zone_management": false, 00:17:05.818 "zone_append": false, 
00:17:05.818 "compare": false, 00:17:05.818 "compare_and_write": false, 00:17:05.818 "abort": true, 00:17:05.818 "seek_hole": false, 00:17:05.818 "seek_data": false, 00:17:05.818 "copy": true, 00:17:05.818 "nvme_iov_md": false 00:17:05.818 }, 00:17:05.818 "memory_domains": [ 00:17:05.818 { 00:17:05.818 "dma_device_id": "system", 00:17:05.818 "dma_device_type": 1 00:17:05.818 }, 00:17:05.818 { 00:17:05.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.818 "dma_device_type": 2 00:17:05.818 } 00:17:05.818 ], 00:17:05.818 "driver_specific": {} 00:17:05.818 } 00:17:05.818 ] 00:17:05.818 04:12:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:05.818 04:12:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:05.818 04:12:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:05.818 04:12:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:06.078 BaseBdev3 00:17:06.078 04:12:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:17:06.078 04:12:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:17:06.078 04:12:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:06.078 04:12:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:06.078 04:12:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:06.078 04:12:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:06.078 04:12:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_wait_for_examine 00:17:06.337 04:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:06.596 [ 00:17:06.596 { 00:17:06.596 "name": "BaseBdev3", 00:17:06.596 "aliases": [ 00:17:06.596 "d5a2b493-6829-497d-a824-d99c3b1f88de" 00:17:06.596 ], 00:17:06.596 "product_name": "Malloc disk", 00:17:06.596 "block_size": 512, 00:17:06.596 "num_blocks": 65536, 00:17:06.596 "uuid": "d5a2b493-6829-497d-a824-d99c3b1f88de", 00:17:06.596 "assigned_rate_limits": { 00:17:06.596 "rw_ios_per_sec": 0, 00:17:06.597 "rw_mbytes_per_sec": 0, 00:17:06.597 "r_mbytes_per_sec": 0, 00:17:06.597 "w_mbytes_per_sec": 0 00:17:06.597 }, 00:17:06.597 "claimed": false, 00:17:06.597 "zoned": false, 00:17:06.597 "supported_io_types": { 00:17:06.597 "read": true, 00:17:06.597 "write": true, 00:17:06.597 "unmap": true, 00:17:06.597 "flush": true, 00:17:06.597 "reset": true, 00:17:06.597 "nvme_admin": false, 00:17:06.597 "nvme_io": false, 00:17:06.597 "nvme_io_md": false, 00:17:06.597 "write_zeroes": true, 00:17:06.597 "zcopy": true, 00:17:06.597 "get_zone_info": false, 00:17:06.597 "zone_management": false, 00:17:06.597 "zone_append": false, 00:17:06.597 "compare": false, 00:17:06.597 "compare_and_write": false, 00:17:06.597 "abort": true, 00:17:06.597 "seek_hole": false, 00:17:06.597 "seek_data": false, 00:17:06.597 "copy": true, 00:17:06.597 "nvme_iov_md": false 00:17:06.597 }, 00:17:06.597 "memory_domains": [ 00:17:06.597 { 00:17:06.597 "dma_device_id": "system", 00:17:06.597 "dma_device_type": 1 00:17:06.597 }, 00:17:06.597 { 00:17:06.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:06.597 "dma_device_type": 2 00:17:06.597 } 00:17:06.597 ], 00:17:06.597 "driver_specific": {} 00:17:06.597 } 00:17:06.597 ] 00:17:06.597 04:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:06.597 04:12:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:06.597 04:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:06.597 04:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:06.856 [2024-07-23 04:12:15.519885] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:06.856 [2024-07-23 04:12:15.519933] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:06.856 [2024-07-23 04:12:15.519964] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:06.856 [2024-07-23 04:12:15.522307] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:06.856 04:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:06.856 04:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:06.856 04:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:06.856 04:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:06.856 04:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:06.856 04:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:06.856 04:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:06.856 04:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:06.856 04:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:17:06.856 04:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:06.856 04:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:06.856 04:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:07.114 04:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:07.114 "name": "Existed_Raid", 00:17:07.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.114 "strip_size_kb": 64, 00:17:07.114 "state": "configuring", 00:17:07.114 "raid_level": "raid0", 00:17:07.114 "superblock": false, 00:17:07.114 "num_base_bdevs": 3, 00:17:07.114 "num_base_bdevs_discovered": 2, 00:17:07.114 "num_base_bdevs_operational": 3, 00:17:07.114 "base_bdevs_list": [ 00:17:07.114 { 00:17:07.114 "name": "BaseBdev1", 00:17:07.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.115 "is_configured": false, 00:17:07.115 "data_offset": 0, 00:17:07.115 "data_size": 0 00:17:07.115 }, 00:17:07.115 { 00:17:07.115 "name": "BaseBdev2", 00:17:07.115 "uuid": "9d900f56-cbea-4db7-8162-d0f7361734e3", 00:17:07.115 "is_configured": true, 00:17:07.115 "data_offset": 0, 00:17:07.115 "data_size": 65536 00:17:07.115 }, 00:17:07.115 { 00:17:07.115 "name": "BaseBdev3", 00:17:07.115 "uuid": "d5a2b493-6829-497d-a824-d99c3b1f88de", 00:17:07.115 "is_configured": true, 00:17:07.115 "data_offset": 0, 00:17:07.115 "data_size": 65536 00:17:07.115 } 00:17:07.115 ] 00:17:07.115 }' 00:17:07.115 04:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:07.115 04:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.683 04:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:17:07.942 [2024-07-23 04:12:16.534720] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:07.942 04:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:07.942 04:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:07.942 04:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:07.942 04:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:07.942 04:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:07.942 04:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:07.942 04:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:07.942 04:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:07.942 04:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:07.942 04:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:07.942 04:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:07.942 04:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:08.201 04:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:08.201 "name": "Existed_Raid", 00:17:08.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.201 "strip_size_kb": 64, 00:17:08.201 "state": "configuring", 
00:17:08.201 "raid_level": "raid0", 00:17:08.201 "superblock": false, 00:17:08.201 "num_base_bdevs": 3, 00:17:08.201 "num_base_bdevs_discovered": 1, 00:17:08.201 "num_base_bdevs_operational": 3, 00:17:08.201 "base_bdevs_list": [ 00:17:08.201 { 00:17:08.201 "name": "BaseBdev1", 00:17:08.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.201 "is_configured": false, 00:17:08.201 "data_offset": 0, 00:17:08.201 "data_size": 0 00:17:08.201 }, 00:17:08.201 { 00:17:08.201 "name": null, 00:17:08.201 "uuid": "9d900f56-cbea-4db7-8162-d0f7361734e3", 00:17:08.201 "is_configured": false, 00:17:08.201 "data_offset": 0, 00:17:08.201 "data_size": 65536 00:17:08.201 }, 00:17:08.201 { 00:17:08.201 "name": "BaseBdev3", 00:17:08.201 "uuid": "d5a2b493-6829-497d-a824-d99c3b1f88de", 00:17:08.201 "is_configured": true, 00:17:08.201 "data_offset": 0, 00:17:08.202 "data_size": 65536 00:17:08.202 } 00:17:08.202 ] 00:17:08.202 }' 00:17:08.202 04:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:08.202 04:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.770 04:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:08.770 04:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:08.770 04:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:17:08.770 04:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:09.030 [2024-07-23 04:12:17.789308] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:09.030 BaseBdev1 00:17:09.030 04:12:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:17:09.030 04:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:09.030 04:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:09.030 04:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:09.030 04:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:09.030 04:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:09.030 04:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:09.289 04:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:09.548 [ 00:17:09.548 { 00:17:09.548 "name": "BaseBdev1", 00:17:09.548 "aliases": [ 00:17:09.548 "ec1268f4-1c65-4bfa-8f09-33840973ba14" 00:17:09.548 ], 00:17:09.548 "product_name": "Malloc disk", 00:17:09.548 "block_size": 512, 00:17:09.548 "num_blocks": 65536, 00:17:09.548 "uuid": "ec1268f4-1c65-4bfa-8f09-33840973ba14", 00:17:09.548 "assigned_rate_limits": { 00:17:09.548 "rw_ios_per_sec": 0, 00:17:09.548 "rw_mbytes_per_sec": 0, 00:17:09.548 "r_mbytes_per_sec": 0, 00:17:09.548 "w_mbytes_per_sec": 0 00:17:09.548 }, 00:17:09.548 "claimed": true, 00:17:09.548 "claim_type": "exclusive_write", 00:17:09.548 "zoned": false, 00:17:09.548 "supported_io_types": { 00:17:09.548 "read": true, 00:17:09.548 "write": true, 00:17:09.548 "unmap": true, 00:17:09.548 "flush": true, 00:17:09.548 "reset": true, 00:17:09.548 "nvme_admin": false, 00:17:09.548 "nvme_io": false, 00:17:09.548 "nvme_io_md": false, 00:17:09.548 "write_zeroes": true, 00:17:09.548 "zcopy": 
true, 00:17:09.548 "get_zone_info": false, 00:17:09.548 "zone_management": false, 00:17:09.548 "zone_append": false, 00:17:09.548 "compare": false, 00:17:09.548 "compare_and_write": false, 00:17:09.548 "abort": true, 00:17:09.548 "seek_hole": false, 00:17:09.548 "seek_data": false, 00:17:09.548 "copy": true, 00:17:09.548 "nvme_iov_md": false 00:17:09.548 }, 00:17:09.548 "memory_domains": [ 00:17:09.548 { 00:17:09.548 "dma_device_id": "system", 00:17:09.548 "dma_device_type": 1 00:17:09.548 }, 00:17:09.548 { 00:17:09.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:09.548 "dma_device_type": 2 00:17:09.548 } 00:17:09.548 ], 00:17:09.548 "driver_specific": {} 00:17:09.548 } 00:17:09.548 ] 00:17:09.548 04:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:09.548 04:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:09.548 04:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:09.548 04:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:09.548 04:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:09.548 04:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:09.548 04:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:09.548 04:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:09.548 04:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:09.548 04:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:09.548 04:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:09.548 04:12:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.548 04:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:09.807 04:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:09.807 "name": "Existed_Raid", 00:17:09.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.807 "strip_size_kb": 64, 00:17:09.807 "state": "configuring", 00:17:09.807 "raid_level": "raid0", 00:17:09.808 "superblock": false, 00:17:09.808 "num_base_bdevs": 3, 00:17:09.808 "num_base_bdevs_discovered": 2, 00:17:09.808 "num_base_bdevs_operational": 3, 00:17:09.808 "base_bdevs_list": [ 00:17:09.808 { 00:17:09.808 "name": "BaseBdev1", 00:17:09.808 "uuid": "ec1268f4-1c65-4bfa-8f09-33840973ba14", 00:17:09.808 "is_configured": true, 00:17:09.808 "data_offset": 0, 00:17:09.808 "data_size": 65536 00:17:09.808 }, 00:17:09.808 { 00:17:09.808 "name": null, 00:17:09.808 "uuid": "9d900f56-cbea-4db7-8162-d0f7361734e3", 00:17:09.808 "is_configured": false, 00:17:09.808 "data_offset": 0, 00:17:09.808 "data_size": 65536 00:17:09.808 }, 00:17:09.808 { 00:17:09.808 "name": "BaseBdev3", 00:17:09.808 "uuid": "d5a2b493-6829-497d-a824-d99c3b1f88de", 00:17:09.808 "is_configured": true, 00:17:09.808 "data_offset": 0, 00:17:09.808 "data_size": 65536 00:17:09.808 } 00:17:09.808 ] 00:17:09.808 }' 00:17:09.808 04:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:09.808 04:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.375 04:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.375 04:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 
-- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:10.634 04:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:17:10.634 04:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:17:10.894 [2024-07-23 04:12:19.477994] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:10.894 04:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:10.894 04:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:10.894 04:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:10.894 04:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:10.894 04:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:10.894 04:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:10.894 04:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:10.894 04:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:10.894 04:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:10.894 04:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:10.894 04:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.894 04:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:11.153 04:12:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:11.153 "name": "Existed_Raid", 00:17:11.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.153 "strip_size_kb": 64, 00:17:11.153 "state": "configuring", 00:17:11.153 "raid_level": "raid0", 00:17:11.153 "superblock": false, 00:17:11.153 "num_base_bdevs": 3, 00:17:11.153 "num_base_bdevs_discovered": 1, 00:17:11.153 "num_base_bdevs_operational": 3, 00:17:11.153 "base_bdevs_list": [ 00:17:11.153 { 00:17:11.153 "name": "BaseBdev1", 00:17:11.153 "uuid": "ec1268f4-1c65-4bfa-8f09-33840973ba14", 00:17:11.153 "is_configured": true, 00:17:11.153 "data_offset": 0, 00:17:11.153 "data_size": 65536 00:17:11.153 }, 00:17:11.153 { 00:17:11.153 "name": null, 00:17:11.153 "uuid": "9d900f56-cbea-4db7-8162-d0f7361734e3", 00:17:11.153 "is_configured": false, 00:17:11.153 "data_offset": 0, 00:17:11.153 "data_size": 65536 00:17:11.153 }, 00:17:11.153 { 00:17:11.153 "name": null, 00:17:11.153 "uuid": "d5a2b493-6829-497d-a824-d99c3b1f88de", 00:17:11.153 "is_configured": false, 00:17:11.153 "data_offset": 0, 00:17:11.153 "data_size": 65536 00:17:11.153 } 00:17:11.153 ] 00:17:11.153 }' 00:17:11.153 04:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:11.153 04:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.718 04:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.718 04:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:11.977 04:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:17:11.977 04:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:11.977 [2024-07-23 04:12:20.725410] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:11.977 04:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:11.977 04:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:11.977 04:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:11.977 04:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:11.977 04:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:11.977 04:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:11.977 04:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:11.977 04:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:11.977 04:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:11.977 04:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:11.977 04:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.977 04:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:12.235 04:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:12.235 "name": "Existed_Raid", 00:17:12.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.235 "strip_size_kb": 64, 00:17:12.235 "state": "configuring", 00:17:12.235 "raid_level": "raid0", 00:17:12.235 
"superblock": false, 00:17:12.235 "num_base_bdevs": 3, 00:17:12.235 "num_base_bdevs_discovered": 2, 00:17:12.235 "num_base_bdevs_operational": 3, 00:17:12.235 "base_bdevs_list": [ 00:17:12.235 { 00:17:12.235 "name": "BaseBdev1", 00:17:12.235 "uuid": "ec1268f4-1c65-4bfa-8f09-33840973ba14", 00:17:12.235 "is_configured": true, 00:17:12.235 "data_offset": 0, 00:17:12.235 "data_size": 65536 00:17:12.235 }, 00:17:12.235 { 00:17:12.235 "name": null, 00:17:12.235 "uuid": "9d900f56-cbea-4db7-8162-d0f7361734e3", 00:17:12.235 "is_configured": false, 00:17:12.235 "data_offset": 0, 00:17:12.235 "data_size": 65536 00:17:12.235 }, 00:17:12.235 { 00:17:12.235 "name": "BaseBdev3", 00:17:12.235 "uuid": "d5a2b493-6829-497d-a824-d99c3b1f88de", 00:17:12.235 "is_configured": true, 00:17:12.235 "data_offset": 0, 00:17:12.235 "data_size": 65536 00:17:12.235 } 00:17:12.235 ] 00:17:12.235 }' 00:17:12.235 04:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:12.235 04:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.803 04:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:12.803 04:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:13.062 04:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:17:13.062 04:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:13.629 [2024-07-23 04:12:22.241877] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:13.629 04:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:13.629 
04:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:13.629 04:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:13.629 04:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:13.629 04:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:13.629 04:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:13.629 04:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:13.629 04:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:13.629 04:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:13.629 04:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:13.629 04:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:13.629 04:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:13.888 04:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:13.888 "name": "Existed_Raid", 00:17:13.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.888 "strip_size_kb": 64, 00:17:13.888 "state": "configuring", 00:17:13.888 "raid_level": "raid0", 00:17:13.888 "superblock": false, 00:17:13.888 "num_base_bdevs": 3, 00:17:13.888 "num_base_bdevs_discovered": 1, 00:17:13.888 "num_base_bdevs_operational": 3, 00:17:13.888 "base_bdevs_list": [ 00:17:13.888 { 00:17:13.888 "name": null, 00:17:13.888 "uuid": "ec1268f4-1c65-4bfa-8f09-33840973ba14", 00:17:13.888 "is_configured": false, 00:17:13.888 
"data_offset": 0, 00:17:13.888 "data_size": 65536 00:17:13.888 }, 00:17:13.888 { 00:17:13.888 "name": null, 00:17:13.888 "uuid": "9d900f56-cbea-4db7-8162-d0f7361734e3", 00:17:13.888 "is_configured": false, 00:17:13.888 "data_offset": 0, 00:17:13.888 "data_size": 65536 00:17:13.888 }, 00:17:13.888 { 00:17:13.888 "name": "BaseBdev3", 00:17:13.888 "uuid": "d5a2b493-6829-497d-a824-d99c3b1f88de", 00:17:13.888 "is_configured": true, 00:17:13.888 "data_offset": 0, 00:17:13.888 "data_size": 65536 00:17:13.888 } 00:17:13.888 ] 00:17:13.888 }' 00:17:13.888 04:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:13.888 04:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.454 04:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:14.454 04:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:14.718 04:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:17:14.718 04:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:15.284 [2024-07-23 04:12:23.855883] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:15.284 04:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:15.284 04:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:15.284 04:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:15.284 04:12:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:15.284 04:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:15.284 04:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:15.284 04:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:15.285 04:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:15.285 04:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:15.285 04:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:15.285 04:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:15.285 04:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:15.544 04:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:15.544 "name": "Existed_Raid", 00:17:15.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.544 "strip_size_kb": 64, 00:17:15.544 "state": "configuring", 00:17:15.544 "raid_level": "raid0", 00:17:15.544 "superblock": false, 00:17:15.544 "num_base_bdevs": 3, 00:17:15.544 "num_base_bdevs_discovered": 2, 00:17:15.544 "num_base_bdevs_operational": 3, 00:17:15.544 "base_bdevs_list": [ 00:17:15.544 { 00:17:15.544 "name": null, 00:17:15.544 "uuid": "ec1268f4-1c65-4bfa-8f09-33840973ba14", 00:17:15.544 "is_configured": false, 00:17:15.544 "data_offset": 0, 00:17:15.544 "data_size": 65536 00:17:15.544 }, 00:17:15.544 { 00:17:15.544 "name": "BaseBdev2", 00:17:15.544 "uuid": "9d900f56-cbea-4db7-8162-d0f7361734e3", 00:17:15.544 "is_configured": true, 00:17:15.544 "data_offset": 0, 00:17:15.544 "data_size": 65536 00:17:15.544 }, 
00:17:15.544 { 00:17:15.544 "name": "BaseBdev3", 00:17:15.544 "uuid": "d5a2b493-6829-497d-a824-d99c3b1f88de", 00:17:15.544 "is_configured": true, 00:17:15.544 "data_offset": 0, 00:17:15.544 "data_size": 65536 00:17:15.544 } 00:17:15.544 ] 00:17:15.544 }' 00:17:15.544 04:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:15.544 04:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.115 04:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:16.115 04:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:16.115 04:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:17:16.374 04:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:16.374 04:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:16.374 04:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u ec1268f4-1c65-4bfa-8f09-33840973ba14 00:17:16.633 [2024-07-23 04:12:25.385871] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:16.633 [2024-07-23 04:12:25.385917] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041780 00:17:16.633 [2024-07-23 04:12:25.385932] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:16.633 [2024-07-23 04:12:25.386248] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010980 00:17:16.633 
[2024-07-23 04:12:25.386477] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041780 00:17:16.633 [2024-07-23 04:12:25.386491] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000041780 00:17:16.633 [2024-07-23 04:12:25.386785] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:16.633 NewBaseBdev 00:17:16.633 04:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:17:16.633 04:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:17:16.633 04:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:16.633 04:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:16.633 04:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:16.633 04:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:16.633 04:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:16.893 04:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:17.153 [ 00:17:17.153 { 00:17:17.153 "name": "NewBaseBdev", 00:17:17.153 "aliases": [ 00:17:17.153 "ec1268f4-1c65-4bfa-8f09-33840973ba14" 00:17:17.153 ], 00:17:17.153 "product_name": "Malloc disk", 00:17:17.153 "block_size": 512, 00:17:17.153 "num_blocks": 65536, 00:17:17.153 "uuid": "ec1268f4-1c65-4bfa-8f09-33840973ba14", 00:17:17.153 "assigned_rate_limits": { 00:17:17.153 "rw_ios_per_sec": 0, 00:17:17.153 "rw_mbytes_per_sec": 0, 00:17:17.153 "r_mbytes_per_sec": 0, 
00:17:17.153 "w_mbytes_per_sec": 0 00:17:17.153 }, 00:17:17.153 "claimed": true, 00:17:17.153 "claim_type": "exclusive_write", 00:17:17.153 "zoned": false, 00:17:17.153 "supported_io_types": { 00:17:17.153 "read": true, 00:17:17.153 "write": true, 00:17:17.153 "unmap": true, 00:17:17.153 "flush": true, 00:17:17.153 "reset": true, 00:17:17.153 "nvme_admin": false, 00:17:17.153 "nvme_io": false, 00:17:17.153 "nvme_io_md": false, 00:17:17.153 "write_zeroes": true, 00:17:17.153 "zcopy": true, 00:17:17.153 "get_zone_info": false, 00:17:17.153 "zone_management": false, 00:17:17.153 "zone_append": false, 00:17:17.153 "compare": false, 00:17:17.153 "compare_and_write": false, 00:17:17.153 "abort": true, 00:17:17.153 "seek_hole": false, 00:17:17.153 "seek_data": false, 00:17:17.153 "copy": true, 00:17:17.153 "nvme_iov_md": false 00:17:17.153 }, 00:17:17.153 "memory_domains": [ 00:17:17.153 { 00:17:17.153 "dma_device_id": "system", 00:17:17.153 "dma_device_type": 1 00:17:17.153 }, 00:17:17.153 { 00:17:17.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.153 "dma_device_type": 2 00:17:17.153 } 00:17:17.153 ], 00:17:17.153 "driver_specific": {} 00:17:17.153 } 00:17:17.153 ] 00:17:17.153 04:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:17.153 04:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:17:17.153 04:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:17.153 04:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:17.153 04:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:17.153 04:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:17.153 04:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:17:17.153 04:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:17.153 04:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:17.153 04:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:17.153 04:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:17.153 04:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:17.153 04:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:17.412 04:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:17.412 "name": "Existed_Raid", 00:17:17.412 "uuid": "81bcaf49-c8d4-4e9a-88a6-41d8a13bceac", 00:17:17.412 "strip_size_kb": 64, 00:17:17.412 "state": "online", 00:17:17.412 "raid_level": "raid0", 00:17:17.412 "superblock": false, 00:17:17.412 "num_base_bdevs": 3, 00:17:17.412 "num_base_bdevs_discovered": 3, 00:17:17.412 "num_base_bdevs_operational": 3, 00:17:17.412 "base_bdevs_list": [ 00:17:17.412 { 00:17:17.412 "name": "NewBaseBdev", 00:17:17.412 "uuid": "ec1268f4-1c65-4bfa-8f09-33840973ba14", 00:17:17.412 "is_configured": true, 00:17:17.412 "data_offset": 0, 00:17:17.412 "data_size": 65536 00:17:17.412 }, 00:17:17.412 { 00:17:17.412 "name": "BaseBdev2", 00:17:17.412 "uuid": "9d900f56-cbea-4db7-8162-d0f7361734e3", 00:17:17.412 "is_configured": true, 00:17:17.412 "data_offset": 0, 00:17:17.412 "data_size": 65536 00:17:17.412 }, 00:17:17.412 { 00:17:17.412 "name": "BaseBdev3", 00:17:17.412 "uuid": "d5a2b493-6829-497d-a824-d99c3b1f88de", 00:17:17.412 "is_configured": true, 00:17:17.412 "data_offset": 0, 00:17:17.412 "data_size": 65536 00:17:17.412 } 00:17:17.412 ] 00:17:17.412 }' 
00:17:17.412 04:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:17.412 04:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.981 04:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:17:17.981 04:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:17.981 04:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:17.981 04:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:17.981 04:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:17.981 04:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:17.981 04:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:17.981 04:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:18.241 [2024-07-23 04:12:26.818336] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:18.241 04:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:18.241 "name": "Existed_Raid", 00:17:18.241 "aliases": [ 00:17:18.241 "81bcaf49-c8d4-4e9a-88a6-41d8a13bceac" 00:17:18.241 ], 00:17:18.241 "product_name": "Raid Volume", 00:17:18.241 "block_size": 512, 00:17:18.241 "num_blocks": 196608, 00:17:18.241 "uuid": "81bcaf49-c8d4-4e9a-88a6-41d8a13bceac", 00:17:18.241 "assigned_rate_limits": { 00:17:18.241 "rw_ios_per_sec": 0, 00:17:18.241 "rw_mbytes_per_sec": 0, 00:17:18.241 "r_mbytes_per_sec": 0, 00:17:18.241 "w_mbytes_per_sec": 0 00:17:18.241 }, 00:17:18.241 "claimed": false, 00:17:18.241 "zoned": false, 00:17:18.241 "supported_io_types": 
{ 00:17:18.241 "read": true, 00:17:18.241 "write": true, 00:17:18.241 "unmap": true, 00:17:18.241 "flush": true, 00:17:18.241 "reset": true, 00:17:18.241 "nvme_admin": false, 00:17:18.241 "nvme_io": false, 00:17:18.241 "nvme_io_md": false, 00:17:18.241 "write_zeroes": true, 00:17:18.241 "zcopy": false, 00:17:18.241 "get_zone_info": false, 00:17:18.241 "zone_management": false, 00:17:18.241 "zone_append": false, 00:17:18.241 "compare": false, 00:17:18.241 "compare_and_write": false, 00:17:18.241 "abort": false, 00:17:18.241 "seek_hole": false, 00:17:18.241 "seek_data": false, 00:17:18.241 "copy": false, 00:17:18.241 "nvme_iov_md": false 00:17:18.241 }, 00:17:18.241 "memory_domains": [ 00:17:18.241 { 00:17:18.241 "dma_device_id": "system", 00:17:18.241 "dma_device_type": 1 00:17:18.241 }, 00:17:18.241 { 00:17:18.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:18.241 "dma_device_type": 2 00:17:18.241 }, 00:17:18.241 { 00:17:18.241 "dma_device_id": "system", 00:17:18.241 "dma_device_type": 1 00:17:18.241 }, 00:17:18.241 { 00:17:18.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:18.241 "dma_device_type": 2 00:17:18.241 }, 00:17:18.241 { 00:17:18.241 "dma_device_id": "system", 00:17:18.241 "dma_device_type": 1 00:17:18.241 }, 00:17:18.241 { 00:17:18.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:18.241 "dma_device_type": 2 00:17:18.241 } 00:17:18.241 ], 00:17:18.241 "driver_specific": { 00:17:18.241 "raid": { 00:17:18.241 "uuid": "81bcaf49-c8d4-4e9a-88a6-41d8a13bceac", 00:17:18.241 "strip_size_kb": 64, 00:17:18.241 "state": "online", 00:17:18.241 "raid_level": "raid0", 00:17:18.241 "superblock": false, 00:17:18.241 "num_base_bdevs": 3, 00:17:18.241 "num_base_bdevs_discovered": 3, 00:17:18.241 "num_base_bdevs_operational": 3, 00:17:18.241 "base_bdevs_list": [ 00:17:18.241 { 00:17:18.241 "name": "NewBaseBdev", 00:17:18.241 "uuid": "ec1268f4-1c65-4bfa-8f09-33840973ba14", 00:17:18.241 "is_configured": true, 00:17:18.241 "data_offset": 0, 00:17:18.241 
"data_size": 65536 00:17:18.241 }, 00:17:18.241 { 00:17:18.241 "name": "BaseBdev2", 00:17:18.241 "uuid": "9d900f56-cbea-4db7-8162-d0f7361734e3", 00:17:18.241 "is_configured": true, 00:17:18.241 "data_offset": 0, 00:17:18.241 "data_size": 65536 00:17:18.241 }, 00:17:18.241 { 00:17:18.241 "name": "BaseBdev3", 00:17:18.241 "uuid": "d5a2b493-6829-497d-a824-d99c3b1f88de", 00:17:18.241 "is_configured": true, 00:17:18.241 "data_offset": 0, 00:17:18.241 "data_size": 65536 00:17:18.241 } 00:17:18.241 ] 00:17:18.241 } 00:17:18.241 } 00:17:18.241 }' 00:17:18.241 04:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:18.241 04:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:17:18.241 BaseBdev2 00:17:18.241 BaseBdev3' 00:17:18.241 04:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:18.241 04:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:17:18.241 04:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:18.500 04:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:18.500 "name": "NewBaseBdev", 00:17:18.500 "aliases": [ 00:17:18.500 "ec1268f4-1c65-4bfa-8f09-33840973ba14" 00:17:18.500 ], 00:17:18.500 "product_name": "Malloc disk", 00:17:18.500 "block_size": 512, 00:17:18.500 "num_blocks": 65536, 00:17:18.500 "uuid": "ec1268f4-1c65-4bfa-8f09-33840973ba14", 00:17:18.500 "assigned_rate_limits": { 00:17:18.500 "rw_ios_per_sec": 0, 00:17:18.500 "rw_mbytes_per_sec": 0, 00:17:18.500 "r_mbytes_per_sec": 0, 00:17:18.500 "w_mbytes_per_sec": 0 00:17:18.500 }, 00:17:18.500 "claimed": true, 00:17:18.500 "claim_type": "exclusive_write", 00:17:18.500 
"zoned": false, 00:17:18.500 "supported_io_types": { 00:17:18.500 "read": true, 00:17:18.500 "write": true, 00:17:18.500 "unmap": true, 00:17:18.500 "flush": true, 00:17:18.500 "reset": true, 00:17:18.500 "nvme_admin": false, 00:17:18.500 "nvme_io": false, 00:17:18.500 "nvme_io_md": false, 00:17:18.500 "write_zeroes": true, 00:17:18.500 "zcopy": true, 00:17:18.500 "get_zone_info": false, 00:17:18.500 "zone_management": false, 00:17:18.500 "zone_append": false, 00:17:18.500 "compare": false, 00:17:18.500 "compare_and_write": false, 00:17:18.500 "abort": true, 00:17:18.500 "seek_hole": false, 00:17:18.500 "seek_data": false, 00:17:18.500 "copy": true, 00:17:18.500 "nvme_iov_md": false 00:17:18.500 }, 00:17:18.500 "memory_domains": [ 00:17:18.500 { 00:17:18.500 "dma_device_id": "system", 00:17:18.500 "dma_device_type": 1 00:17:18.500 }, 00:17:18.500 { 00:17:18.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:18.500 "dma_device_type": 2 00:17:18.500 } 00:17:18.500 ], 00:17:18.500 "driver_specific": {} 00:17:18.500 }' 00:17:18.500 04:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:18.500 04:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:18.500 04:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:18.500 04:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:18.500 04:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:18.760 04:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:18.760 04:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:18.760 04:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:18.760 04:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:18.760 04:12:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:18.760 04:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:18.760 04:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:18.760 04:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:18.760 04:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:18.760 04:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:19.019 04:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:19.019 "name": "BaseBdev2", 00:17:19.019 "aliases": [ 00:17:19.019 "9d900f56-cbea-4db7-8162-d0f7361734e3" 00:17:19.019 ], 00:17:19.019 "product_name": "Malloc disk", 00:17:19.019 "block_size": 512, 00:17:19.019 "num_blocks": 65536, 00:17:19.019 "uuid": "9d900f56-cbea-4db7-8162-d0f7361734e3", 00:17:19.019 "assigned_rate_limits": { 00:17:19.019 "rw_ios_per_sec": 0, 00:17:19.019 "rw_mbytes_per_sec": 0, 00:17:19.019 "r_mbytes_per_sec": 0, 00:17:19.019 "w_mbytes_per_sec": 0 00:17:19.019 }, 00:17:19.019 "claimed": true, 00:17:19.019 "claim_type": "exclusive_write", 00:17:19.019 "zoned": false, 00:17:19.019 "supported_io_types": { 00:17:19.019 "read": true, 00:17:19.019 "write": true, 00:17:19.019 "unmap": true, 00:17:19.019 "flush": true, 00:17:19.019 "reset": true, 00:17:19.019 "nvme_admin": false, 00:17:19.019 "nvme_io": false, 00:17:19.019 "nvme_io_md": false, 00:17:19.019 "write_zeroes": true, 00:17:19.019 "zcopy": true, 00:17:19.019 "get_zone_info": false, 00:17:19.019 "zone_management": false, 00:17:19.019 "zone_append": false, 00:17:19.019 "compare": false, 00:17:19.019 "compare_and_write": false, 00:17:19.019 "abort": true, 00:17:19.019 "seek_hole": false, 
00:17:19.019 "seek_data": false, 00:17:19.019 "copy": true, 00:17:19.019 "nvme_iov_md": false 00:17:19.019 }, 00:17:19.019 "memory_domains": [ 00:17:19.019 { 00:17:19.019 "dma_device_id": "system", 00:17:19.019 "dma_device_type": 1 00:17:19.019 }, 00:17:19.019 { 00:17:19.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.019 "dma_device_type": 2 00:17:19.019 } 00:17:19.019 ], 00:17:19.019 "driver_specific": {} 00:17:19.019 }' 00:17:19.019 04:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:19.019 04:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:19.019 04:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:19.019 04:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:19.278 04:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:19.279 04:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:19.279 04:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:19.279 04:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:19.279 04:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:19.279 04:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:19.279 04:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:19.279 04:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:19.279 04:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:19.279 04:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 
00:17:19.279 04:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:19.538 04:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:19.538 "name": "BaseBdev3", 00:17:19.538 "aliases": [ 00:17:19.538 "d5a2b493-6829-497d-a824-d99c3b1f88de" 00:17:19.538 ], 00:17:19.538 "product_name": "Malloc disk", 00:17:19.538 "block_size": 512, 00:17:19.538 "num_blocks": 65536, 00:17:19.538 "uuid": "d5a2b493-6829-497d-a824-d99c3b1f88de", 00:17:19.538 "assigned_rate_limits": { 00:17:19.538 "rw_ios_per_sec": 0, 00:17:19.538 "rw_mbytes_per_sec": 0, 00:17:19.538 "r_mbytes_per_sec": 0, 00:17:19.538 "w_mbytes_per_sec": 0 00:17:19.538 }, 00:17:19.538 "claimed": true, 00:17:19.538 "claim_type": "exclusive_write", 00:17:19.538 "zoned": false, 00:17:19.538 "supported_io_types": { 00:17:19.538 "read": true, 00:17:19.538 "write": true, 00:17:19.538 "unmap": true, 00:17:19.538 "flush": true, 00:17:19.538 "reset": true, 00:17:19.538 "nvme_admin": false, 00:17:19.538 "nvme_io": false, 00:17:19.538 "nvme_io_md": false, 00:17:19.538 "write_zeroes": true, 00:17:19.538 "zcopy": true, 00:17:19.538 "get_zone_info": false, 00:17:19.538 "zone_management": false, 00:17:19.538 "zone_append": false, 00:17:19.538 "compare": false, 00:17:19.538 "compare_and_write": false, 00:17:19.538 "abort": true, 00:17:19.538 "seek_hole": false, 00:17:19.538 "seek_data": false, 00:17:19.538 "copy": true, 00:17:19.538 "nvme_iov_md": false 00:17:19.538 }, 00:17:19.538 "memory_domains": [ 00:17:19.538 { 00:17:19.538 "dma_device_id": "system", 00:17:19.538 "dma_device_type": 1 00:17:19.538 }, 00:17:19.538 { 00:17:19.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.538 "dma_device_type": 2 00:17:19.538 } 00:17:19.538 ], 00:17:19.538 "driver_specific": {} 00:17:19.538 }' 00:17:19.538 04:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:19.538 04:12:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:19.798 04:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:19.798 04:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:19.798 04:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:19.798 04:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:19.798 04:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:19.798 04:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:19.798 04:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:19.798 04:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:19.798 04:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:20.057 04:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:20.057 04:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:20.057 [2024-07-23 04:12:28.815344] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:20.057 [2024-07-23 04:12:28.815377] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:20.057 [2024-07-23 04:12:28.815461] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:20.057 [2024-07-23 04:12:28.815524] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:20.057 [2024-07-23 04:12:28.815546] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041780 name Existed_Raid, state offline 00:17:20.057 04:12:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 2644000 00:17:20.057 04:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 2644000 ']' 00:17:20.057 04:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 2644000 00:17:20.057 04:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:17:20.057 04:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:20.317 04:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2644000 00:17:20.317 04:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:20.317 04:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:20.317 04:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2644000' 00:17:20.317 killing process with pid 2644000 00:17:20.317 04:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 2644000 00:17:20.317 [2024-07-23 04:12:28.892054] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:20.317 04:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 2644000 00:17:20.576 [2024-07-23 04:12:29.206605] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:22.485 04:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:17:22.485 00:17:22.485 real 0m29.790s 00:17:22.485 user 0m52.113s 00:17:22.485 sys 0m5.234s 00:17:22.485 04:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:22.485 04:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.485 ************************************ 00:17:22.485 END TEST 
raid_state_function_test 00:17:22.485 ************************************ 00:17:22.485 04:12:30 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:22.485 04:12:30 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:17:22.485 04:12:30 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:22.485 04:12:30 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:22.485 04:12:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:22.485 ************************************ 00:17:22.485 START TEST raid_state_function_test_sb 00:17:22.485 ************************************ 00:17:22.485 04:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 3 true 00:17:22.485 04:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:17:22.485 04:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:17:22.485 04:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:17:22.485 04:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:22.485 04:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:22.485 04:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:22.485 04:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:17:22.485 04:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:22.485 04:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:22.485 04:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:17:22.485 04:12:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:22.485 04:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:22.485 04:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:17:22.485 04:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:22.485 04:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:22.485 04:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:22.485 04:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:22.485 04:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:22.485 04:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:22.485 04:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:22.485 04:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:22.485 04:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:17:22.485 04:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:17:22.485 04:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:17:22.485 04:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:17:22.485 04:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:17:22.485 04:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2650080 00:17:22.485 04:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2650080' 00:17:22.485 
Process raid pid: 2650080 00:17:22.485 04:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:22.485 04:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2650080 /var/tmp/spdk-raid.sock 00:17:22.485 04:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 2650080 ']' 00:17:22.485 04:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:22.485 04:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:22.485 04:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:22.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:22.485 04:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:22.485 04:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.485 [2024-07-23 04:12:31.059547] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:17:22.485 [2024-07-23 04:12:31.059659] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:22.485 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:22.485 EAL: Requested device 0000:3d:01.0 cannot be used 00:17:22.485 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:22.485 EAL: Requested device 0000:3d:01.1 cannot be used 00:17:22.485 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:22.485 EAL: Requested device 0000:3d:01.2 cannot be used 00:17:22.485 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:22.485 EAL: Requested device 0000:3d:01.3 cannot be used 00:17:22.485 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:22.485 EAL: Requested device 0000:3d:01.4 cannot be used 00:17:22.485 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:22.485 EAL: Requested device 0000:3d:01.5 cannot be used 00:17:22.485 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:22.485 EAL: Requested device 0000:3d:01.6 cannot be used 00:17:22.485 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:22.485 EAL: Requested device 0000:3d:01.7 cannot be used 00:17:22.485 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:22.485 EAL: Requested device 0000:3d:02.0 cannot be used 00:17:22.485 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:22.485 EAL: Requested device 0000:3d:02.1 cannot be used 00:17:22.485 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:22.485 EAL: Requested device 0000:3d:02.2 cannot be used 00:17:22.485 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:22.485 EAL: Requested device 0000:3d:02.3 cannot be used 00:17:22.485 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:22.485 EAL: Requested device 0000:3d:02.4 cannot be used 00:17:22.485 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:22.485 EAL: Requested device 0000:3d:02.5 cannot be used 00:17:22.485 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:22.485 EAL: Requested device 0000:3d:02.6 cannot be used 00:17:22.485 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:22.485 EAL: Requested device 0000:3d:02.7 cannot be used 00:17:22.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:22.486 EAL: Requested device 0000:3f:01.0 cannot be used 00:17:22.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:22.486 EAL: Requested device 0000:3f:01.1 cannot be used 00:17:22.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:22.486 EAL: Requested device 0000:3f:01.2 cannot be used 00:17:22.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:22.486 EAL: Requested device 0000:3f:01.3 cannot be used 00:17:22.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:22.486 EAL: Requested device 0000:3f:01.4 cannot be used 00:17:22.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:22.486 EAL: Requested device 0000:3f:01.5 cannot be used 00:17:22.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:22.486 EAL: Requested device 0000:3f:01.6 cannot be used 00:17:22.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:22.486 EAL: Requested device 0000:3f:01.7 cannot be used 00:17:22.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:22.486 EAL: Requested device 0000:3f:02.0 cannot be used 00:17:22.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:22.486 EAL: Requested device 0000:3f:02.1 cannot be used 00:17:22.486 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:22.486 EAL: Requested device 0000:3f:02.2 cannot be used 00:17:22.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:22.486 EAL: Requested device 0000:3f:02.3 cannot be used 00:17:22.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:22.486 EAL: Requested device 0000:3f:02.4 cannot be used 00:17:22.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:22.486 EAL: Requested device 0000:3f:02.5 cannot be used 00:17:22.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:22.486 EAL: Requested device 0000:3f:02.6 cannot be used 00:17:22.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:22.486 EAL: Requested device 0000:3f:02.7 cannot be used 00:17:22.746 [2024-07-23 04:12:31.288625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.005 [2024-07-23 04:12:31.566236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.265 [2024-07-23 04:12:31.889343] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:23.265 [2024-07-23 04:12:31.889380] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:23.524 04:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:23.524 04:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:17:23.524 04:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:23.524 [2024-07-23 04:12:32.273556] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:23.524 [2024-07-23 04:12:32.273613] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:17:23.524 [2024-07-23 04:12:32.273628] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:23.524 [2024-07-23 04:12:32.273644] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:23.524 [2024-07-23 04:12:32.273656] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:23.524 [2024-07-23 04:12:32.273671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:23.524 04:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:23.524 04:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:23.524 04:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:23.524 04:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:23.524 04:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:23.524 04:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:23.524 04:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:23.524 04:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:23.524 04:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:23.524 04:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:23.524 04:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:23.524 04:12:32 
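The `verify_raid_bdev_state Existed_Raid configuring raid0 64 3` call traced above fetches the raid bdev via `bdev_raid_get_bdevs all`, filters it with `jq -r '.[] | select(.name == "Existed_Raid")'`, and compares a handful of fields. A Python paraphrase of that comparison (the real helper is a bash function in bdev_raid.sh; only the fields visible in this log are checked here):

```python
import json

def verify_raid_bdev_state(raid_bdev_info: str, expected_state: str,
                           raid_level: str, strip_size: int,
                           num_operational: int) -> int:
    """Check the JSON blob returned for one raid bdev and return the
    number of discovered (configured) base bdevs."""
    info = json.loads(raid_bdev_info)
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    # the summary counter must agree with the per-bdev flags
    assert discovered == info["num_base_bdevs_discovered"]
    return discovered
```

Fed the "configuring" blob from the trace (three base bdevs, none configured yet), this returns 0.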
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:23.782 04:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:23.782 "name": "Existed_Raid", 00:17:23.783 "uuid": "a485c7bd-7168-45ca-8167-f9d215175c1a", 00:17:23.783 "strip_size_kb": 64, 00:17:23.783 "state": "configuring", 00:17:23.783 "raid_level": "raid0", 00:17:23.783 "superblock": true, 00:17:23.783 "num_base_bdevs": 3, 00:17:23.783 "num_base_bdevs_discovered": 0, 00:17:23.783 "num_base_bdevs_operational": 3, 00:17:23.783 "base_bdevs_list": [ 00:17:23.783 { 00:17:23.783 "name": "BaseBdev1", 00:17:23.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.783 "is_configured": false, 00:17:23.783 "data_offset": 0, 00:17:23.783 "data_size": 0 00:17:23.783 }, 00:17:23.783 { 00:17:23.783 "name": "BaseBdev2", 00:17:23.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.783 "is_configured": false, 00:17:23.783 "data_offset": 0, 00:17:23.783 "data_size": 0 00:17:23.783 }, 00:17:23.783 { 00:17:23.783 "name": "BaseBdev3", 00:17:23.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.783 "is_configured": false, 00:17:23.783 "data_offset": 0, 00:17:23.783 "data_size": 0 00:17:23.783 } 00:17:23.783 ] 00:17:23.783 }' 00:17:23.783 04:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:23.783 04:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.351 04:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:24.610 [2024-07-23 04:12:33.300145] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:24.610 [2024-07-23 04:12:33.300187] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state 
configuring 00:17:24.610 04:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:24.869 [2024-07-23 04:12:33.524822] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:24.869 [2024-07-23 04:12:33.524868] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:24.869 [2024-07-23 04:12:33.524881] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:24.869 [2024-07-23 04:12:33.524901] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:24.869 [2024-07-23 04:12:33.524912] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:24.869 [2024-07-23 04:12:33.524928] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:24.869 04:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:25.129 [2024-07-23 04:12:33.804799] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:25.129 BaseBdev1 00:17:25.129 04:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:25.129 04:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:25.129 04:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:25.129 04:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:25.129 04:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' 
]] 00:17:25.129 04:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:25.129 04:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:25.388 04:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:25.647 [ 00:17:25.647 { 00:17:25.647 "name": "BaseBdev1", 00:17:25.647 "aliases": [ 00:17:25.647 "344e168f-1c27-462e-a0c3-3eb5824beb92" 00:17:25.647 ], 00:17:25.647 "product_name": "Malloc disk", 00:17:25.647 "block_size": 512, 00:17:25.647 "num_blocks": 65536, 00:17:25.647 "uuid": "344e168f-1c27-462e-a0c3-3eb5824beb92", 00:17:25.647 "assigned_rate_limits": { 00:17:25.647 "rw_ios_per_sec": 0, 00:17:25.647 "rw_mbytes_per_sec": 0, 00:17:25.647 "r_mbytes_per_sec": 0, 00:17:25.647 "w_mbytes_per_sec": 0 00:17:25.647 }, 00:17:25.647 "claimed": true, 00:17:25.647 "claim_type": "exclusive_write", 00:17:25.647 "zoned": false, 00:17:25.647 "supported_io_types": { 00:17:25.647 "read": true, 00:17:25.647 "write": true, 00:17:25.647 "unmap": true, 00:17:25.647 "flush": true, 00:17:25.647 "reset": true, 00:17:25.647 "nvme_admin": false, 00:17:25.647 "nvme_io": false, 00:17:25.647 "nvme_io_md": false, 00:17:25.647 "write_zeroes": true, 00:17:25.647 "zcopy": true, 00:17:25.647 "get_zone_info": false, 00:17:25.647 "zone_management": false, 00:17:25.647 "zone_append": false, 00:17:25.647 "compare": false, 00:17:25.647 "compare_and_write": false, 00:17:25.647 "abort": true, 00:17:25.647 "seek_hole": false, 00:17:25.647 "seek_data": false, 00:17:25.647 "copy": true, 00:17:25.647 "nvme_iov_md": false 00:17:25.647 }, 00:17:25.647 "memory_domains": [ 00:17:25.647 { 00:17:25.647 "dma_device_id": "system", 00:17:25.647 "dma_device_type": 1 
00:17:25.647 }, 00:17:25.647 { 00:17:25.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:25.648 "dma_device_type": 2 00:17:25.648 } 00:17:25.648 ], 00:17:25.648 "driver_specific": {} 00:17:25.648 } 00:17:25.648 ] 00:17:25.648 04:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:25.648 04:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:25.648 04:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:25.648 04:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:25.648 04:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:25.648 04:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:25.648 04:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:25.648 04:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:25.648 04:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:25.648 04:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:25.648 04:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:25.648 04:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:25.648 04:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.907 04:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:25.907 "name": "Existed_Raid", 
00:17:25.907 "uuid": "6ee0c995-f99b-4775-97e7-627fe6536bbd", 00:17:25.907 "strip_size_kb": 64, 00:17:25.907 "state": "configuring", 00:17:25.907 "raid_level": "raid0", 00:17:25.907 "superblock": true, 00:17:25.907 "num_base_bdevs": 3, 00:17:25.907 "num_base_bdevs_discovered": 1, 00:17:25.907 "num_base_bdevs_operational": 3, 00:17:25.907 "base_bdevs_list": [ 00:17:25.907 { 00:17:25.907 "name": "BaseBdev1", 00:17:25.907 "uuid": "344e168f-1c27-462e-a0c3-3eb5824beb92", 00:17:25.907 "is_configured": true, 00:17:25.907 "data_offset": 2048, 00:17:25.907 "data_size": 63488 00:17:25.907 }, 00:17:25.907 { 00:17:25.907 "name": "BaseBdev2", 00:17:25.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.907 "is_configured": false, 00:17:25.907 "data_offset": 0, 00:17:25.907 "data_size": 0 00:17:25.907 }, 00:17:25.907 { 00:17:25.907 "name": "BaseBdev3", 00:17:25.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.907 "is_configured": false, 00:17:25.907 "data_offset": 0, 00:17:25.907 "data_size": 0 00:17:25.907 } 00:17:25.907 ] 00:17:25.907 }' 00:17:25.907 04:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:25.907 04:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.475 04:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:26.735 [2024-07-23 04:12:35.297018] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:26.735 [2024-07-23 04:12:35.297075] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:17:26.735 04:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 
BaseBdev3' -n Existed_Raid 00:17:26.995 [2024-07-23 04:12:35.521734] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:26.995 [2024-07-23 04:12:35.524028] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:26.995 [2024-07-23 04:12:35.524072] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:26.995 [2024-07-23 04:12:35.524087] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:26.995 [2024-07-23 04:12:35.524103] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:26.995 04:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:26.995 04:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:26.995 04:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:26.995 04:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:26.995 04:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:26.995 04:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:26.995 04:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:26.995 04:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:26.995 04:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:26.995 04:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:26.995 04:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:26.995 
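The figures in the BaseBdev1 descriptor and raid view above are internally consistent: `bdev_malloc_create 32 512` makes a 32 MiB malloc disk with 512-byte blocks, i.e. the logged `"num_blocks": 65536`, and once the superblock variant (`-s`) claims it, the raid view reports `"data_offset": 2048` / `"data_size": 63488`. The arithmetic, with the 2048-block superblock reservation inferred from the logged offsets (an assumption, not something the log states directly):

```python
MIB = 1024 * 1024
BLOCKLEN = 512

num_blocks = 32 * MIB // BLOCKLEN   # bdev_malloc_create 32 512
assert num_blocks == 65536          # matches "num_blocks": 65536 in the descriptor

data_offset = 2048                  # logged data_offset (assumed: superblock area)
data_size = num_blocks - data_offset
assert data_size == 63488           # matches "data_size": 63488 in the raid view
```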
04:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:26.995 04:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:26.995 04:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.995 04:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:26.995 "name": "Existed_Raid", 00:17:26.995 "uuid": "40fc3c17-2354-4342-8842-f1c2b0815fd3", 00:17:26.995 "strip_size_kb": 64, 00:17:26.995 "state": "configuring", 00:17:26.995 "raid_level": "raid0", 00:17:26.995 "superblock": true, 00:17:26.995 "num_base_bdevs": 3, 00:17:26.995 "num_base_bdevs_discovered": 1, 00:17:26.995 "num_base_bdevs_operational": 3, 00:17:26.995 "base_bdevs_list": [ 00:17:26.995 { 00:17:26.995 "name": "BaseBdev1", 00:17:26.995 "uuid": "344e168f-1c27-462e-a0c3-3eb5824beb92", 00:17:26.995 "is_configured": true, 00:17:26.995 "data_offset": 2048, 00:17:26.995 "data_size": 63488 00:17:26.995 }, 00:17:26.995 { 00:17:26.995 "name": "BaseBdev2", 00:17:26.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.995 "is_configured": false, 00:17:26.995 "data_offset": 0, 00:17:26.995 "data_size": 0 00:17:26.995 }, 00:17:26.995 { 00:17:26.995 "name": "BaseBdev3", 00:17:26.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.995 "is_configured": false, 00:17:26.995 "data_offset": 0, 00:17:26.995 "data_size": 0 00:17:26.995 } 00:17:26.995 ] 00:17:26.995 }' 00:17:26.995 04:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:26.995 04:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.563 04:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:27.823 [2024-07-23 04:12:36.601271] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:27.823 BaseBdev2 00:17:28.082 04:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:28.082 04:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:28.082 04:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:28.082 04:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:28.082 04:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:28.082 04:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:28.082 04:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:28.082 04:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:28.343 [ 00:17:28.343 { 00:17:28.343 "name": "BaseBdev2", 00:17:28.343 "aliases": [ 00:17:28.343 "b11ee1d8-f92d-4e87-9c9e-affeb2817a97" 00:17:28.343 ], 00:17:28.343 "product_name": "Malloc disk", 00:17:28.343 "block_size": 512, 00:17:28.343 "num_blocks": 65536, 00:17:28.343 "uuid": "b11ee1d8-f92d-4e87-9c9e-affeb2817a97", 00:17:28.343 "assigned_rate_limits": { 00:17:28.343 "rw_ios_per_sec": 0, 00:17:28.343 "rw_mbytes_per_sec": 0, 00:17:28.343 "r_mbytes_per_sec": 0, 00:17:28.343 "w_mbytes_per_sec": 0 00:17:28.343 }, 00:17:28.343 "claimed": true, 00:17:28.343 "claim_type": "exclusive_write", 00:17:28.343 "zoned": false, 00:17:28.343 "supported_io_types": { 
00:17:28.343 "read": true, 00:17:28.343 "write": true, 00:17:28.343 "unmap": true, 00:17:28.343 "flush": true, 00:17:28.343 "reset": true, 00:17:28.343 "nvme_admin": false, 00:17:28.343 "nvme_io": false, 00:17:28.343 "nvme_io_md": false, 00:17:28.343 "write_zeroes": true, 00:17:28.343 "zcopy": true, 00:17:28.343 "get_zone_info": false, 00:17:28.343 "zone_management": false, 00:17:28.343 "zone_append": false, 00:17:28.343 "compare": false, 00:17:28.343 "compare_and_write": false, 00:17:28.343 "abort": true, 00:17:28.343 "seek_hole": false, 00:17:28.343 "seek_data": false, 00:17:28.343 "copy": true, 00:17:28.343 "nvme_iov_md": false 00:17:28.343 }, 00:17:28.343 "memory_domains": [ 00:17:28.343 { 00:17:28.343 "dma_device_id": "system", 00:17:28.343 "dma_device_type": 1 00:17:28.343 }, 00:17:28.343 { 00:17:28.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.343 "dma_device_type": 2 00:17:28.343 } 00:17:28.343 ], 00:17:28.343 "driver_specific": {} 00:17:28.343 } 00:17:28.343 ] 00:17:28.343 04:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:28.343 04:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:28.343 04:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:28.343 04:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:28.343 04:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:28.343 04:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:28.343 04:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:28.343 04:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:28.343 04:12:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:28.343 04:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:28.343 04:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:28.343 04:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:28.343 04:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:28.343 04:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.343 04:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:28.644 04:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:28.644 "name": "Existed_Raid", 00:17:28.644 "uuid": "40fc3c17-2354-4342-8842-f1c2b0815fd3", 00:17:28.644 "strip_size_kb": 64, 00:17:28.644 "state": "configuring", 00:17:28.644 "raid_level": "raid0", 00:17:28.644 "superblock": true, 00:17:28.644 "num_base_bdevs": 3, 00:17:28.644 "num_base_bdevs_discovered": 2, 00:17:28.644 "num_base_bdevs_operational": 3, 00:17:28.644 "base_bdevs_list": [ 00:17:28.644 { 00:17:28.644 "name": "BaseBdev1", 00:17:28.644 "uuid": "344e168f-1c27-462e-a0c3-3eb5824beb92", 00:17:28.644 "is_configured": true, 00:17:28.644 "data_offset": 2048, 00:17:28.644 "data_size": 63488 00:17:28.644 }, 00:17:28.644 { 00:17:28.644 "name": "BaseBdev2", 00:17:28.644 "uuid": "b11ee1d8-f92d-4e87-9c9e-affeb2817a97", 00:17:28.644 "is_configured": true, 00:17:28.644 "data_offset": 2048, 00:17:28.644 "data_size": 63488 00:17:28.644 }, 00:17:28.644 { 00:17:28.644 "name": "BaseBdev3", 00:17:28.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.644 "is_configured": false, 00:17:28.644 "data_offset": 0, 00:17:28.644 
"data_size": 0 00:17:28.644 } 00:17:28.644 ] 00:17:28.644 }' 00:17:28.644 04:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:28.644 04:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.212 04:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:29.472 [2024-07-23 04:12:38.126639] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:29.472 [2024-07-23 04:12:38.126913] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:17:29.472 [2024-07-23 04:12:38.126937] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:29.472 [2024-07-23 04:12:38.127262] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:17:29.472 [2024-07-23 04:12:38.127507] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:17:29.472 [2024-07-23 04:12:38.127522] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:17:29.472 [2024-07-23 04:12:38.127709] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:29.472 BaseBdev3 00:17:29.472 04:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:17:29.472 04:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:17:29.472 04:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:29.472 04:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:29.472 04:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:29.472 
04:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:29.472 04:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:29.731 04:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:29.991 [ 00:17:29.991 { 00:17:29.991 "name": "BaseBdev3", 00:17:29.991 "aliases": [ 00:17:29.991 "07da442b-e76a-4126-bedb-c2c09b603b85" 00:17:29.991 ], 00:17:29.991 "product_name": "Malloc disk", 00:17:29.991 "block_size": 512, 00:17:29.991 "num_blocks": 65536, 00:17:29.991 "uuid": "07da442b-e76a-4126-bedb-c2c09b603b85", 00:17:29.991 "assigned_rate_limits": { 00:17:29.991 "rw_ios_per_sec": 0, 00:17:29.991 "rw_mbytes_per_sec": 0, 00:17:29.991 "r_mbytes_per_sec": 0, 00:17:29.991 "w_mbytes_per_sec": 0 00:17:29.991 }, 00:17:29.991 "claimed": true, 00:17:29.991 "claim_type": "exclusive_write", 00:17:29.991 "zoned": false, 00:17:29.991 "supported_io_types": { 00:17:29.991 "read": true, 00:17:29.991 "write": true, 00:17:29.991 "unmap": true, 00:17:29.991 "flush": true, 00:17:29.991 "reset": true, 00:17:29.991 "nvme_admin": false, 00:17:29.991 "nvme_io": false, 00:17:29.991 "nvme_io_md": false, 00:17:29.991 "write_zeroes": true, 00:17:29.991 "zcopy": true, 00:17:29.991 "get_zone_info": false, 00:17:29.991 "zone_management": false, 00:17:29.991 "zone_append": false, 00:17:29.991 "compare": false, 00:17:29.991 "compare_and_write": false, 00:17:29.991 "abort": true, 00:17:29.991 "seek_hole": false, 00:17:29.991 "seek_data": false, 00:17:29.991 "copy": true, 00:17:29.991 "nvme_iov_md": false 00:17:29.991 }, 00:17:29.991 "memory_domains": [ 00:17:29.991 { 00:17:29.991 "dma_device_id": "system", 00:17:29.991 "dma_device_type": 1 00:17:29.991 }, 
00:17:29.991 { 00:17:29.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.991 "dma_device_type": 2 00:17:29.991 } 00:17:29.991 ], 00:17:29.991 "driver_specific": {} 00:17:29.991 } 00:17:29.991 ] 00:17:29.991 04:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:29.991 04:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:29.991 04:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:29.991 04:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:17:29.991 04:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:29.991 04:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:29.991 04:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:29.991 04:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:29.991 04:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:29.991 04:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:29.991 04:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:29.991 04:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:29.991 04:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:29.991 04:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:29.991 04:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:17:30.250 04:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:30.250 "name": "Existed_Raid", 00:17:30.250 "uuid": "40fc3c17-2354-4342-8842-f1c2b0815fd3", 00:17:30.250 "strip_size_kb": 64, 00:17:30.250 "state": "online", 00:17:30.250 "raid_level": "raid0", 00:17:30.250 "superblock": true, 00:17:30.250 "num_base_bdevs": 3, 00:17:30.250 "num_base_bdevs_discovered": 3, 00:17:30.250 "num_base_bdevs_operational": 3, 00:17:30.250 "base_bdevs_list": [ 00:17:30.250 { 00:17:30.250 "name": "BaseBdev1", 00:17:30.250 "uuid": "344e168f-1c27-462e-a0c3-3eb5824beb92", 00:17:30.250 "is_configured": true, 00:17:30.250 "data_offset": 2048, 00:17:30.250 "data_size": 63488 00:17:30.250 }, 00:17:30.250 { 00:17:30.250 "name": "BaseBdev2", 00:17:30.250 "uuid": "b11ee1d8-f92d-4e87-9c9e-affeb2817a97", 00:17:30.250 "is_configured": true, 00:17:30.250 "data_offset": 2048, 00:17:30.250 "data_size": 63488 00:17:30.250 }, 00:17:30.250 { 00:17:30.250 "name": "BaseBdev3", 00:17:30.250 "uuid": "07da442b-e76a-4126-bedb-c2c09b603b85", 00:17:30.250 "is_configured": true, 00:17:30.250 "data_offset": 2048, 00:17:30.250 "data_size": 63488 00:17:30.250 } 00:17:30.250 ] 00:17:30.250 }' 00:17:30.250 04:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:30.250 04:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.819 04:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:30.819 04:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:30.819 04:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:30.819 04:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:30.819 04:12:39 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:30.819 04:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:17:30.819 04:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:30.819 04:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:30.819 [2024-07-23 04:12:39.595041] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:31.078 04:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:31.078 "name": "Existed_Raid", 00:17:31.078 "aliases": [ 00:17:31.078 "40fc3c17-2354-4342-8842-f1c2b0815fd3" 00:17:31.078 ], 00:17:31.078 "product_name": "Raid Volume", 00:17:31.078 "block_size": 512, 00:17:31.078 "num_blocks": 190464, 00:17:31.078 "uuid": "40fc3c17-2354-4342-8842-f1c2b0815fd3", 00:17:31.078 "assigned_rate_limits": { 00:17:31.078 "rw_ios_per_sec": 0, 00:17:31.078 "rw_mbytes_per_sec": 0, 00:17:31.078 "r_mbytes_per_sec": 0, 00:17:31.078 "w_mbytes_per_sec": 0 00:17:31.078 }, 00:17:31.078 "claimed": false, 00:17:31.078 "zoned": false, 00:17:31.078 "supported_io_types": { 00:17:31.078 "read": true, 00:17:31.078 "write": true, 00:17:31.078 "unmap": true, 00:17:31.078 "flush": true, 00:17:31.078 "reset": true, 00:17:31.078 "nvme_admin": false, 00:17:31.078 "nvme_io": false, 00:17:31.078 "nvme_io_md": false, 00:17:31.078 "write_zeroes": true, 00:17:31.078 "zcopy": false, 00:17:31.078 "get_zone_info": false, 00:17:31.078 "zone_management": false, 00:17:31.078 "zone_append": false, 00:17:31.078 "compare": false, 00:17:31.078 "compare_and_write": false, 00:17:31.078 "abort": false, 00:17:31.078 "seek_hole": false, 00:17:31.078 "seek_data": false, 00:17:31.078 "copy": false, 00:17:31.078 "nvme_iov_md": false 00:17:31.078 }, 00:17:31.078 "memory_domains": [ 00:17:31.078 { 
00:17:31.078 "dma_device_id": "system", 00:17:31.078 "dma_device_type": 1 00:17:31.078 }, 00:17:31.078 { 00:17:31.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.078 "dma_device_type": 2 00:17:31.078 }, 00:17:31.078 { 00:17:31.078 "dma_device_id": "system", 00:17:31.078 "dma_device_type": 1 00:17:31.078 }, 00:17:31.078 { 00:17:31.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.078 "dma_device_type": 2 00:17:31.078 }, 00:17:31.078 { 00:17:31.078 "dma_device_id": "system", 00:17:31.078 "dma_device_type": 1 00:17:31.078 }, 00:17:31.078 { 00:17:31.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.078 "dma_device_type": 2 00:17:31.078 } 00:17:31.078 ], 00:17:31.078 "driver_specific": { 00:17:31.078 "raid": { 00:17:31.078 "uuid": "40fc3c17-2354-4342-8842-f1c2b0815fd3", 00:17:31.079 "strip_size_kb": 64, 00:17:31.079 "state": "online", 00:17:31.079 "raid_level": "raid0", 00:17:31.079 "superblock": true, 00:17:31.079 "num_base_bdevs": 3, 00:17:31.079 "num_base_bdevs_discovered": 3, 00:17:31.079 "num_base_bdevs_operational": 3, 00:17:31.079 "base_bdevs_list": [ 00:17:31.079 { 00:17:31.079 "name": "BaseBdev1", 00:17:31.079 "uuid": "344e168f-1c27-462e-a0c3-3eb5824beb92", 00:17:31.079 "is_configured": true, 00:17:31.079 "data_offset": 2048, 00:17:31.079 "data_size": 63488 00:17:31.079 }, 00:17:31.079 { 00:17:31.079 "name": "BaseBdev2", 00:17:31.079 "uuid": "b11ee1d8-f92d-4e87-9c9e-affeb2817a97", 00:17:31.079 "is_configured": true, 00:17:31.079 "data_offset": 2048, 00:17:31.079 "data_size": 63488 00:17:31.079 }, 00:17:31.079 { 00:17:31.079 "name": "BaseBdev3", 00:17:31.079 "uuid": "07da442b-e76a-4126-bedb-c2c09b603b85", 00:17:31.079 "is_configured": true, 00:17:31.079 "data_offset": 2048, 00:17:31.079 "data_size": 63488 00:17:31.079 } 00:17:31.079 ] 00:17:31.079 } 00:17:31.079 } 00:17:31.079 }' 00:17:31.079 04:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == 
true).name' 00:17:31.079 04:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:31.079 BaseBdev2 00:17:31.079 BaseBdev3' 00:17:31.079 04:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:31.079 04:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:31.079 04:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:31.338 04:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:31.338 "name": "BaseBdev1", 00:17:31.338 "aliases": [ 00:17:31.338 "344e168f-1c27-462e-a0c3-3eb5824beb92" 00:17:31.338 ], 00:17:31.338 "product_name": "Malloc disk", 00:17:31.338 "block_size": 512, 00:17:31.338 "num_blocks": 65536, 00:17:31.338 "uuid": "344e168f-1c27-462e-a0c3-3eb5824beb92", 00:17:31.338 "assigned_rate_limits": { 00:17:31.338 "rw_ios_per_sec": 0, 00:17:31.338 "rw_mbytes_per_sec": 0, 00:17:31.338 "r_mbytes_per_sec": 0, 00:17:31.338 "w_mbytes_per_sec": 0 00:17:31.338 }, 00:17:31.338 "claimed": true, 00:17:31.338 "claim_type": "exclusive_write", 00:17:31.338 "zoned": false, 00:17:31.338 "supported_io_types": { 00:17:31.338 "read": true, 00:17:31.338 "write": true, 00:17:31.338 "unmap": true, 00:17:31.338 "flush": true, 00:17:31.338 "reset": true, 00:17:31.338 "nvme_admin": false, 00:17:31.338 "nvme_io": false, 00:17:31.338 "nvme_io_md": false, 00:17:31.338 "write_zeroes": true, 00:17:31.338 "zcopy": true, 00:17:31.338 "get_zone_info": false, 00:17:31.338 "zone_management": false, 00:17:31.338 "zone_append": false, 00:17:31.338 "compare": false, 00:17:31.338 "compare_and_write": false, 00:17:31.338 "abort": true, 00:17:31.338 "seek_hole": false, 00:17:31.338 "seek_data": false, 00:17:31.338 "copy": true, 00:17:31.338 "nvme_iov_md": false 00:17:31.338 
}, 00:17:31.338 "memory_domains": [ 00:17:31.338 { 00:17:31.338 "dma_device_id": "system", 00:17:31.338 "dma_device_type": 1 00:17:31.338 }, 00:17:31.338 { 00:17:31.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.338 "dma_device_type": 2 00:17:31.338 } 00:17:31.338 ], 00:17:31.338 "driver_specific": {} 00:17:31.338 }' 00:17:31.338 04:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:31.338 04:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:31.338 04:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:31.338 04:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:31.338 04:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:31.338 04:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:31.338 04:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:31.338 04:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:31.597 04:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:31.597 04:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:31.597 04:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:31.597 04:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:31.597 04:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:31.597 04:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:31.598 04:12:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:31.857 04:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:31.857 "name": "BaseBdev2", 00:17:31.857 "aliases": [ 00:17:31.857 "b11ee1d8-f92d-4e87-9c9e-affeb2817a97" 00:17:31.857 ], 00:17:31.857 "product_name": "Malloc disk", 00:17:31.857 "block_size": 512, 00:17:31.857 "num_blocks": 65536, 00:17:31.857 "uuid": "b11ee1d8-f92d-4e87-9c9e-affeb2817a97", 00:17:31.857 "assigned_rate_limits": { 00:17:31.857 "rw_ios_per_sec": 0, 00:17:31.857 "rw_mbytes_per_sec": 0, 00:17:31.857 "r_mbytes_per_sec": 0, 00:17:31.857 "w_mbytes_per_sec": 0 00:17:31.857 }, 00:17:31.857 "claimed": true, 00:17:31.857 "claim_type": "exclusive_write", 00:17:31.857 "zoned": false, 00:17:31.857 "supported_io_types": { 00:17:31.857 "read": true, 00:17:31.857 "write": true, 00:17:31.857 "unmap": true, 00:17:31.857 "flush": true, 00:17:31.857 "reset": true, 00:17:31.857 "nvme_admin": false, 00:17:31.857 "nvme_io": false, 00:17:31.857 "nvme_io_md": false, 00:17:31.857 "write_zeroes": true, 00:17:31.857 "zcopy": true, 00:17:31.857 "get_zone_info": false, 00:17:31.857 "zone_management": false, 00:17:31.857 "zone_append": false, 00:17:31.857 "compare": false, 00:17:31.857 "compare_and_write": false, 00:17:31.857 "abort": true, 00:17:31.857 "seek_hole": false, 00:17:31.857 "seek_data": false, 00:17:31.857 "copy": true, 00:17:31.857 "nvme_iov_md": false 00:17:31.857 }, 00:17:31.857 "memory_domains": [ 00:17:31.857 { 00:17:31.857 "dma_device_id": "system", 00:17:31.857 "dma_device_type": 1 00:17:31.857 }, 00:17:31.857 { 00:17:31.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.857 "dma_device_type": 2 00:17:31.857 } 00:17:31.857 ], 00:17:31.857 "driver_specific": {} 00:17:31.857 }' 00:17:31.857 04:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:31.857 04:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:31.857 04:12:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:31.857 04:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:31.857 04:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:32.116 04:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:32.116 04:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:32.116 04:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:32.116 04:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:32.116 04:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:32.116 04:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:32.116 04:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:32.116 04:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:32.116 04:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:32.116 04:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:32.376 04:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:32.376 "name": "BaseBdev3", 00:17:32.376 "aliases": [ 00:17:32.376 "07da442b-e76a-4126-bedb-c2c09b603b85" 00:17:32.376 ], 00:17:32.376 "product_name": "Malloc disk", 00:17:32.376 "block_size": 512, 00:17:32.376 "num_blocks": 65536, 00:17:32.376 "uuid": "07da442b-e76a-4126-bedb-c2c09b603b85", 00:17:32.376 "assigned_rate_limits": { 00:17:32.376 "rw_ios_per_sec": 0, 00:17:32.376 "rw_mbytes_per_sec": 0, 00:17:32.376 
"r_mbytes_per_sec": 0, 00:17:32.376 "w_mbytes_per_sec": 0 00:17:32.376 }, 00:17:32.376 "claimed": true, 00:17:32.376 "claim_type": "exclusive_write", 00:17:32.376 "zoned": false, 00:17:32.376 "supported_io_types": { 00:17:32.376 "read": true, 00:17:32.376 "write": true, 00:17:32.376 "unmap": true, 00:17:32.376 "flush": true, 00:17:32.376 "reset": true, 00:17:32.376 "nvme_admin": false, 00:17:32.376 "nvme_io": false, 00:17:32.376 "nvme_io_md": false, 00:17:32.376 "write_zeroes": true, 00:17:32.376 "zcopy": true, 00:17:32.376 "get_zone_info": false, 00:17:32.376 "zone_management": false, 00:17:32.376 "zone_append": false, 00:17:32.376 "compare": false, 00:17:32.376 "compare_and_write": false, 00:17:32.376 "abort": true, 00:17:32.376 "seek_hole": false, 00:17:32.376 "seek_data": false, 00:17:32.376 "copy": true, 00:17:32.376 "nvme_iov_md": false 00:17:32.376 }, 00:17:32.376 "memory_domains": [ 00:17:32.376 { 00:17:32.376 "dma_device_id": "system", 00:17:32.376 "dma_device_type": 1 00:17:32.376 }, 00:17:32.376 { 00:17:32.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:32.376 "dma_device_type": 2 00:17:32.376 } 00:17:32.376 ], 00:17:32.376 "driver_specific": {} 00:17:32.376 }' 00:17:32.376 04:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:32.376 04:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:32.376 04:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:32.376 04:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:32.636 04:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:32.636 04:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:32.636 04:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:32.636 04:12:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:32.636 04:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:32.636 04:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:32.636 04:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:32.636 04:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:32.636 04:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:32.895 [2024-07-23 04:12:41.608233] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:32.895 [2024-07-23 04:12:41.608265] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:32.895 [2024-07-23 04:12:41.608325] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:32.895 04:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:32.895 04:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:17:32.895 04:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:32.895 04:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:17:32.895 04:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:17:32.895 04:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:17:32.895 04:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:32.895 04:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:17:32.895 04:12:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:32.895 04:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:32.895 04:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:32.895 04:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:32.895 04:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:32.895 04:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:32.895 04:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:32.895 04:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:32.895 04:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:33.153 04:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:33.153 "name": "Existed_Raid", 00:17:33.153 "uuid": "40fc3c17-2354-4342-8842-f1c2b0815fd3", 00:17:33.153 "strip_size_kb": 64, 00:17:33.153 "state": "offline", 00:17:33.153 "raid_level": "raid0", 00:17:33.153 "superblock": true, 00:17:33.153 "num_base_bdevs": 3, 00:17:33.153 "num_base_bdevs_discovered": 2, 00:17:33.153 "num_base_bdevs_operational": 2, 00:17:33.153 "base_bdevs_list": [ 00:17:33.153 { 00:17:33.153 "name": null, 00:17:33.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.153 "is_configured": false, 00:17:33.153 "data_offset": 2048, 00:17:33.153 "data_size": 63488 00:17:33.153 }, 00:17:33.153 { 00:17:33.153 "name": "BaseBdev2", 00:17:33.153 "uuid": "b11ee1d8-f92d-4e87-9c9e-affeb2817a97", 00:17:33.153 "is_configured": true, 00:17:33.153 
"data_offset": 2048, 00:17:33.153 "data_size": 63488 00:17:33.153 }, 00:17:33.153 { 00:17:33.153 "name": "BaseBdev3", 00:17:33.153 "uuid": "07da442b-e76a-4126-bedb-c2c09b603b85", 00:17:33.153 "is_configured": true, 00:17:33.153 "data_offset": 2048, 00:17:33.153 "data_size": 63488 00:17:33.153 } 00:17:33.153 ] 00:17:33.153 }' 00:17:33.153 04:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:33.153 04:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.722 04:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:33.722 04:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:33.722 04:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:33.722 04:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:33.981 04:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:33.981 04:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:33.981 04:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:34.240 [2024-07-23 04:12:42.911001] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:34.499 04:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:34.499 04:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:34.499 04:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.499 04:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:34.758 04:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:34.758 04:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:34.758 04:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:34.758 [2024-07-23 04:12:43.498734] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:34.758 [2024-07-23 04:12:43.498792] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:17:35.018 04:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:35.018 04:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:35.018 04:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:35.018 04:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:35.278 04:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:35.278 04:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:35.278 04:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:17:35.278 04:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:17:35.278 04:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:35.278 04:12:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:35.538 BaseBdev2 00:17:35.538 04:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:17:35.538 04:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:35.538 04:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:35.538 04:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:35.538 04:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:35.538 04:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:35.538 04:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:35.797 04:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:36.057 [ 00:17:36.057 { 00:17:36.057 "name": "BaseBdev2", 00:17:36.057 "aliases": [ 00:17:36.057 "389409a4-f5ed-4525-8ae1-31195000e452" 00:17:36.057 ], 00:17:36.057 "product_name": "Malloc disk", 00:17:36.057 "block_size": 512, 00:17:36.057 "num_blocks": 65536, 00:17:36.057 "uuid": "389409a4-f5ed-4525-8ae1-31195000e452", 00:17:36.057 "assigned_rate_limits": { 00:17:36.057 "rw_ios_per_sec": 0, 00:17:36.057 "rw_mbytes_per_sec": 0, 00:17:36.057 "r_mbytes_per_sec": 0, 00:17:36.057 "w_mbytes_per_sec": 0 00:17:36.057 }, 00:17:36.057 "claimed": false, 00:17:36.057 "zoned": false, 00:17:36.057 "supported_io_types": { 00:17:36.057 "read": true, 
00:17:36.057 "write": true, 00:17:36.057 "unmap": true, 00:17:36.057 "flush": true, 00:17:36.057 "reset": true, 00:17:36.057 "nvme_admin": false, 00:17:36.057 "nvme_io": false, 00:17:36.057 "nvme_io_md": false, 00:17:36.057 "write_zeroes": true, 00:17:36.057 "zcopy": true, 00:17:36.057 "get_zone_info": false, 00:17:36.057 "zone_management": false, 00:17:36.057 "zone_append": false, 00:17:36.057 "compare": false, 00:17:36.057 "compare_and_write": false, 00:17:36.057 "abort": true, 00:17:36.057 "seek_hole": false, 00:17:36.057 "seek_data": false, 00:17:36.057 "copy": true, 00:17:36.057 "nvme_iov_md": false 00:17:36.057 }, 00:17:36.057 "memory_domains": [ 00:17:36.057 { 00:17:36.057 "dma_device_id": "system", 00:17:36.057 "dma_device_type": 1 00:17:36.057 }, 00:17:36.057 { 00:17:36.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.057 "dma_device_type": 2 00:17:36.057 } 00:17:36.057 ], 00:17:36.057 "driver_specific": {} 00:17:36.057 } 00:17:36.057 ] 00:17:36.057 04:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:36.057 04:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:36.057 04:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:36.057 04:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:36.316 BaseBdev3 00:17:36.316 04:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:17:36.316 04:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:17:36.316 04:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:36.316 04:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 
00:17:36.316 04:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:36.316 04:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:36.316 04:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:36.316 04:12:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:36.576 [ 00:17:36.576 { 00:17:36.576 "name": "BaseBdev3", 00:17:36.576 "aliases": [ 00:17:36.576 "c22afdc1-29cd-48bc-ac3c-33d17aca7af2" 00:17:36.576 ], 00:17:36.576 "product_name": "Malloc disk", 00:17:36.576 "block_size": 512, 00:17:36.576 "num_blocks": 65536, 00:17:36.576 "uuid": "c22afdc1-29cd-48bc-ac3c-33d17aca7af2", 00:17:36.576 "assigned_rate_limits": { 00:17:36.576 "rw_ios_per_sec": 0, 00:17:36.576 "rw_mbytes_per_sec": 0, 00:17:36.576 "r_mbytes_per_sec": 0, 00:17:36.576 "w_mbytes_per_sec": 0 00:17:36.576 }, 00:17:36.576 "claimed": false, 00:17:36.576 "zoned": false, 00:17:36.576 "supported_io_types": { 00:17:36.576 "read": true, 00:17:36.576 "write": true, 00:17:36.576 "unmap": true, 00:17:36.576 "flush": true, 00:17:36.576 "reset": true, 00:17:36.576 "nvme_admin": false, 00:17:36.576 "nvme_io": false, 00:17:36.576 "nvme_io_md": false, 00:17:36.576 "write_zeroes": true, 00:17:36.576 "zcopy": true, 00:17:36.576 "get_zone_info": false, 00:17:36.576 "zone_management": false, 00:17:36.576 "zone_append": false, 00:17:36.576 "compare": false, 00:17:36.576 "compare_and_write": false, 00:17:36.576 "abort": true, 00:17:36.576 "seek_hole": false, 00:17:36.576 "seek_data": false, 00:17:36.576 "copy": true, 00:17:36.576 "nvme_iov_md": false 00:17:36.576 }, 00:17:36.576 "memory_domains": [ 00:17:36.576 { 00:17:36.576 
"dma_device_id": "system", 00:17:36.576 "dma_device_type": 1 00:17:36.576 }, 00:17:36.576 { 00:17:36.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.576 "dma_device_type": 2 00:17:36.576 } 00:17:36.576 ], 00:17:36.576 "driver_specific": {} 00:17:36.576 } 00:17:36.576 ] 00:17:36.576 04:12:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:36.576 04:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:36.576 04:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:36.576 04:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:36.836 [2024-07-23 04:12:45.532081] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:36.836 [2024-07-23 04:12:45.532128] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:36.836 [2024-07-23 04:12:45.532168] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:36.836 [2024-07-23 04:12:45.534468] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:36.836 04:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:36.836 04:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:36.836 04:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:36.836 04:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:36.836 04:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 
00:17:36.836 04:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:36.836 04:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:36.836 04:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:36.836 04:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:36.836 04:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:36.836 04:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:36.836 04:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:37.095 04:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:37.095 "name": "Existed_Raid", 00:17:37.095 "uuid": "ccec5237-0049-4d52-a166-51b5991684ad", 00:17:37.095 "strip_size_kb": 64, 00:17:37.095 "state": "configuring", 00:17:37.095 "raid_level": "raid0", 00:17:37.096 "superblock": true, 00:17:37.096 "num_base_bdevs": 3, 00:17:37.096 "num_base_bdevs_discovered": 2, 00:17:37.096 "num_base_bdevs_operational": 3, 00:17:37.096 "base_bdevs_list": [ 00:17:37.096 { 00:17:37.096 "name": "BaseBdev1", 00:17:37.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.096 "is_configured": false, 00:17:37.096 "data_offset": 0, 00:17:37.096 "data_size": 0 00:17:37.096 }, 00:17:37.096 { 00:17:37.096 "name": "BaseBdev2", 00:17:37.096 "uuid": "389409a4-f5ed-4525-8ae1-31195000e452", 00:17:37.096 "is_configured": true, 00:17:37.096 "data_offset": 2048, 00:17:37.096 "data_size": 63488 00:17:37.096 }, 00:17:37.096 { 00:17:37.096 "name": "BaseBdev3", 00:17:37.096 "uuid": "c22afdc1-29cd-48bc-ac3c-33d17aca7af2", 00:17:37.096 
"is_configured": true, 00:17:37.096 "data_offset": 2048, 00:17:37.096 "data_size": 63488 00:17:37.096 } 00:17:37.096 ] 00:17:37.096 }' 00:17:37.096 04:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:37.096 04:12:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.664 04:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:17:37.922 [2024-07-23 04:12:46.498672] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:37.922 04:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:37.922 04:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:37.922 04:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:37.922 04:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:37.922 04:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:37.922 04:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:37.922 04:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:37.922 04:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:37.922 04:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:37.922 04:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:37.922 04:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:17:37.922 04:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:37.922 04:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:37.922 "name": "Existed_Raid", 00:17:37.922 "uuid": "ccec5237-0049-4d52-a166-51b5991684ad", 00:17:37.922 "strip_size_kb": 64, 00:17:37.922 "state": "configuring", 00:17:37.922 "raid_level": "raid0", 00:17:37.922 "superblock": true, 00:17:37.922 "num_base_bdevs": 3, 00:17:37.922 "num_base_bdevs_discovered": 1, 00:17:37.922 "num_base_bdevs_operational": 3, 00:17:37.922 "base_bdevs_list": [ 00:17:37.922 { 00:17:37.922 "name": "BaseBdev1", 00:17:37.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.922 "is_configured": false, 00:17:37.922 "data_offset": 0, 00:17:37.922 "data_size": 0 00:17:37.922 }, 00:17:37.922 { 00:17:37.922 "name": null, 00:17:37.922 "uuid": "389409a4-f5ed-4525-8ae1-31195000e452", 00:17:37.922 "is_configured": false, 00:17:37.922 "data_offset": 2048, 00:17:37.922 "data_size": 63488 00:17:37.922 }, 00:17:37.922 { 00:17:37.922 "name": "BaseBdev3", 00:17:37.922 "uuid": "c22afdc1-29cd-48bc-ac3c-33d17aca7af2", 00:17:37.922 "is_configured": true, 00:17:37.922 "data_offset": 2048, 00:17:37.922 "data_size": 63488 00:17:37.922 } 00:17:37.922 ] 00:17:37.922 }' 00:17:37.922 04:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:37.922 04:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.859 04:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:38.859 04:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:38.859 04:12:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:17:38.859 04:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:39.119 [2024-07-23 04:12:47.762515] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:39.119 BaseBdev1 00:17:39.119 04:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:17:39.119 04:12:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:39.119 04:12:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:39.119 04:12:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:39.119 04:12:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:39.119 04:12:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:39.119 04:12:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:39.443 04:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:39.443 [ 00:17:39.443 { 00:17:39.443 "name": "BaseBdev1", 00:17:39.443 "aliases": [ 00:17:39.443 "6bd0c07e-6f45-4c4b-9285-d759e0ad14dd" 00:17:39.443 ], 00:17:39.443 "product_name": "Malloc disk", 00:17:39.443 "block_size": 512, 00:17:39.443 "num_blocks": 65536, 00:17:39.443 "uuid": "6bd0c07e-6f45-4c4b-9285-d759e0ad14dd", 00:17:39.443 "assigned_rate_limits": { 00:17:39.443 "rw_ios_per_sec": 0, 00:17:39.443 
"rw_mbytes_per_sec": 0, 00:17:39.443 "r_mbytes_per_sec": 0, 00:17:39.443 "w_mbytes_per_sec": 0 00:17:39.443 }, 00:17:39.443 "claimed": true, 00:17:39.443 "claim_type": "exclusive_write", 00:17:39.443 "zoned": false, 00:17:39.443 "supported_io_types": { 00:17:39.443 "read": true, 00:17:39.443 "write": true, 00:17:39.443 "unmap": true, 00:17:39.443 "flush": true, 00:17:39.443 "reset": true, 00:17:39.443 "nvme_admin": false, 00:17:39.443 "nvme_io": false, 00:17:39.443 "nvme_io_md": false, 00:17:39.443 "write_zeroes": true, 00:17:39.443 "zcopy": true, 00:17:39.443 "get_zone_info": false, 00:17:39.443 "zone_management": false, 00:17:39.443 "zone_append": false, 00:17:39.443 "compare": false, 00:17:39.443 "compare_and_write": false, 00:17:39.443 "abort": true, 00:17:39.443 "seek_hole": false, 00:17:39.443 "seek_data": false, 00:17:39.443 "copy": true, 00:17:39.443 "nvme_iov_md": false 00:17:39.443 }, 00:17:39.443 "memory_domains": [ 00:17:39.443 { 00:17:39.443 "dma_device_id": "system", 00:17:39.443 "dma_device_type": 1 00:17:39.443 }, 00:17:39.443 { 00:17:39.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.443 "dma_device_type": 2 00:17:39.443 } 00:17:39.443 ], 00:17:39.443 "driver_specific": {} 00:17:39.443 } 00:17:39.443 ] 00:17:39.702 04:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:39.702 04:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:39.702 04:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:39.702 04:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:39.702 04:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:39.702 04:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:39.702 04:12:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:39.702 04:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:39.702 04:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:39.702 04:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:39.702 04:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:39.702 04:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:39.702 04:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:39.702 04:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:39.702 "name": "Existed_Raid", 00:17:39.702 "uuid": "ccec5237-0049-4d52-a166-51b5991684ad", 00:17:39.702 "strip_size_kb": 64, 00:17:39.702 "state": "configuring", 00:17:39.702 "raid_level": "raid0", 00:17:39.702 "superblock": true, 00:17:39.702 "num_base_bdevs": 3, 00:17:39.702 "num_base_bdevs_discovered": 2, 00:17:39.702 "num_base_bdevs_operational": 3, 00:17:39.702 "base_bdevs_list": [ 00:17:39.702 { 00:17:39.702 "name": "BaseBdev1", 00:17:39.702 "uuid": "6bd0c07e-6f45-4c4b-9285-d759e0ad14dd", 00:17:39.702 "is_configured": true, 00:17:39.702 "data_offset": 2048, 00:17:39.702 "data_size": 63488 00:17:39.702 }, 00:17:39.702 { 00:17:39.702 "name": null, 00:17:39.702 "uuid": "389409a4-f5ed-4525-8ae1-31195000e452", 00:17:39.702 "is_configured": false, 00:17:39.702 "data_offset": 2048, 00:17:39.702 "data_size": 63488 00:17:39.702 }, 00:17:39.702 { 00:17:39.702 "name": "BaseBdev3", 00:17:39.702 "uuid": "c22afdc1-29cd-48bc-ac3c-33d17aca7af2", 00:17:39.702 "is_configured": true, 
00:17:39.702 "data_offset": 2048, 00:17:39.702 "data_size": 63488 00:17:39.702 } 00:17:39.702 ] 00:17:39.702 }' 00:17:39.702 04:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:39.702 04:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.272 04:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:40.272 04:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:40.535 04:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:17:40.535 04:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:17:40.794 [2024-07-23 04:12:49.459224] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:40.794 04:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:40.794 04:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:40.794 04:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:40.794 04:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:40.794 04:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:40.794 04:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:40.794 04:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:40.794 04:12:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:40.794 04:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:40.794 04:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:40.794 04:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:40.794 04:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:41.054 04:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:41.054 "name": "Existed_Raid", 00:17:41.054 "uuid": "ccec5237-0049-4d52-a166-51b5991684ad", 00:17:41.054 "strip_size_kb": 64, 00:17:41.054 "state": "configuring", 00:17:41.054 "raid_level": "raid0", 00:17:41.054 "superblock": true, 00:17:41.054 "num_base_bdevs": 3, 00:17:41.054 "num_base_bdevs_discovered": 1, 00:17:41.054 "num_base_bdevs_operational": 3, 00:17:41.054 "base_bdevs_list": [ 00:17:41.054 { 00:17:41.054 "name": "BaseBdev1", 00:17:41.054 "uuid": "6bd0c07e-6f45-4c4b-9285-d759e0ad14dd", 00:17:41.054 "is_configured": true, 00:17:41.054 "data_offset": 2048, 00:17:41.054 "data_size": 63488 00:17:41.054 }, 00:17:41.054 { 00:17:41.054 "name": null, 00:17:41.054 "uuid": "389409a4-f5ed-4525-8ae1-31195000e452", 00:17:41.054 "is_configured": false, 00:17:41.054 "data_offset": 2048, 00:17:41.054 "data_size": 63488 00:17:41.054 }, 00:17:41.054 { 00:17:41.054 "name": null, 00:17:41.054 "uuid": "c22afdc1-29cd-48bc-ac3c-33d17aca7af2", 00:17:41.054 "is_configured": false, 00:17:41.054 "data_offset": 2048, 00:17:41.054 "data_size": 63488 00:17:41.054 } 00:17:41.054 ] 00:17:41.054 }' 00:17:41.054 04:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:41.054 04:12:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.623 04:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:41.623 04:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:41.883 04:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:17:41.883 04:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:42.203 [2024-07-23 04:12:50.706605] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:42.203 04:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:42.203 04:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:42.203 04:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:42.203 04:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:42.203 04:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:42.203 04:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:42.203 04:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:42.203 04:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:42.203 04:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:42.203 04:12:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:42.203 04:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:42.203 04:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:42.203 04:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:42.203 "name": "Existed_Raid", 00:17:42.203 "uuid": "ccec5237-0049-4d52-a166-51b5991684ad", 00:17:42.203 "strip_size_kb": 64, 00:17:42.203 "state": "configuring", 00:17:42.203 "raid_level": "raid0", 00:17:42.203 "superblock": true, 00:17:42.203 "num_base_bdevs": 3, 00:17:42.203 "num_base_bdevs_discovered": 2, 00:17:42.203 "num_base_bdevs_operational": 3, 00:17:42.203 "base_bdevs_list": [ 00:17:42.203 { 00:17:42.203 "name": "BaseBdev1", 00:17:42.203 "uuid": "6bd0c07e-6f45-4c4b-9285-d759e0ad14dd", 00:17:42.203 "is_configured": true, 00:17:42.203 "data_offset": 2048, 00:17:42.203 "data_size": 63488 00:17:42.203 }, 00:17:42.203 { 00:17:42.203 "name": null, 00:17:42.203 "uuid": "389409a4-f5ed-4525-8ae1-31195000e452", 00:17:42.203 "is_configured": false, 00:17:42.203 "data_offset": 2048, 00:17:42.203 "data_size": 63488 00:17:42.203 }, 00:17:42.203 { 00:17:42.203 "name": "BaseBdev3", 00:17:42.204 "uuid": "c22afdc1-29cd-48bc-ac3c-33d17aca7af2", 00:17:42.204 "is_configured": true, 00:17:42.204 "data_offset": 2048, 00:17:42.204 "data_size": 63488 00:17:42.204 } 00:17:42.204 ] 00:17:42.204 }' 00:17:42.204 04:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:42.204 04:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.773 04:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:42.773 04:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:43.033 04:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:17:43.033 04:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:43.293 [2024-07-23 04:12:51.962026] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:43.553 04:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:43.553 04:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:43.553 04:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:43.553 04:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:43.553 04:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:43.553 04:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:43.553 04:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:43.553 04:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:43.553 04:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:43.553 04:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:43.553 04:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:17:43.553 04:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:43.813 04:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:43.813 "name": "Existed_Raid", 00:17:43.813 "uuid": "ccec5237-0049-4d52-a166-51b5991684ad", 00:17:43.813 "strip_size_kb": 64, 00:17:43.813 "state": "configuring", 00:17:43.813 "raid_level": "raid0", 00:17:43.813 "superblock": true, 00:17:43.813 "num_base_bdevs": 3, 00:17:43.813 "num_base_bdevs_discovered": 1, 00:17:43.813 "num_base_bdevs_operational": 3, 00:17:43.813 "base_bdevs_list": [ 00:17:43.813 { 00:17:43.813 "name": null, 00:17:43.813 "uuid": "6bd0c07e-6f45-4c4b-9285-d759e0ad14dd", 00:17:43.813 "is_configured": false, 00:17:43.813 "data_offset": 2048, 00:17:43.813 "data_size": 63488 00:17:43.813 }, 00:17:43.813 { 00:17:43.813 "name": null, 00:17:43.813 "uuid": "389409a4-f5ed-4525-8ae1-31195000e452", 00:17:43.813 "is_configured": false, 00:17:43.813 "data_offset": 2048, 00:17:43.813 "data_size": 63488 00:17:43.813 }, 00:17:43.813 { 00:17:43.813 "name": "BaseBdev3", 00:17:43.813 "uuid": "c22afdc1-29cd-48bc-ac3c-33d17aca7af2", 00:17:43.813 "is_configured": true, 00:17:43.813 "data_offset": 2048, 00:17:43.813 "data_size": 63488 00:17:43.813 } 00:17:43.813 ] 00:17:43.813 }' 00:17:43.813 04:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:43.813 04:12:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.382 04:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:44.382 04:12:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:44.382 04:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == 
\f\a\l\s\e ]] 00:17:44.382 04:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:44.642 [2024-07-23 04:12:53.293118] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:44.642 04:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:44.642 04:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:44.642 04:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:44.642 04:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:44.642 04:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:44.642 04:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:44.642 04:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:44.642 04:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:44.642 04:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:44.642 04:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:44.642 04:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:44.642 04:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:44.901 04:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:17:44.901 "name": "Existed_Raid", 00:17:44.901 "uuid": "ccec5237-0049-4d52-a166-51b5991684ad", 00:17:44.901 "strip_size_kb": 64, 00:17:44.901 "state": "configuring", 00:17:44.901 "raid_level": "raid0", 00:17:44.901 "superblock": true, 00:17:44.901 "num_base_bdevs": 3, 00:17:44.901 "num_base_bdevs_discovered": 2, 00:17:44.901 "num_base_bdevs_operational": 3, 00:17:44.901 "base_bdevs_list": [ 00:17:44.901 { 00:17:44.901 "name": null, 00:17:44.901 "uuid": "6bd0c07e-6f45-4c4b-9285-d759e0ad14dd", 00:17:44.901 "is_configured": false, 00:17:44.901 "data_offset": 2048, 00:17:44.901 "data_size": 63488 00:17:44.901 }, 00:17:44.901 { 00:17:44.901 "name": "BaseBdev2", 00:17:44.901 "uuid": "389409a4-f5ed-4525-8ae1-31195000e452", 00:17:44.901 "is_configured": true, 00:17:44.901 "data_offset": 2048, 00:17:44.901 "data_size": 63488 00:17:44.901 }, 00:17:44.901 { 00:17:44.901 "name": "BaseBdev3", 00:17:44.901 "uuid": "c22afdc1-29cd-48bc-ac3c-33d17aca7af2", 00:17:44.901 "is_configured": true, 00:17:44.901 "data_offset": 2048, 00:17:44.901 "data_size": 63488 00:17:44.901 } 00:17:44.901 ] 00:17:44.901 }' 00:17:44.901 04:12:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:44.901 04:12:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.470 04:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:45.470 04:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:45.729 04:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:17:45.729 04:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:45.730 
04:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:45.989 04:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 6bd0c07e-6f45-4c4b-9285-d759e0ad14dd 00:17:46.249 [2024-07-23 04:12:54.805959] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:46.249 [2024-07-23 04:12:54.806213] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041780 00:17:46.249 [2024-07-23 04:12:54.806237] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:46.249 [2024-07-23 04:12:54.806533] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010980 00:17:46.249 [2024-07-23 04:12:54.806743] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041780 00:17:46.249 [2024-07-23 04:12:54.806757] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000041780 00:17:46.249 NewBaseBdev 00:17:46.249 [2024-07-23 04:12:54.806949] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:46.249 04:12:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:17:46.249 04:12:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:17:46.249 04:12:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:46.249 04:12:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:46.249 04:12:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:46.249 04:12:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 
00:17:46.249 04:12:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:46.509 04:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:46.509 [ 00:17:46.509 { 00:17:46.509 "name": "NewBaseBdev", 00:17:46.509 "aliases": [ 00:17:46.509 "6bd0c07e-6f45-4c4b-9285-d759e0ad14dd" 00:17:46.509 ], 00:17:46.509 "product_name": "Malloc disk", 00:17:46.509 "block_size": 512, 00:17:46.509 "num_blocks": 65536, 00:17:46.509 "uuid": "6bd0c07e-6f45-4c4b-9285-d759e0ad14dd", 00:17:46.509 "assigned_rate_limits": { 00:17:46.509 "rw_ios_per_sec": 0, 00:17:46.509 "rw_mbytes_per_sec": 0, 00:17:46.509 "r_mbytes_per_sec": 0, 00:17:46.509 "w_mbytes_per_sec": 0 00:17:46.509 }, 00:17:46.509 "claimed": true, 00:17:46.509 "claim_type": "exclusive_write", 00:17:46.509 "zoned": false, 00:17:46.509 "supported_io_types": { 00:17:46.509 "read": true, 00:17:46.509 "write": true, 00:17:46.509 "unmap": true, 00:17:46.509 "flush": true, 00:17:46.509 "reset": true, 00:17:46.509 "nvme_admin": false, 00:17:46.509 "nvme_io": false, 00:17:46.509 "nvme_io_md": false, 00:17:46.509 "write_zeroes": true, 00:17:46.509 "zcopy": true, 00:17:46.509 "get_zone_info": false, 00:17:46.509 "zone_management": false, 00:17:46.509 "zone_append": false, 00:17:46.509 "compare": false, 00:17:46.509 "compare_and_write": false, 00:17:46.509 "abort": true, 00:17:46.509 "seek_hole": false, 00:17:46.509 "seek_data": false, 00:17:46.509 "copy": true, 00:17:46.509 "nvme_iov_md": false 00:17:46.509 }, 00:17:46.509 "memory_domains": [ 00:17:46.509 { 00:17:46.509 "dma_device_id": "system", 00:17:46.509 "dma_device_type": 1 00:17:46.509 }, 00:17:46.509 { 00:17:46.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.509 
"dma_device_type": 2 00:17:46.509 } 00:17:46.509 ], 00:17:46.509 "driver_specific": {} 00:17:46.509 } 00:17:46.509 ] 00:17:46.509 04:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:46.509 04:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:17:46.509 04:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:46.509 04:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:46.509 04:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:46.509 04:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:46.509 04:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:46.509 04:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:46.509 04:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:46.509 04:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:46.509 04:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:46.509 04:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:46.509 04:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.769 04:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:46.769 "name": "Existed_Raid", 00:17:46.769 "uuid": "ccec5237-0049-4d52-a166-51b5991684ad", 00:17:46.769 "strip_size_kb": 64, 00:17:46.769 
"state": "online", 00:17:46.769 "raid_level": "raid0", 00:17:46.769 "superblock": true, 00:17:46.769 "num_base_bdevs": 3, 00:17:46.769 "num_base_bdevs_discovered": 3, 00:17:46.769 "num_base_bdevs_operational": 3, 00:17:46.769 "base_bdevs_list": [ 00:17:46.769 { 00:17:46.769 "name": "NewBaseBdev", 00:17:46.769 "uuid": "6bd0c07e-6f45-4c4b-9285-d759e0ad14dd", 00:17:46.769 "is_configured": true, 00:17:46.769 "data_offset": 2048, 00:17:46.769 "data_size": 63488 00:17:46.769 }, 00:17:46.769 { 00:17:46.769 "name": "BaseBdev2", 00:17:46.769 "uuid": "389409a4-f5ed-4525-8ae1-31195000e452", 00:17:46.769 "is_configured": true, 00:17:46.769 "data_offset": 2048, 00:17:46.769 "data_size": 63488 00:17:46.769 }, 00:17:46.769 { 00:17:46.769 "name": "BaseBdev3", 00:17:46.769 "uuid": "c22afdc1-29cd-48bc-ac3c-33d17aca7af2", 00:17:46.769 "is_configured": true, 00:17:46.769 "data_offset": 2048, 00:17:46.769 "data_size": 63488 00:17:46.769 } 00:17:46.769 ] 00:17:46.769 }' 00:17:46.769 04:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:46.769 04:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.337 04:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:17:47.337 04:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:47.337 04:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:47.337 04:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:47.337 04:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:47.337 04:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:17:47.337 04:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:47.337 04:12:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:47.596 [2024-07-23 04:12:56.294461] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:47.596 04:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:47.596 "name": "Existed_Raid", 00:17:47.596 "aliases": [ 00:17:47.596 "ccec5237-0049-4d52-a166-51b5991684ad" 00:17:47.596 ], 00:17:47.596 "product_name": "Raid Volume", 00:17:47.596 "block_size": 512, 00:17:47.596 "num_blocks": 190464, 00:17:47.596 "uuid": "ccec5237-0049-4d52-a166-51b5991684ad", 00:17:47.596 "assigned_rate_limits": { 00:17:47.596 "rw_ios_per_sec": 0, 00:17:47.596 "rw_mbytes_per_sec": 0, 00:17:47.596 "r_mbytes_per_sec": 0, 00:17:47.596 "w_mbytes_per_sec": 0 00:17:47.596 }, 00:17:47.596 "claimed": false, 00:17:47.596 "zoned": false, 00:17:47.596 "supported_io_types": { 00:17:47.596 "read": true, 00:17:47.596 "write": true, 00:17:47.596 "unmap": true, 00:17:47.596 "flush": true, 00:17:47.596 "reset": true, 00:17:47.596 "nvme_admin": false, 00:17:47.596 "nvme_io": false, 00:17:47.596 "nvme_io_md": false, 00:17:47.596 "write_zeroes": true, 00:17:47.596 "zcopy": false, 00:17:47.596 "get_zone_info": false, 00:17:47.596 "zone_management": false, 00:17:47.596 "zone_append": false, 00:17:47.596 "compare": false, 00:17:47.596 "compare_and_write": false, 00:17:47.596 "abort": false, 00:17:47.596 "seek_hole": false, 00:17:47.596 "seek_data": false, 00:17:47.596 "copy": false, 00:17:47.596 "nvme_iov_md": false 00:17:47.596 }, 00:17:47.596 "memory_domains": [ 00:17:47.596 { 00:17:47.596 "dma_device_id": "system", 00:17:47.596 "dma_device_type": 1 00:17:47.596 }, 00:17:47.596 { 00:17:47.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.596 "dma_device_type": 2 00:17:47.596 }, 00:17:47.596 { 00:17:47.596 "dma_device_id": "system", 00:17:47.596 
"dma_device_type": 1 00:17:47.596 }, 00:17:47.596 { 00:17:47.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.596 "dma_device_type": 2 00:17:47.596 }, 00:17:47.596 { 00:17:47.596 "dma_device_id": "system", 00:17:47.596 "dma_device_type": 1 00:17:47.596 }, 00:17:47.596 { 00:17:47.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.596 "dma_device_type": 2 00:17:47.596 } 00:17:47.596 ], 00:17:47.596 "driver_specific": { 00:17:47.596 "raid": { 00:17:47.596 "uuid": "ccec5237-0049-4d52-a166-51b5991684ad", 00:17:47.596 "strip_size_kb": 64, 00:17:47.596 "state": "online", 00:17:47.596 "raid_level": "raid0", 00:17:47.596 "superblock": true, 00:17:47.596 "num_base_bdevs": 3, 00:17:47.596 "num_base_bdevs_discovered": 3, 00:17:47.596 "num_base_bdevs_operational": 3, 00:17:47.596 "base_bdevs_list": [ 00:17:47.596 { 00:17:47.596 "name": "NewBaseBdev", 00:17:47.596 "uuid": "6bd0c07e-6f45-4c4b-9285-d759e0ad14dd", 00:17:47.596 "is_configured": true, 00:17:47.596 "data_offset": 2048, 00:17:47.596 "data_size": 63488 00:17:47.596 }, 00:17:47.596 { 00:17:47.596 "name": "BaseBdev2", 00:17:47.596 "uuid": "389409a4-f5ed-4525-8ae1-31195000e452", 00:17:47.596 "is_configured": true, 00:17:47.596 "data_offset": 2048, 00:17:47.596 "data_size": 63488 00:17:47.596 }, 00:17:47.596 { 00:17:47.596 "name": "BaseBdev3", 00:17:47.596 "uuid": "c22afdc1-29cd-48bc-ac3c-33d17aca7af2", 00:17:47.596 "is_configured": true, 00:17:47.596 "data_offset": 2048, 00:17:47.596 "data_size": 63488 00:17:47.596 } 00:17:47.596 ] 00:17:47.596 } 00:17:47.596 } 00:17:47.596 }' 00:17:47.597 04:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:47.597 04:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:17:47.597 BaseBdev2 00:17:47.597 BaseBdev3' 00:17:47.597 04:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in 
$base_bdev_names 00:17:47.597 04:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:17:47.597 04:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:47.856 04:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:47.856 "name": "NewBaseBdev", 00:17:47.856 "aliases": [ 00:17:47.856 "6bd0c07e-6f45-4c4b-9285-d759e0ad14dd" 00:17:47.856 ], 00:17:47.856 "product_name": "Malloc disk", 00:17:47.856 "block_size": 512, 00:17:47.856 "num_blocks": 65536, 00:17:47.856 "uuid": "6bd0c07e-6f45-4c4b-9285-d759e0ad14dd", 00:17:47.856 "assigned_rate_limits": { 00:17:47.856 "rw_ios_per_sec": 0, 00:17:47.856 "rw_mbytes_per_sec": 0, 00:17:47.856 "r_mbytes_per_sec": 0, 00:17:47.856 "w_mbytes_per_sec": 0 00:17:47.856 }, 00:17:47.856 "claimed": true, 00:17:47.856 "claim_type": "exclusive_write", 00:17:47.856 "zoned": false, 00:17:47.856 "supported_io_types": { 00:17:47.856 "read": true, 00:17:47.856 "write": true, 00:17:47.856 "unmap": true, 00:17:47.856 "flush": true, 00:17:47.856 "reset": true, 00:17:47.856 "nvme_admin": false, 00:17:47.856 "nvme_io": false, 00:17:47.856 "nvme_io_md": false, 00:17:47.856 "write_zeroes": true, 00:17:47.856 "zcopy": true, 00:17:47.856 "get_zone_info": false, 00:17:47.856 "zone_management": false, 00:17:47.856 "zone_append": false, 00:17:47.856 "compare": false, 00:17:47.856 "compare_and_write": false, 00:17:47.856 "abort": true, 00:17:47.856 "seek_hole": false, 00:17:47.856 "seek_data": false, 00:17:47.856 "copy": true, 00:17:47.856 "nvme_iov_md": false 00:17:47.856 }, 00:17:47.856 "memory_domains": [ 00:17:47.856 { 00:17:47.856 "dma_device_id": "system", 00:17:47.856 "dma_device_type": 1 00:17:47.856 }, 00:17:47.856 { 00:17:47.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.856 "dma_device_type": 2 00:17:47.856 } 00:17:47.856 
], 00:17:47.856 "driver_specific": {} 00:17:47.856 }' 00:17:47.856 04:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:47.856 04:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:48.115 04:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:48.115 04:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:48.115 04:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:48.115 04:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:48.115 04:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:48.115 04:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:48.115 04:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:48.115 04:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:48.115 04:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:48.374 04:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:48.374 04:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:48.374 04:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:48.374 04:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:48.374 04:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:48.374 "name": "BaseBdev2", 00:17:48.374 "aliases": [ 00:17:48.374 "389409a4-f5ed-4525-8ae1-31195000e452" 00:17:48.374 ], 00:17:48.374 
"product_name": "Malloc disk", 00:17:48.374 "block_size": 512, 00:17:48.374 "num_blocks": 65536, 00:17:48.374 "uuid": "389409a4-f5ed-4525-8ae1-31195000e452", 00:17:48.374 "assigned_rate_limits": { 00:17:48.374 "rw_ios_per_sec": 0, 00:17:48.374 "rw_mbytes_per_sec": 0, 00:17:48.374 "r_mbytes_per_sec": 0, 00:17:48.374 "w_mbytes_per_sec": 0 00:17:48.374 }, 00:17:48.374 "claimed": true, 00:17:48.374 "claim_type": "exclusive_write", 00:17:48.374 "zoned": false, 00:17:48.374 "supported_io_types": { 00:17:48.374 "read": true, 00:17:48.374 "write": true, 00:17:48.374 "unmap": true, 00:17:48.374 "flush": true, 00:17:48.374 "reset": true, 00:17:48.374 "nvme_admin": false, 00:17:48.374 "nvme_io": false, 00:17:48.374 "nvme_io_md": false, 00:17:48.374 "write_zeroes": true, 00:17:48.374 "zcopy": true, 00:17:48.374 "get_zone_info": false, 00:17:48.374 "zone_management": false, 00:17:48.374 "zone_append": false, 00:17:48.374 "compare": false, 00:17:48.374 "compare_and_write": false, 00:17:48.374 "abort": true, 00:17:48.374 "seek_hole": false, 00:17:48.374 "seek_data": false, 00:17:48.374 "copy": true, 00:17:48.374 "nvme_iov_md": false 00:17:48.374 }, 00:17:48.374 "memory_domains": [ 00:17:48.374 { 00:17:48.374 "dma_device_id": "system", 00:17:48.374 "dma_device_type": 1 00:17:48.374 }, 00:17:48.374 { 00:17:48.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.374 "dma_device_type": 2 00:17:48.374 } 00:17:48.374 ], 00:17:48.374 "driver_specific": {} 00:17:48.374 }' 00:17:48.374 04:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:48.633 04:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:48.633 04:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:48.633 04:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:48.633 04:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 
00:17:48.633 04:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:48.633 04:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:48.633 04:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:48.633 04:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:48.633 04:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:48.893 04:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:48.893 04:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:48.893 04:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:48.893 04:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:48.893 04:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:49.152 04:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:49.152 "name": "BaseBdev3", 00:17:49.152 "aliases": [ 00:17:49.152 "c22afdc1-29cd-48bc-ac3c-33d17aca7af2" 00:17:49.152 ], 00:17:49.152 "product_name": "Malloc disk", 00:17:49.152 "block_size": 512, 00:17:49.152 "num_blocks": 65536, 00:17:49.152 "uuid": "c22afdc1-29cd-48bc-ac3c-33d17aca7af2", 00:17:49.152 "assigned_rate_limits": { 00:17:49.152 "rw_ios_per_sec": 0, 00:17:49.152 "rw_mbytes_per_sec": 0, 00:17:49.152 "r_mbytes_per_sec": 0, 00:17:49.152 "w_mbytes_per_sec": 0 00:17:49.152 }, 00:17:49.152 "claimed": true, 00:17:49.152 "claim_type": "exclusive_write", 00:17:49.152 "zoned": false, 00:17:49.152 "supported_io_types": { 00:17:49.152 "read": true, 00:17:49.152 "write": true, 00:17:49.152 "unmap": 
true, 00:17:49.152 "flush": true, 00:17:49.152 "reset": true, 00:17:49.152 "nvme_admin": false, 00:17:49.152 "nvme_io": false, 00:17:49.152 "nvme_io_md": false, 00:17:49.152 "write_zeroes": true, 00:17:49.152 "zcopy": true, 00:17:49.152 "get_zone_info": false, 00:17:49.152 "zone_management": false, 00:17:49.152 "zone_append": false, 00:17:49.152 "compare": false, 00:17:49.152 "compare_and_write": false, 00:17:49.152 "abort": true, 00:17:49.152 "seek_hole": false, 00:17:49.152 "seek_data": false, 00:17:49.152 "copy": true, 00:17:49.152 "nvme_iov_md": false 00:17:49.152 }, 00:17:49.152 "memory_domains": [ 00:17:49.152 { 00:17:49.152 "dma_device_id": "system", 00:17:49.152 "dma_device_type": 1 00:17:49.152 }, 00:17:49.152 { 00:17:49.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.152 "dma_device_type": 2 00:17:49.152 } 00:17:49.152 ], 00:17:49.152 "driver_specific": {} 00:17:49.152 }' 00:17:49.152 04:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:49.152 04:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:49.152 04:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:49.152 04:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:49.152 04:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:49.152 04:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:49.152 04:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:49.152 04:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:49.411 04:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:49.411 04:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:49.411 04:12:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:49.411 04:12:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:49.411 04:12:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:49.671 [2024-07-23 04:12:58.247354] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:49.671 [2024-07-23 04:12:58.247385] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:49.671 [2024-07-23 04:12:58.247468] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:49.671 [2024-07-23 04:12:58.247531] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:49.671 [2024-07-23 04:12:58.247554] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041780 name Existed_Raid, state offline 00:17:49.671 04:12:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 2650080 00:17:49.671 04:12:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 2650080 ']' 00:17:49.671 04:12:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 2650080 00:17:49.671 04:12:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:17:49.671 04:12:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:49.671 04:12:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2650080 00:17:49.671 04:12:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:49.671 04:12:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:17:49.671 04:12:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2650080' 00:17:49.671 killing process with pid 2650080 00:17:49.671 04:12:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 2650080 00:17:49.671 [2024-07-23 04:12:58.319223] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:49.671 04:12:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 2650080 00:17:49.931 [2024-07-23 04:12:58.649671] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:51.837 04:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:17:51.837 00:17:51.837 real 0m29.432s 00:17:51.837 user 0m51.456s 00:17:51.837 sys 0m5.121s 00:17:51.837 04:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:51.837 04:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.837 ************************************ 00:17:51.837 END TEST raid_state_function_test_sb 00:17:51.837 ************************************ 00:17:51.837 04:13:00 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:51.837 04:13:00 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:17:51.837 04:13:00 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:17:51.837 04:13:00 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:51.837 04:13:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:51.837 ************************************ 00:17:51.837 START TEST raid_superblock_test 00:17:51.837 ************************************ 00:17:51.837 04:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 3 00:17:51.837 04:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # 
local raid_level=raid0 00:17:51.837 04:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:17:51.837 04:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:17:51.837 04:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:17:51.837 04:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:17:51.837 04:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:17:51.837 04:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:17:51.837 04:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:17:51.837 04:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:17:51.837 04:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:17:51.837 04:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:17:51.837 04:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:17:51.837 04:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:17:51.837 04:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:17:51.837 04:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:17:51.837 04:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:17:51.838 04:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:51.838 04:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=2655686 00:17:51.838 04:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 
2655686 /var/tmp/spdk-raid.sock 00:17:51.838 04:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 2655686 ']' 00:17:51.838 04:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:51.838 04:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:51.838 04:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:51.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:51.838 04:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:51.838 04:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.838 [2024-07-23 04:13:00.542521] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:17:51.838 [2024-07-23 04:13:00.542637] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2655686 ] 00:17:52.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:52.097 EAL: Requested device 0000:3d:01.0 cannot be used 00:17:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:52.097 EAL: Requested device 0000:3d:01.1 cannot be used 00:17:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:52.097 EAL: Requested device 0000:3d:01.2 cannot be used 00:17:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:52.097 EAL: Requested device 0000:3d:01.3 cannot be used 00:17:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:52.097 EAL: Requested device 0000:3d:01.4 cannot be used 00:17:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:52.097 EAL: Requested device 0000:3d:01.5 cannot be used 00:17:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:52.097 EAL: Requested device 0000:3d:01.6 cannot be used 00:17:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:52.097 EAL: Requested device 0000:3d:01.7 cannot be used 00:17:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:52.097 EAL: Requested device 0000:3d:02.0 cannot be used 00:17:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:52.097 EAL: Requested device 0000:3d:02.1 cannot be used 00:17:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:52.097 EAL: Requested device 0000:3d:02.2 cannot be used 00:17:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:52.097 EAL: Requested device 0000:3d:02.3 cannot be used 
00:17:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:52.097 EAL: Requested device 0000:3d:02.4 cannot be used 00:17:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:52.097 EAL: Requested device 0000:3d:02.5 cannot be used 00:17:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:52.097 EAL: Requested device 0000:3d:02.6 cannot be used 00:17:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:52.097 EAL: Requested device 0000:3d:02.7 cannot be used 00:17:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:52.097 EAL: Requested device 0000:3f:01.0 cannot be used 00:17:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:52.097 EAL: Requested device 0000:3f:01.1 cannot be used 00:17:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:52.097 EAL: Requested device 0000:3f:01.2 cannot be used 00:17:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:52.097 EAL: Requested device 0000:3f:01.3 cannot be used 00:17:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:52.097 EAL: Requested device 0000:3f:01.4 cannot be used 00:17:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:52.097 EAL: Requested device 0000:3f:01.5 cannot be used 00:17:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:52.097 EAL: Requested device 0000:3f:01.6 cannot be used 00:17:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:52.097 EAL: Requested device 0000:3f:01.7 cannot be used 00:17:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:52.097 EAL: Requested device 0000:3f:02.0 cannot be used 00:17:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:52.097 EAL: Requested device 0000:3f:02.1 cannot be used 00:17:52.097 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:52.097 EAL: Requested device 0000:3f:02.2 cannot be used 00:17:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:52.097 EAL: Requested device 0000:3f:02.3 cannot be used 00:17:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:52.097 EAL: Requested device 0000:3f:02.4 cannot be used 00:17:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:52.097 EAL: Requested device 0000:3f:02.5 cannot be used 00:17:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:52.097 EAL: Requested device 0000:3f:02.6 cannot be used 00:17:52.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:52.097 EAL: Requested device 0000:3f:02.7 cannot be used 00:17:52.097 [2024-07-23 04:13:00.768546] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.356 [2024-07-23 04:13:01.030948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.615 [2024-07-23 04:13:01.340946] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:52.615 [2024-07-23 04:13:01.340978] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:52.874 04:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:52.874 04:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:17:52.874 04:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:17:52.874 04:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:52.874 04:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:17:52.874 04:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:17:52.874 04:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:52.874 04:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:52.874 04:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:52.874 04:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:52.874 04:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:53.133 malloc1 00:17:53.133 04:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:53.391 [2024-07-23 04:13:01.989159] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:53.391 [2024-07-23 04:13:01.989221] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.391 [2024-07-23 04:13:01.989250] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:17:53.391 [2024-07-23 04:13:01.989266] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.391 [2024-07-23 04:13:01.992021] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.391 [2024-07-23 04:13:01.992056] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:53.391 pt1 00:17:53.391 04:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:53.391 04:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:53.391 04:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:17:53.391 04:13:02 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:17:53.391 04:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:53.391 04:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:53.391 04:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:53.391 04:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:53.391 04:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:53.650 malloc2 00:17:53.650 04:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:53.908 [2024-07-23 04:13:02.491670] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:53.908 [2024-07-23 04:13:02.491727] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.908 [2024-07-23 04:13:02.491755] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:17:53.908 [2024-07-23 04:13:02.491774] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.908 [2024-07-23 04:13:02.494567] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.908 [2024-07-23 04:13:02.494606] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:53.908 pt2 00:17:53.908 04:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:53.908 04:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:53.909 04:13:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:17:53.909 04:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:17:53.909 04:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:53.909 04:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:53.909 04:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:53.909 04:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:53.909 04:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:54.167 malloc3 00:17:54.167 04:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:54.426 [2024-07-23 04:13:02.987739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:54.426 [2024-07-23 04:13:02.987801] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:54.426 [2024-07-23 04:13:02.987832] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040e80 00:17:54.426 [2024-07-23 04:13:02.987847] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:54.426 [2024-07-23 04:13:02.990603] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:54.426 [2024-07-23 04:13:02.990638] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:54.426 pt3 00:17:54.426 04:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 
00:17:54.426 04:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:54.426 04:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:17:54.426 [2024-07-23 04:13:03.200358] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:54.426 [2024-07-23 04:13:03.202609] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:54.426 [2024-07-23 04:13:03.202694] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:54.426 [2024-07-23 04:13:03.202916] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041480 00:17:54.426 [2024-07-23 04:13:03.202938] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:54.426 [2024-07-23 04:13:03.203273] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:17:54.426 [2024-07-23 04:13:03.203517] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041480 00:17:54.426 [2024-07-23 04:13:03.203532] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041480 00:17:54.426 [2024-07-23 04:13:03.203718] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:54.684 04:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:17:54.684 04:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:54.684 04:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:54.684 04:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:54.685 04:13:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:54.685 04:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:54.685 04:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:54.685 04:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:54.685 04:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:54.685 04:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:54.685 04:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:54.685 04:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.685 04:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:54.685 "name": "raid_bdev1", 00:17:54.685 "uuid": "e3e2d1ad-d9c2-4678-b26e-4eb1e06d93cf", 00:17:54.685 "strip_size_kb": 64, 00:17:54.685 "state": "online", 00:17:54.685 "raid_level": "raid0", 00:17:54.685 "superblock": true, 00:17:54.685 "num_base_bdevs": 3, 00:17:54.685 "num_base_bdevs_discovered": 3, 00:17:54.685 "num_base_bdevs_operational": 3, 00:17:54.685 "base_bdevs_list": [ 00:17:54.685 { 00:17:54.685 "name": "pt1", 00:17:54.685 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:54.685 "is_configured": true, 00:17:54.685 "data_offset": 2048, 00:17:54.685 "data_size": 63488 00:17:54.685 }, 00:17:54.685 { 00:17:54.685 "name": "pt2", 00:17:54.685 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:54.685 "is_configured": true, 00:17:54.685 "data_offset": 2048, 00:17:54.685 "data_size": 63488 00:17:54.685 }, 00:17:54.685 { 00:17:54.685 "name": "pt3", 00:17:54.685 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:54.685 
"is_configured": true, 00:17:54.685 "data_offset": 2048, 00:17:54.685 "data_size": 63488 00:17:54.685 } 00:17:54.685 ] 00:17:54.685 }' 00:17:54.685 04:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:54.685 04:13:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.251 04:13:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:17:55.251 04:13:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:55.251 04:13:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:55.251 04:13:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:55.251 04:13:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:55.251 04:13:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:55.251 04:13:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:55.251 04:13:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:55.510 [2024-07-23 04:13:04.231480] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:55.510 04:13:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:55.510 "name": "raid_bdev1", 00:17:55.510 "aliases": [ 00:17:55.510 "e3e2d1ad-d9c2-4678-b26e-4eb1e06d93cf" 00:17:55.510 ], 00:17:55.510 "product_name": "Raid Volume", 00:17:55.510 "block_size": 512, 00:17:55.510 "num_blocks": 190464, 00:17:55.510 "uuid": "e3e2d1ad-d9c2-4678-b26e-4eb1e06d93cf", 00:17:55.510 "assigned_rate_limits": { 00:17:55.510 "rw_ios_per_sec": 0, 00:17:55.510 "rw_mbytes_per_sec": 0, 00:17:55.510 "r_mbytes_per_sec": 0, 00:17:55.510 "w_mbytes_per_sec": 0 00:17:55.510 }, 
00:17:55.510 "claimed": false, 00:17:55.510 "zoned": false, 00:17:55.510 "supported_io_types": { 00:17:55.510 "read": true, 00:17:55.510 "write": true, 00:17:55.510 "unmap": true, 00:17:55.510 "flush": true, 00:17:55.510 "reset": true, 00:17:55.510 "nvme_admin": false, 00:17:55.510 "nvme_io": false, 00:17:55.510 "nvme_io_md": false, 00:17:55.510 "write_zeroes": true, 00:17:55.510 "zcopy": false, 00:17:55.510 "get_zone_info": false, 00:17:55.510 "zone_management": false, 00:17:55.510 "zone_append": false, 00:17:55.510 "compare": false, 00:17:55.510 "compare_and_write": false, 00:17:55.510 "abort": false, 00:17:55.510 "seek_hole": false, 00:17:55.510 "seek_data": false, 00:17:55.510 "copy": false, 00:17:55.510 "nvme_iov_md": false 00:17:55.510 }, 00:17:55.510 "memory_domains": [ 00:17:55.510 { 00:17:55.510 "dma_device_id": "system", 00:17:55.510 "dma_device_type": 1 00:17:55.510 }, 00:17:55.510 { 00:17:55.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.510 "dma_device_type": 2 00:17:55.510 }, 00:17:55.510 { 00:17:55.510 "dma_device_id": "system", 00:17:55.510 "dma_device_type": 1 00:17:55.510 }, 00:17:55.510 { 00:17:55.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.510 "dma_device_type": 2 00:17:55.510 }, 00:17:55.510 { 00:17:55.510 "dma_device_id": "system", 00:17:55.510 "dma_device_type": 1 00:17:55.510 }, 00:17:55.510 { 00:17:55.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.510 "dma_device_type": 2 00:17:55.510 } 00:17:55.510 ], 00:17:55.510 "driver_specific": { 00:17:55.510 "raid": { 00:17:55.510 "uuid": "e3e2d1ad-d9c2-4678-b26e-4eb1e06d93cf", 00:17:55.510 "strip_size_kb": 64, 00:17:55.510 "state": "online", 00:17:55.510 "raid_level": "raid0", 00:17:55.510 "superblock": true, 00:17:55.510 "num_base_bdevs": 3, 00:17:55.510 "num_base_bdevs_discovered": 3, 00:17:55.510 "num_base_bdevs_operational": 3, 00:17:55.510 "base_bdevs_list": [ 00:17:55.510 { 00:17:55.510 "name": "pt1", 00:17:55.510 "uuid": "00000000-0000-0000-0000-000000000001", 
00:17:55.510 "is_configured": true, 00:17:55.510 "data_offset": 2048, 00:17:55.510 "data_size": 63488 00:17:55.510 }, 00:17:55.510 { 00:17:55.510 "name": "pt2", 00:17:55.510 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:55.510 "is_configured": true, 00:17:55.510 "data_offset": 2048, 00:17:55.510 "data_size": 63488 00:17:55.510 }, 00:17:55.510 { 00:17:55.510 "name": "pt3", 00:17:55.510 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:55.510 "is_configured": true, 00:17:55.510 "data_offset": 2048, 00:17:55.510 "data_size": 63488 00:17:55.510 } 00:17:55.510 ] 00:17:55.510 } 00:17:55.510 } 00:17:55.510 }' 00:17:55.510 04:13:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:55.768 04:13:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:55.768 pt2 00:17:55.768 pt3' 00:17:55.768 04:13:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:55.768 04:13:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:55.768 04:13:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:55.768 04:13:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:55.768 "name": "pt1", 00:17:55.768 "aliases": [ 00:17:55.769 "00000000-0000-0000-0000-000000000001" 00:17:55.769 ], 00:17:55.769 "product_name": "passthru", 00:17:55.769 "block_size": 512, 00:17:55.769 "num_blocks": 65536, 00:17:55.769 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:55.769 "assigned_rate_limits": { 00:17:55.769 "rw_ios_per_sec": 0, 00:17:55.769 "rw_mbytes_per_sec": 0, 00:17:55.769 "r_mbytes_per_sec": 0, 00:17:55.769 "w_mbytes_per_sec": 0 00:17:55.769 }, 00:17:55.769 "claimed": true, 00:17:55.769 "claim_type": "exclusive_write", 
00:17:55.769 "zoned": false, 00:17:55.769 "supported_io_types": { 00:17:55.769 "read": true, 00:17:55.769 "write": true, 00:17:55.769 "unmap": true, 00:17:55.769 "flush": true, 00:17:55.769 "reset": true, 00:17:55.769 "nvme_admin": false, 00:17:55.769 "nvme_io": false, 00:17:55.769 "nvme_io_md": false, 00:17:55.769 "write_zeroes": true, 00:17:55.769 "zcopy": true, 00:17:55.769 "get_zone_info": false, 00:17:55.769 "zone_management": false, 00:17:55.769 "zone_append": false, 00:17:55.769 "compare": false, 00:17:55.769 "compare_and_write": false, 00:17:55.769 "abort": true, 00:17:55.769 "seek_hole": false, 00:17:55.769 "seek_data": false, 00:17:55.769 "copy": true, 00:17:55.769 "nvme_iov_md": false 00:17:55.769 }, 00:17:55.769 "memory_domains": [ 00:17:55.769 { 00:17:55.769 "dma_device_id": "system", 00:17:55.769 "dma_device_type": 1 00:17:55.769 }, 00:17:55.769 { 00:17:55.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.769 "dma_device_type": 2 00:17:55.769 } 00:17:55.769 ], 00:17:55.769 "driver_specific": { 00:17:55.769 "passthru": { 00:17:55.769 "name": "pt1", 00:17:55.769 "base_bdev_name": "malloc1" 00:17:55.769 } 00:17:55.769 } 00:17:55.769 }' 00:17:55.769 04:13:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:56.027 04:13:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:56.027 04:13:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:56.027 04:13:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:56.027 04:13:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:56.027 04:13:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:56.027 04:13:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:56.027 04:13:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:56.027 04:13:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:56.027 04:13:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:56.027 04:13:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:56.286 04:13:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:56.286 04:13:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:56.286 04:13:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:56.286 04:13:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:56.286 04:13:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:56.286 "name": "pt2", 00:17:56.286 "aliases": [ 00:17:56.286 "00000000-0000-0000-0000-000000000002" 00:17:56.286 ], 00:17:56.286 "product_name": "passthru", 00:17:56.286 "block_size": 512, 00:17:56.286 "num_blocks": 65536, 00:17:56.286 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:56.286 "assigned_rate_limits": { 00:17:56.286 "rw_ios_per_sec": 0, 00:17:56.286 "rw_mbytes_per_sec": 0, 00:17:56.286 "r_mbytes_per_sec": 0, 00:17:56.286 "w_mbytes_per_sec": 0 00:17:56.286 }, 00:17:56.286 "claimed": true, 00:17:56.286 "claim_type": "exclusive_write", 00:17:56.286 "zoned": false, 00:17:56.286 "supported_io_types": { 00:17:56.286 "read": true, 00:17:56.286 "write": true, 00:17:56.286 "unmap": true, 00:17:56.286 "flush": true, 00:17:56.286 "reset": true, 00:17:56.286 "nvme_admin": false, 00:17:56.286 "nvme_io": false, 00:17:56.286 "nvme_io_md": false, 00:17:56.286 "write_zeroes": true, 00:17:56.286 "zcopy": true, 00:17:56.286 "get_zone_info": false, 00:17:56.286 "zone_management": false, 00:17:56.286 "zone_append": false, 00:17:56.286 "compare": false, 00:17:56.286 "compare_and_write": false, 00:17:56.286 
"abort": true, 00:17:56.286 "seek_hole": false, 00:17:56.286 "seek_data": false, 00:17:56.286 "copy": true, 00:17:56.286 "nvme_iov_md": false 00:17:56.286 }, 00:17:56.286 "memory_domains": [ 00:17:56.286 { 00:17:56.286 "dma_device_id": "system", 00:17:56.286 "dma_device_type": 1 00:17:56.286 }, 00:17:56.286 { 00:17:56.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:56.286 "dma_device_type": 2 00:17:56.286 } 00:17:56.286 ], 00:17:56.286 "driver_specific": { 00:17:56.286 "passthru": { 00:17:56.286 "name": "pt2", 00:17:56.286 "base_bdev_name": "malloc2" 00:17:56.286 } 00:17:56.286 } 00:17:56.286 }' 00:17:56.286 04:13:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:56.545 04:13:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:56.545 04:13:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:56.545 04:13:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:56.545 04:13:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:56.545 04:13:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:56.545 04:13:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:56.545 04:13:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:56.545 04:13:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:56.545 04:13:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:56.804 04:13:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:56.804 04:13:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:56.804 04:13:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:56.804 04:13:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:17:56.804 04:13:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:57.063 04:13:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:57.063 "name": "pt3", 00:17:57.063 "aliases": [ 00:17:57.063 "00000000-0000-0000-0000-000000000003" 00:17:57.063 ], 00:17:57.063 "product_name": "passthru", 00:17:57.063 "block_size": 512, 00:17:57.063 "num_blocks": 65536, 00:17:57.063 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:57.063 "assigned_rate_limits": { 00:17:57.063 "rw_ios_per_sec": 0, 00:17:57.063 "rw_mbytes_per_sec": 0, 00:17:57.063 "r_mbytes_per_sec": 0, 00:17:57.063 "w_mbytes_per_sec": 0 00:17:57.063 }, 00:17:57.063 "claimed": true, 00:17:57.063 "claim_type": "exclusive_write", 00:17:57.063 "zoned": false, 00:17:57.063 "supported_io_types": { 00:17:57.063 "read": true, 00:17:57.063 "write": true, 00:17:57.063 "unmap": true, 00:17:57.063 "flush": true, 00:17:57.063 "reset": true, 00:17:57.063 "nvme_admin": false, 00:17:57.063 "nvme_io": false, 00:17:57.063 "nvme_io_md": false, 00:17:57.063 "write_zeroes": true, 00:17:57.063 "zcopy": true, 00:17:57.063 "get_zone_info": false, 00:17:57.063 "zone_management": false, 00:17:57.063 "zone_append": false, 00:17:57.063 "compare": false, 00:17:57.063 "compare_and_write": false, 00:17:57.063 "abort": true, 00:17:57.063 "seek_hole": false, 00:17:57.063 "seek_data": false, 00:17:57.063 "copy": true, 00:17:57.063 "nvme_iov_md": false 00:17:57.063 }, 00:17:57.063 "memory_domains": [ 00:17:57.063 { 00:17:57.063 "dma_device_id": "system", 00:17:57.063 "dma_device_type": 1 00:17:57.063 }, 00:17:57.063 { 00:17:57.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.063 "dma_device_type": 2 00:17:57.063 } 00:17:57.063 ], 00:17:57.063 "driver_specific": { 00:17:57.063 "passthru": { 00:17:57.063 "name": "pt3", 00:17:57.063 "base_bdev_name": "malloc3" 
00:17:57.063 } 00:17:57.063 } 00:17:57.063 }' 00:17:57.063 04:13:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:57.063 04:13:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:57.063 04:13:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:57.063 04:13:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:57.063 04:13:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:57.063 04:13:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:57.063 04:13:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:57.063 04:13:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:57.322 04:13:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:57.322 04:13:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:57.322 04:13:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:57.322 04:13:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:57.322 04:13:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:57.322 04:13:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:17:57.581 [2024-07-23 04:13:06.176735] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:57.581 04:13:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=e3e2d1ad-d9c2-4678-b26e-4eb1e06d93cf 00:17:57.581 04:13:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z e3e2d1ad-d9c2-4678-b26e-4eb1e06d93cf ']' 00:17:57.581 04:13:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:57.840 [2024-07-23 04:13:06.404963] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:57.840 [2024-07-23 04:13:06.404997] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:57.840 [2024-07-23 04:13:06.405085] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:57.840 [2024-07-23 04:13:06.405169] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:57.840 [2024-07-23 04:13:06.405185] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041480 name raid_bdev1, state offline 00:17:57.840 04:13:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.840 04:13:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:17:58.099 04:13:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:17:58.099 04:13:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:17:58.099 04:13:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:58.099 04:13:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:58.099 04:13:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:58.099 04:13:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:58.358 04:13:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:58.358 04:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:58.617 04:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:58.617 04:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:58.877 04:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:17:58.877 04:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:58.877 04:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:17:58.877 04:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:58.877 04:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:17:58.877 04:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:58.877 04:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:17:58.877 04:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:58.877 04:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:17:58.877 04:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:58.877 04:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:17:58.877 04:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:17:58.877 04:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:59.136 [2024-07-23 04:13:07.756533] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:59.136 [2024-07-23 04:13:07.758840] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:59.136 [2024-07-23 04:13:07.758904] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:59.136 [2024-07-23 04:13:07.758962] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:59.136 [2024-07-23 04:13:07.759015] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:59.136 [2024-07-23 04:13:07.759044] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:59.136 [2024-07-23 04:13:07.759068] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:59.136 [2024-07-23 04:13:07.759082] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041a80 name raid_bdev1, state configuring 00:17:59.136 request: 00:17:59.136 { 00:17:59.136 "name": "raid_bdev1", 00:17:59.136 
"raid_level": "raid0", 00:17:59.136 "base_bdevs": [ 00:17:59.136 "malloc1", 00:17:59.136 "malloc2", 00:17:59.136 "malloc3" 00:17:59.136 ], 00:17:59.136 "strip_size_kb": 64, 00:17:59.136 "superblock": false, 00:17:59.136 "method": "bdev_raid_create", 00:17:59.136 "req_id": 1 00:17:59.136 } 00:17:59.136 Got JSON-RPC error response 00:17:59.136 response: 00:17:59.136 { 00:17:59.136 "code": -17, 00:17:59.136 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:59.136 } 00:17:59.136 04:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:17:59.136 04:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:59.136 04:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:59.136 04:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:59.136 04:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:59.136 04:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:17:59.395 04:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:17:59.395 04:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:17:59.395 04:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:59.654 [2024-07-23 04:13:08.201674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:59.654 [2024-07-23 04:13:08.201734] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:59.654 [2024-07-23 04:13:08.201759] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000042080 00:17:59.654 [2024-07-23 04:13:08.201774] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:59.654 [2024-07-23 04:13:08.204562] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:59.654 [2024-07-23 04:13:08.204596] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:59.654 [2024-07-23 04:13:08.204690] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:59.654 [2024-07-23 04:13:08.204767] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:59.654 pt1 00:17:59.654 04:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:17:59.654 04:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:59.654 04:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:59.654 04:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:59.654 04:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:59.654 04:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:59.654 04:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:59.654 04:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:59.654 04:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:59.654 04:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:59.654 04:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:59.654 04:13:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.912 04:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:59.912 "name": "raid_bdev1", 00:17:59.912 "uuid": "e3e2d1ad-d9c2-4678-b26e-4eb1e06d93cf", 00:17:59.912 "strip_size_kb": 64, 00:17:59.912 "state": "configuring", 00:17:59.912 "raid_level": "raid0", 00:17:59.912 "superblock": true, 00:17:59.912 "num_base_bdevs": 3, 00:17:59.912 "num_base_bdevs_discovered": 1, 00:17:59.912 "num_base_bdevs_operational": 3, 00:17:59.912 "base_bdevs_list": [ 00:17:59.912 { 00:17:59.912 "name": "pt1", 00:17:59.912 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:59.912 "is_configured": true, 00:17:59.912 "data_offset": 2048, 00:17:59.912 "data_size": 63488 00:17:59.912 }, 00:17:59.912 { 00:17:59.912 "name": null, 00:17:59.912 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:59.912 "is_configured": false, 00:17:59.912 "data_offset": 2048, 00:17:59.912 "data_size": 63488 00:17:59.912 }, 00:17:59.912 { 00:17:59.912 "name": null, 00:17:59.912 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:59.913 "is_configured": false, 00:17:59.913 "data_offset": 2048, 00:17:59.913 "data_size": 63488 00:17:59.913 } 00:17:59.913 ] 00:17:59.913 }' 00:17:59.913 04:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:59.913 04:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.502 04:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:18:00.502 04:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:00.502 [2024-07-23 04:13:09.244486] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:00.502 [2024-07-23 04:13:09.244555] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:00.502 [2024-07-23 04:13:09.244584] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042980 00:18:00.502 [2024-07-23 04:13:09.244600] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.502 [2024-07-23 04:13:09.245187] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.502 [2024-07-23 04:13:09.245212] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:00.502 [2024-07-23 04:13:09.245308] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:00.502 [2024-07-23 04:13:09.245335] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:00.502 pt2 00:18:00.502 04:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:00.761 [2024-07-23 04:13:09.469119] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:00.761 04:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:18:00.761 04:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:00.761 04:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:00.761 04:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:00.761 04:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:00.761 04:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:00.761 04:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:00.761 04:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:18:00.761 04:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:00.761 04:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:00.761 04:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:00.761 04:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.021 04:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:01.021 "name": "raid_bdev1", 00:18:01.021 "uuid": "e3e2d1ad-d9c2-4678-b26e-4eb1e06d93cf", 00:18:01.021 "strip_size_kb": 64, 00:18:01.021 "state": "configuring", 00:18:01.021 "raid_level": "raid0", 00:18:01.021 "superblock": true, 00:18:01.021 "num_base_bdevs": 3, 00:18:01.021 "num_base_bdevs_discovered": 1, 00:18:01.021 "num_base_bdevs_operational": 3, 00:18:01.021 "base_bdevs_list": [ 00:18:01.021 { 00:18:01.021 "name": "pt1", 00:18:01.021 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:01.021 "is_configured": true, 00:18:01.021 "data_offset": 2048, 00:18:01.021 "data_size": 63488 00:18:01.021 }, 00:18:01.021 { 00:18:01.021 "name": null, 00:18:01.021 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:01.021 "is_configured": false, 00:18:01.021 "data_offset": 2048, 00:18:01.021 "data_size": 63488 00:18:01.021 }, 00:18:01.021 { 00:18:01.021 "name": null, 00:18:01.021 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:01.021 "is_configured": false, 00:18:01.021 "data_offset": 2048, 00:18:01.021 "data_size": 63488 00:18:01.021 } 00:18:01.021 ] 00:18:01.021 }' 00:18:01.021 04:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:01.021 04:13:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.588 04:13:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:18:01.588 04:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:01.588 04:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:01.847 [2024-07-23 04:13:10.507924] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:01.847 [2024-07-23 04:13:10.507997] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.847 [2024-07-23 04:13:10.508021] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042c80 00:18:01.847 [2024-07-23 04:13:10.508039] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.847 [2024-07-23 04:13:10.508616] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.847 [2024-07-23 04:13:10.508645] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:01.847 [2024-07-23 04:13:10.508735] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:01.847 [2024-07-23 04:13:10.508765] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:01.847 pt2 00:18:01.847 04:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:18:01.847 04:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:01.847 04:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:02.106 [2024-07-23 04:13:10.736504] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:02.106 [2024-07-23 04:13:10.736562] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.106 [2024-07-23 04:13:10.736584] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042f80 00:18:02.106 [2024-07-23 04:13:10.736602] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.106 [2024-07-23 04:13:10.737133] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.106 [2024-07-23 04:13:10.737178] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:02.106 [2024-07-23 04:13:10.737297] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:02.106 [2024-07-23 04:13:10.737339] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:02.107 [2024-07-23 04:13:10.737512] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042680 00:18:02.107 [2024-07-23 04:13:10.737531] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:02.107 [2024-07-23 04:13:10.737822] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:18:02.107 [2024-07-23 04:13:10.738055] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042680 00:18:02.107 [2024-07-23 04:13:10.738069] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000042680 00:18:02.107 [2024-07-23 04:13:10.738255] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:02.107 pt3 00:18:02.107 04:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:18:02.107 04:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:02.107 04:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:18:02.107 04:13:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:02.107 04:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:02.107 04:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:02.107 04:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:02.107 04:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:02.107 04:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:02.107 04:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:02.107 04:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:02.107 04:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:02.107 04:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:02.107 04:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.369 04:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:02.369 "name": "raid_bdev1", 00:18:02.369 "uuid": "e3e2d1ad-d9c2-4678-b26e-4eb1e06d93cf", 00:18:02.369 "strip_size_kb": 64, 00:18:02.369 "state": "online", 00:18:02.369 "raid_level": "raid0", 00:18:02.369 "superblock": true, 00:18:02.369 "num_base_bdevs": 3, 00:18:02.369 "num_base_bdevs_discovered": 3, 00:18:02.369 "num_base_bdevs_operational": 3, 00:18:02.369 "base_bdevs_list": [ 00:18:02.369 { 00:18:02.369 "name": "pt1", 00:18:02.369 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:02.369 "is_configured": true, 00:18:02.369 "data_offset": 2048, 00:18:02.369 "data_size": 63488 00:18:02.369 }, 00:18:02.369 { 00:18:02.369 "name": "pt2", 
00:18:02.369 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:02.369 "is_configured": true, 00:18:02.369 "data_offset": 2048, 00:18:02.369 "data_size": 63488 00:18:02.369 }, 00:18:02.369 { 00:18:02.369 "name": "pt3", 00:18:02.369 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:02.369 "is_configured": true, 00:18:02.369 "data_offset": 2048, 00:18:02.369 "data_size": 63488 00:18:02.369 } 00:18:02.369 ] 00:18:02.369 }' 00:18:02.369 04:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:02.369 04:13:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.937 04:13:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:18:02.937 04:13:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:02.937 04:13:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:02.937 04:13:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:02.937 04:13:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:02.937 04:13:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:02.937 04:13:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:02.937 04:13:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:03.197 [2024-07-23 04:13:11.747596] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:03.197 04:13:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:03.197 "name": "raid_bdev1", 00:18:03.197 "aliases": [ 00:18:03.197 "e3e2d1ad-d9c2-4678-b26e-4eb1e06d93cf" 00:18:03.197 ], 00:18:03.197 "product_name": "Raid Volume", 00:18:03.197 "block_size": 512, 
00:18:03.197 "num_blocks": 190464, 00:18:03.197 "uuid": "e3e2d1ad-d9c2-4678-b26e-4eb1e06d93cf", 00:18:03.197 "assigned_rate_limits": { 00:18:03.197 "rw_ios_per_sec": 0, 00:18:03.197 "rw_mbytes_per_sec": 0, 00:18:03.197 "r_mbytes_per_sec": 0, 00:18:03.197 "w_mbytes_per_sec": 0 00:18:03.197 }, 00:18:03.197 "claimed": false, 00:18:03.197 "zoned": false, 00:18:03.197 "supported_io_types": { 00:18:03.197 "read": true, 00:18:03.197 "write": true, 00:18:03.197 "unmap": true, 00:18:03.197 "flush": true, 00:18:03.197 "reset": true, 00:18:03.197 "nvme_admin": false, 00:18:03.197 "nvme_io": false, 00:18:03.197 "nvme_io_md": false, 00:18:03.197 "write_zeroes": true, 00:18:03.197 "zcopy": false, 00:18:03.197 "get_zone_info": false, 00:18:03.197 "zone_management": false, 00:18:03.197 "zone_append": false, 00:18:03.197 "compare": false, 00:18:03.197 "compare_and_write": false, 00:18:03.197 "abort": false, 00:18:03.197 "seek_hole": false, 00:18:03.197 "seek_data": false, 00:18:03.197 "copy": false, 00:18:03.197 "nvme_iov_md": false 00:18:03.197 }, 00:18:03.197 "memory_domains": [ 00:18:03.197 { 00:18:03.197 "dma_device_id": "system", 00:18:03.197 "dma_device_type": 1 00:18:03.197 }, 00:18:03.197 { 00:18:03.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:03.197 "dma_device_type": 2 00:18:03.197 }, 00:18:03.197 { 00:18:03.197 "dma_device_id": "system", 00:18:03.197 "dma_device_type": 1 00:18:03.197 }, 00:18:03.197 { 00:18:03.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:03.197 "dma_device_type": 2 00:18:03.197 }, 00:18:03.197 { 00:18:03.197 "dma_device_id": "system", 00:18:03.197 "dma_device_type": 1 00:18:03.197 }, 00:18:03.197 { 00:18:03.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:03.197 "dma_device_type": 2 00:18:03.197 } 00:18:03.197 ], 00:18:03.197 "driver_specific": { 00:18:03.197 "raid": { 00:18:03.197 "uuid": "e3e2d1ad-d9c2-4678-b26e-4eb1e06d93cf", 00:18:03.197 "strip_size_kb": 64, 00:18:03.197 "state": "online", 00:18:03.197 "raid_level": "raid0", 
00:18:03.197 "superblock": true, 00:18:03.197 "num_base_bdevs": 3, 00:18:03.197 "num_base_bdevs_discovered": 3, 00:18:03.197 "num_base_bdevs_operational": 3, 00:18:03.197 "base_bdevs_list": [ 00:18:03.197 { 00:18:03.197 "name": "pt1", 00:18:03.197 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:03.197 "is_configured": true, 00:18:03.197 "data_offset": 2048, 00:18:03.197 "data_size": 63488 00:18:03.197 }, 00:18:03.197 { 00:18:03.197 "name": "pt2", 00:18:03.197 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:03.197 "is_configured": true, 00:18:03.197 "data_offset": 2048, 00:18:03.197 "data_size": 63488 00:18:03.197 }, 00:18:03.197 { 00:18:03.197 "name": "pt3", 00:18:03.197 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:03.197 "is_configured": true, 00:18:03.197 "data_offset": 2048, 00:18:03.197 "data_size": 63488 00:18:03.197 } 00:18:03.197 ] 00:18:03.197 } 00:18:03.197 } 00:18:03.197 }' 00:18:03.197 04:13:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:03.197 04:13:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:18:03.197 pt2 00:18:03.197 pt3' 00:18:03.197 04:13:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:03.197 04:13:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:03.197 04:13:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:03.457 04:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:03.457 "name": "pt1", 00:18:03.457 "aliases": [ 00:18:03.457 "00000000-0000-0000-0000-000000000001" 00:18:03.457 ], 00:18:03.457 "product_name": "passthru", 00:18:03.457 "block_size": 512, 00:18:03.457 "num_blocks": 65536, 00:18:03.457 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:18:03.457 "assigned_rate_limits": { 00:18:03.457 "rw_ios_per_sec": 0, 00:18:03.457 "rw_mbytes_per_sec": 0, 00:18:03.457 "r_mbytes_per_sec": 0, 00:18:03.457 "w_mbytes_per_sec": 0 00:18:03.457 }, 00:18:03.457 "claimed": true, 00:18:03.457 "claim_type": "exclusive_write", 00:18:03.457 "zoned": false, 00:18:03.457 "supported_io_types": { 00:18:03.457 "read": true, 00:18:03.457 "write": true, 00:18:03.457 "unmap": true, 00:18:03.457 "flush": true, 00:18:03.457 "reset": true, 00:18:03.457 "nvme_admin": false, 00:18:03.457 "nvme_io": false, 00:18:03.457 "nvme_io_md": false, 00:18:03.457 "write_zeroes": true, 00:18:03.457 "zcopy": true, 00:18:03.457 "get_zone_info": false, 00:18:03.457 "zone_management": false, 00:18:03.457 "zone_append": false, 00:18:03.457 "compare": false, 00:18:03.457 "compare_and_write": false, 00:18:03.457 "abort": true, 00:18:03.457 "seek_hole": false, 00:18:03.457 "seek_data": false, 00:18:03.457 "copy": true, 00:18:03.457 "nvme_iov_md": false 00:18:03.457 }, 00:18:03.457 "memory_domains": [ 00:18:03.457 { 00:18:03.457 "dma_device_id": "system", 00:18:03.457 "dma_device_type": 1 00:18:03.457 }, 00:18:03.457 { 00:18:03.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:03.457 "dma_device_type": 2 00:18:03.457 } 00:18:03.457 ], 00:18:03.457 "driver_specific": { 00:18:03.457 "passthru": { 00:18:03.457 "name": "pt1", 00:18:03.457 "base_bdev_name": "malloc1" 00:18:03.457 } 00:18:03.457 } 00:18:03.457 }' 00:18:03.457 04:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:03.457 04:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:03.457 04:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:03.457 04:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:03.457 04:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:03.457 04:13:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:03.457 04:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:03.716 04:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:03.716 04:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:03.716 04:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:03.716 04:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:03.716 04:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:03.716 04:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:03.716 04:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:03.716 04:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:03.976 04:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:03.976 "name": "pt2", 00:18:03.976 "aliases": [ 00:18:03.976 "00000000-0000-0000-0000-000000000002" 00:18:03.976 ], 00:18:03.976 "product_name": "passthru", 00:18:03.976 "block_size": 512, 00:18:03.976 "num_blocks": 65536, 00:18:03.976 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:03.976 "assigned_rate_limits": { 00:18:03.976 "rw_ios_per_sec": 0, 00:18:03.976 "rw_mbytes_per_sec": 0, 00:18:03.976 "r_mbytes_per_sec": 0, 00:18:03.976 "w_mbytes_per_sec": 0 00:18:03.976 }, 00:18:03.976 "claimed": true, 00:18:03.976 "claim_type": "exclusive_write", 00:18:03.976 "zoned": false, 00:18:03.976 "supported_io_types": { 00:18:03.976 "read": true, 00:18:03.976 "write": true, 00:18:03.976 "unmap": true, 00:18:03.976 "flush": true, 00:18:03.976 "reset": true, 00:18:03.976 "nvme_admin": false, 00:18:03.976 
"nvme_io": false, 00:18:03.976 "nvme_io_md": false, 00:18:03.976 "write_zeroes": true, 00:18:03.976 "zcopy": true, 00:18:03.976 "get_zone_info": false, 00:18:03.976 "zone_management": false, 00:18:03.976 "zone_append": false, 00:18:03.976 "compare": false, 00:18:03.976 "compare_and_write": false, 00:18:03.976 "abort": true, 00:18:03.976 "seek_hole": false, 00:18:03.976 "seek_data": false, 00:18:03.976 "copy": true, 00:18:03.976 "nvme_iov_md": false 00:18:03.976 }, 00:18:03.976 "memory_domains": [ 00:18:03.976 { 00:18:03.976 "dma_device_id": "system", 00:18:03.976 "dma_device_type": 1 00:18:03.976 }, 00:18:03.976 { 00:18:03.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:03.976 "dma_device_type": 2 00:18:03.976 } 00:18:03.976 ], 00:18:03.976 "driver_specific": { 00:18:03.976 "passthru": { 00:18:03.976 "name": "pt2", 00:18:03.976 "base_bdev_name": "malloc2" 00:18:03.976 } 00:18:03.976 } 00:18:03.976 }' 00:18:03.976 04:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:03.976 04:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:03.976 04:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:03.976 04:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:03.976 04:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:04.235 04:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:04.235 04:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:04.235 04:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:04.235 04:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:04.235 04:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:04.235 04:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:18:04.235 04:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:04.235 04:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:04.235 04:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:18:04.235 04:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:04.494 04:13:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:04.494 "name": "pt3", 00:18:04.494 "aliases": [ 00:18:04.494 "00000000-0000-0000-0000-000000000003" 00:18:04.494 ], 00:18:04.494 "product_name": "passthru", 00:18:04.494 "block_size": 512, 00:18:04.494 "num_blocks": 65536, 00:18:04.494 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:04.494 "assigned_rate_limits": { 00:18:04.494 "rw_ios_per_sec": 0, 00:18:04.494 "rw_mbytes_per_sec": 0, 00:18:04.494 "r_mbytes_per_sec": 0, 00:18:04.494 "w_mbytes_per_sec": 0 00:18:04.494 }, 00:18:04.494 "claimed": true, 00:18:04.494 "claim_type": "exclusive_write", 00:18:04.494 "zoned": false, 00:18:04.494 "supported_io_types": { 00:18:04.494 "read": true, 00:18:04.494 "write": true, 00:18:04.495 "unmap": true, 00:18:04.495 "flush": true, 00:18:04.495 "reset": true, 00:18:04.495 "nvme_admin": false, 00:18:04.495 "nvme_io": false, 00:18:04.495 "nvme_io_md": false, 00:18:04.495 "write_zeroes": true, 00:18:04.495 "zcopy": true, 00:18:04.495 "get_zone_info": false, 00:18:04.495 "zone_management": false, 00:18:04.495 "zone_append": false, 00:18:04.495 "compare": false, 00:18:04.495 "compare_and_write": false, 00:18:04.495 "abort": true, 00:18:04.495 "seek_hole": false, 00:18:04.495 "seek_data": false, 00:18:04.495 "copy": true, 00:18:04.495 "nvme_iov_md": false 00:18:04.495 }, 00:18:04.495 "memory_domains": [ 00:18:04.495 { 00:18:04.495 "dma_device_id": "system", 00:18:04.495 
"dma_device_type": 1 00:18:04.495 }, 00:18:04.495 { 00:18:04.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.495 "dma_device_type": 2 00:18:04.495 } 00:18:04.495 ], 00:18:04.495 "driver_specific": { 00:18:04.495 "passthru": { 00:18:04.495 "name": "pt3", 00:18:04.495 "base_bdev_name": "malloc3" 00:18:04.495 } 00:18:04.495 } 00:18:04.495 }' 00:18:04.495 04:13:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:04.495 04:13:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:04.495 04:13:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:04.495 04:13:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:04.754 04:13:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:04.754 04:13:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:04.754 04:13:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:04.754 04:13:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:04.754 04:13:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:04.754 04:13:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:04.754 04:13:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:04.754 04:13:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:04.754 04:13:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:04.754 04:13:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:18:05.014 [2024-07-23 04:13:13.708910] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:05.014 04:13:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' e3e2d1ad-d9c2-4678-b26e-4eb1e06d93cf '!=' e3e2d1ad-d9c2-4678-b26e-4eb1e06d93cf ']' 00:18:05.014 04:13:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:18:05.014 04:13:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:05.014 04:13:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:05.014 04:13:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2655686 00:18:05.014 04:13:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 2655686 ']' 00:18:05.014 04:13:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 2655686 00:18:05.014 04:13:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:18:05.014 04:13:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:05.014 04:13:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2655686 00:18:05.014 04:13:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:05.014 04:13:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:05.014 04:13:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2655686' 00:18:05.014 killing process with pid 2655686 00:18:05.014 04:13:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 2655686 00:18:05.014 [2024-07-23 04:13:13.786858] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:05.014 [2024-07-23 04:13:13.786964] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:05.014 04:13:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 2655686 00:18:05.014 [2024-07-23 04:13:13.787039] bdev_raid.c: 
463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:05.014 [2024-07-23 04:13:13.787059] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042680 name raid_bdev1, state offline 00:18:05.583 [2024-07-23 04:13:14.112422] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:07.493 04:13:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:18:07.493 00:18:07.493 real 0m15.358s 00:18:07.493 user 0m25.790s 00:18:07.493 sys 0m2.615s 00:18:07.493 04:13:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:07.493 04:13:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.493 ************************************ 00:18:07.493 END TEST raid_superblock_test 00:18:07.493 ************************************ 00:18:07.493 04:13:15 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:07.493 04:13:15 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:18:07.493 04:13:15 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:07.493 04:13:15 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:07.493 04:13:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:07.493 ************************************ 00:18:07.493 START TEST raid_read_error_test 00:18:07.493 ************************************ 00:18:07.493 04:13:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 3 read 00:18:07.493 04:13:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:18:07.493 04:13:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:18:07.493 04:13:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:18:07.493 04:13:15 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:18:07.493 04:13:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:07.493 04:13:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:18:07.493 04:13:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:07.493 04:13:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:07.493 04:13:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:18:07.493 04:13:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:07.493 04:13:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:07.493 04:13:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:18:07.493 04:13:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:07.493 04:13:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:07.493 04:13:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:07.493 04:13:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:18:07.493 04:13:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:18:07.493 04:13:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:18:07.493 04:13:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:18:07.493 04:13:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:18:07.493 04:13:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:18:07.493 04:13:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:18:07.493 04:13:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # 
strip_size=64 00:18:07.493 04:13:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:18:07.493 04:13:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:18:07.493 04:13:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.kPc08S9xjz 00:18:07.493 04:13:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2658610 00:18:07.493 04:13:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2658610 /var/tmp/spdk-raid.sock 00:18:07.493 04:13:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:07.493 04:13:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 2658610 ']' 00:18:07.493 04:13:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:07.493 04:13:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:07.493 04:13:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:07.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:07.493 04:13:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:07.493 04:13:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.493 [2024-07-23 04:13:16.005416] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:18:07.493 [2024-07-23 04:13:16.005538] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2658610 ] 00:18:07.493 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:07.493 EAL: Requested device 0000:3d:01.0 cannot be used 00:18:07.494 [identical message pair repeated for each remaining QAT device, 0000:3d:01.1 through 0000:3f:02.7] 00:18:07.494 [2024-07-23 04:13:16.230804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.753 [2024-07-23 04:13:16.509606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.323 [2024-07-23 04:13:16.825488] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:08.323 [2024-07-23 04:13:16.825522] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:08.323 04:13:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:08.323 04:13:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:18:08.323 04:13:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:08.323 04:13:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:08.891 BaseBdev1_malloc 00:18:08.891 04:13:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:18:09.150 true 00:18:09.150 04:13:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:09.719 [2024-07-23 04:13:18.258675] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:09.719 [2024-07-23 04:13:18.258737] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:09.719 [2024-07-23 04:13:18.258766] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:18:09.719 [2024-07-23 04:13:18.258788] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:09.719 [2024-07-23 04:13:18.261592] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:09.719 [2024-07-23 04:13:18.261631] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:09.719 BaseBdev1 00:18:09.719 04:13:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:09.719 04:13:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:09.979 BaseBdev2_malloc 00:18:09.979 04:13:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:18:10.548 true 00:18:10.548 04:13:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:10.548 [2024-07-23 04:13:19.293549] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:18:10.548 [2024-07-23 04:13:19.293609] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.548 [2024-07-23 04:13:19.293636] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:18:10.548 [2024-07-23 04:13:19.293657] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.548 [2024-07-23 04:13:19.296445] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.548 [2024-07-23 04:13:19.296492] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:10.548 BaseBdev2 00:18:10.548 04:13:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:10.548 04:13:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:11.117 BaseBdev3_malloc 00:18:11.117 04:13:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:18:11.376 true 00:18:11.376 04:13:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:11.635 [2024-07-23 04:13:20.294336] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:11.635 [2024-07-23 04:13:20.294397] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:11.635 [2024-07-23 04:13:20.294424] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041780 00:18:11.635 [2024-07-23 04:13:20.294442] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:11.635 [2024-07-23 
04:13:20.297263] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:11.635 [2024-07-23 04:13:20.297300] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:11.635 BaseBdev3 00:18:11.635 04:13:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:18:12.205 [2024-07-23 04:13:20.791702] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:12.205 [2024-07-23 04:13:20.794077] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:12.205 [2024-07-23 04:13:20.794178] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:12.205 [2024-07-23 04:13:20.794471] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041d80 00:18:12.205 [2024-07-23 04:13:20.794489] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:12.205 [2024-07-23 04:13:20.794823] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:18:12.205 [2024-07-23 04:13:20.795092] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041d80 00:18:12.205 [2024-07-23 04:13:20.795117] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041d80 00:18:12.205 [2024-07-23 04:13:20.795327] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:12.205 04:13:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:18:12.205 04:13:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:12.205 04:13:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=online 00:18:12.205 04:13:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:12.205 04:13:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:12.205 04:13:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:12.205 04:13:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:12.206 04:13:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:12.206 04:13:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:12.206 04:13:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:12.206 04:13:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.206 04:13:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.465 04:13:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:12.465 "name": "raid_bdev1", 00:18:12.465 "uuid": "6dd2edc6-d839-4c9e-b84e-84a88ba08568", 00:18:12.465 "strip_size_kb": 64, 00:18:12.465 "state": "online", 00:18:12.465 "raid_level": "raid0", 00:18:12.465 "superblock": true, 00:18:12.465 "num_base_bdevs": 3, 00:18:12.465 "num_base_bdevs_discovered": 3, 00:18:12.465 "num_base_bdevs_operational": 3, 00:18:12.465 "base_bdevs_list": [ 00:18:12.465 { 00:18:12.465 "name": "BaseBdev1", 00:18:12.465 "uuid": "18d4356b-22f6-5c29-9633-ccf2a4bce7f6", 00:18:12.465 "is_configured": true, 00:18:12.465 "data_offset": 2048, 00:18:12.465 "data_size": 63488 00:18:12.465 }, 00:18:12.465 { 00:18:12.465 "name": "BaseBdev2", 00:18:12.465 "uuid": "f623f33b-ee55-52dc-abfe-00497c498acc", 00:18:12.465 "is_configured": true, 00:18:12.465 "data_offset": 2048, 
00:18:12.465 "data_size": 63488 00:18:12.465 }, 00:18:12.465 { 00:18:12.465 "name": "BaseBdev3", 00:18:12.465 "uuid": "9ae2ba14-c7a6-5be8-a712-7d9004175b66", 00:18:12.465 "is_configured": true, 00:18:12.465 "data_offset": 2048, 00:18:12.465 "data_size": 63488 00:18:12.465 } 00:18:12.465 ] 00:18:12.465 }' 00:18:12.465 04:13:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:12.465 04:13:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.033 04:13:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:18:13.033 04:13:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:18:13.033 [2024-07-23 04:13:21.728092] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:18:13.970 04:13:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:18:14.539 04:13:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:18:14.539 04:13:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:18:14.539 04:13:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:18:14.539 04:13:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:18:14.539 04:13:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:14.539 04:13:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:14.539 04:13:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:14.539 04:13:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:14.539 04:13:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:14.539 04:13:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:14.539 04:13:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:14.539 04:13:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:14.539 04:13:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:14.539 04:13:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:14.539 04:13:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.797 04:13:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:14.797 "name": "raid_bdev1", 00:18:14.797 "uuid": "6dd2edc6-d839-4c9e-b84e-84a88ba08568", 00:18:14.797 "strip_size_kb": 64, 00:18:14.797 "state": "online", 00:18:14.797 "raid_level": "raid0", 00:18:14.797 "superblock": true, 00:18:14.797 "num_base_bdevs": 3, 00:18:14.797 "num_base_bdevs_discovered": 3, 00:18:14.797 "num_base_bdevs_operational": 3, 00:18:14.797 "base_bdevs_list": [ 00:18:14.797 { 00:18:14.797 "name": "BaseBdev1", 00:18:14.797 "uuid": "18d4356b-22f6-5c29-9633-ccf2a4bce7f6", 00:18:14.797 "is_configured": true, 00:18:14.797 "data_offset": 2048, 00:18:14.797 "data_size": 63488 00:18:14.797 }, 00:18:14.797 { 00:18:14.797 "name": "BaseBdev2", 00:18:14.797 "uuid": "f623f33b-ee55-52dc-abfe-00497c498acc", 00:18:14.797 "is_configured": true, 00:18:14.797 "data_offset": 2048, 00:18:14.797 "data_size": 63488 00:18:14.797 }, 00:18:14.797 { 00:18:14.797 "name": "BaseBdev3", 00:18:14.797 "uuid": "9ae2ba14-c7a6-5be8-a712-7d9004175b66", 
00:18:14.797 "is_configured": true, 00:18:14.797 "data_offset": 2048, 00:18:14.797 "data_size": 63488 00:18:14.797 } 00:18:14.797 ] 00:18:14.797 }' 00:18:14.797 04:13:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:14.797 04:13:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.363 04:13:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:15.930 [2024-07-23 04:13:24.422492] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:15.930 [2024-07-23 04:13:24.422533] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:15.930 [2024-07-23 04:13:24.425861] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:15.930 [2024-07-23 04:13:24.425908] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:15.930 [2024-07-23 04:13:24.425957] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:15.930 [2024-07-23 04:13:24.425973] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041d80 name raid_bdev1, state offline 00:18:15.930 0 00:18:15.930 04:13:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2658610 00:18:15.930 04:13:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 2658610 ']' 00:18:15.930 04:13:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 2658610 00:18:15.930 04:13:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:18:15.930 04:13:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:15.930 04:13:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2658610 
00:18:15.930 04:13:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:15.930 04:13:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:15.930 04:13:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2658610' 00:18:15.930 killing process with pid 2658610 00:18:15.930 04:13:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 2658610 00:18:15.930 [2024-07-23 04:13:24.509613] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:15.930 04:13:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 2658610 00:18:16.187 [2024-07-23 04:13:24.754574] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:18.089 04:13:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.kPc08S9xjz 00:18:18.089 04:13:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:18:18.089 04:13:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:18:18.089 04:13:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.37 00:18:18.089 04:13:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:18:18.089 04:13:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:18.089 04:13:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:18.089 04:13:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.37 != \0\.\0\0 ]] 00:18:18.089 00:18:18.089 real 0m10.737s 00:18:18.089 user 0m16.044s 00:18:18.089 sys 0m1.594s 00:18:18.089 04:13:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:18.089 04:13:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.089 ************************************ 
00:18:18.089 END TEST raid_read_error_test 00:18:18.089 ************************************ 00:18:18.089 04:13:26 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:18.089 04:13:26 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:18:18.089 04:13:26 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:18.089 04:13:26 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:18.089 04:13:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:18.089 ************************************ 00:18:18.089 START TEST raid_write_error_test 00:18:18.089 ************************************ 00:18:18.089 04:13:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 3 write 00:18:18.089 04:13:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:18:18.089 04:13:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:18:18.089 04:13:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:18:18.089 04:13:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:18:18.089 04:13:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:18.089 04:13:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:18:18.089 04:13:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:18.089 04:13:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:18.089 04:13:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:18:18.089 04:13:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:18.089 04:13:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:18.089 04:13:26 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:18:18.089 04:13:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:18.089 04:13:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:18.089 04:13:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:18.089 04:13:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:18:18.089 04:13:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:18:18.089 04:13:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:18:18.089 04:13:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:18:18.089 04:13:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:18:18.089 04:13:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:18:18.089 04:13:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:18:18.089 04:13:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:18:18.089 04:13:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:18:18.089 04:13:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:18:18.089 04:13:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.dII1IW53Y9 00:18:18.089 04:13:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2660547 00:18:18.089 04:13:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2660547 /var/tmp/spdk-raid.sock 00:18:18.089 04:13:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 2660547 ']' 00:18:18.089 04:13:26 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:18.089 04:13:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:18.089 04:13:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:18.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:18.089 04:13:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:18.089 04:13:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.089 04:13:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:18.089 [2024-07-23 04:13:26.827668] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:18:18.090 [2024-07-23 04:13:26.827792] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2660547 ] 00:18:18.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:18.348 EAL: Requested device 0000:3d:01.0 cannot be used 00:18:18.348 [identical message pair repeated for each remaining QAT device, 0000:3d:01.1 through 0000:3f:02.7] 00:18:18.348 [2024-07-23 04:13:27.056319] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.606 [2024-07-23 04:13:27.315207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.173 [2024-07-23 04:13:27.651411] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:19.173 [2024-07-23 04:13:27.651446] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:19.431 04:13:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:19.431 04:13:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:18:19.431 04:13:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:19.431 04:13:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:19.997 BaseBdev1_malloc 00:18:19.997 04:13:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:18:19.997 true 00:18:19.997 04:13:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:20.565 [2024-07-23 04:13:29.236750] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:20.565 [2024-07-23 04:13:29.236815] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:20.565 [2024-07-23 04:13:29.236843] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:18:20.565 [2024-07-23 04:13:29.236866] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:20.565 [2024-07-23 04:13:29.239676] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:20.565 [2024-07-23 04:13:29.239715] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:20.565 BaseBdev1 00:18:20.565 04:13:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:20.565 04:13:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:20.824 BaseBdev2_malloc 00:18:20.824 04:13:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:18:21.392 true 00:18:21.392 04:13:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:21.651 [2024-07-23 04:13:30.272440] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on EE_BaseBdev2_malloc 00:18:21.651 [2024-07-23 04:13:30.272503] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:21.651 [2024-07-23 04:13:30.272529] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:18:21.651 [2024-07-23 04:13:30.272551] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:21.651 [2024-07-23 04:13:30.275350] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:21.651 [2024-07-23 04:13:30.275387] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:21.651 BaseBdev2 00:18:21.651 04:13:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:21.651 04:13:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:22.218 BaseBdev3_malloc 00:18:22.218 04:13:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:18:22.477 true 00:18:22.477 04:13:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:23.045 [2024-07-23 04:13:31.552824] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:23.045 [2024-07-23 04:13:31.552883] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:23.045 [2024-07-23 04:13:31.552911] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041780 00:18:23.045 [2024-07-23 04:13:31.552929] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:23.045 
[2024-07-23 04:13:31.555738] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:23.045 [2024-07-23 04:13:31.555777] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:23.045 BaseBdev3 00:18:23.045 04:13:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:18:23.045 [2024-07-23 04:13:31.789577] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:23.045 [2024-07-23 04:13:31.791939] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:23.045 [2024-07-23 04:13:31.792029] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:23.045 [2024-07-23 04:13:31.792630] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041d80 00:18:23.045 [2024-07-23 04:13:31.792651] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:23.045 [2024-07-23 04:13:31.792987] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:18:23.045 [2024-07-23 04:13:31.793256] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041d80 00:18:23.045 [2024-07-23 04:13:31.793284] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041d80 00:18:23.045 [2024-07-23 04:13:31.793507] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:23.045 04:13:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:18:23.045 04:13:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:23.045 04:13:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 
-- # local expected_state=online 00:18:23.045 04:13:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:23.045 04:13:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:23.045 04:13:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:23.045 04:13:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:23.045 04:13:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:23.045 04:13:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:23.045 04:13:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:23.045 04:13:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:23.045 04:13:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.304 04:13:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:23.304 "name": "raid_bdev1", 00:18:23.304 "uuid": "e95c96c2-7959-4b77-92cd-9c8fe229a182", 00:18:23.304 "strip_size_kb": 64, 00:18:23.304 "state": "online", 00:18:23.304 "raid_level": "raid0", 00:18:23.304 "superblock": true, 00:18:23.304 "num_base_bdevs": 3, 00:18:23.304 "num_base_bdevs_discovered": 3, 00:18:23.304 "num_base_bdevs_operational": 3, 00:18:23.304 "base_bdevs_list": [ 00:18:23.304 { 00:18:23.304 "name": "BaseBdev1", 00:18:23.304 "uuid": "1e2f7b56-cefb-5743-b98c-717f1f244375", 00:18:23.304 "is_configured": true, 00:18:23.304 "data_offset": 2048, 00:18:23.304 "data_size": 63488 00:18:23.304 }, 00:18:23.304 { 00:18:23.304 "name": "BaseBdev2", 00:18:23.304 "uuid": "e076c6c5-dfcc-5822-88c6-2fdf850a4d64", 00:18:23.304 "is_configured": true, 00:18:23.304 
"data_offset": 2048, 00:18:23.304 "data_size": 63488 00:18:23.304 }, 00:18:23.304 { 00:18:23.304 "name": "BaseBdev3", 00:18:23.304 "uuid": "116e7f0a-987a-5f11-90c0-ec967b05ca78", 00:18:23.304 "is_configured": true, 00:18:23.304 "data_offset": 2048, 00:18:23.304 "data_size": 63488 00:18:23.304 } 00:18:23.304 ] 00:18:23.304 }' 00:18:23.304 04:13:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:23.304 04:13:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.872 04:13:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:18:23.872 04:13:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:18:24.131 [2024-07-23 04:13:32.702152] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:18:25.071 04:13:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:18:25.639 04:13:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:18:25.639 04:13:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:18:25.639 04:13:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:18:25.639 04:13:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:18:25.639 04:13:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:25.639 04:13:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:25.639 04:13:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:25.639 
04:13:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:25.639 04:13:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:25.639 04:13:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:25.639 04:13:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:25.639 04:13:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:25.639 04:13:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:25.639 04:13:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:25.639 04:13:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.639 04:13:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:25.639 "name": "raid_bdev1", 00:18:25.639 "uuid": "e95c96c2-7959-4b77-92cd-9c8fe229a182", 00:18:25.639 "strip_size_kb": 64, 00:18:25.639 "state": "online", 00:18:25.639 "raid_level": "raid0", 00:18:25.639 "superblock": true, 00:18:25.639 "num_base_bdevs": 3, 00:18:25.639 "num_base_bdevs_discovered": 3, 00:18:25.639 "num_base_bdevs_operational": 3, 00:18:25.639 "base_bdevs_list": [ 00:18:25.639 { 00:18:25.639 "name": "BaseBdev1", 00:18:25.639 "uuid": "1e2f7b56-cefb-5743-b98c-717f1f244375", 00:18:25.639 "is_configured": true, 00:18:25.639 "data_offset": 2048, 00:18:25.639 "data_size": 63488 00:18:25.639 }, 00:18:25.639 { 00:18:25.639 "name": "BaseBdev2", 00:18:25.639 "uuid": "e076c6c5-dfcc-5822-88c6-2fdf850a4d64", 00:18:25.639 "is_configured": true, 00:18:25.639 "data_offset": 2048, 00:18:25.639 "data_size": 63488 00:18:25.639 }, 00:18:25.639 { 00:18:25.639 "name": "BaseBdev3", 00:18:25.639 "uuid": 
"116e7f0a-987a-5f11-90c0-ec967b05ca78", 00:18:25.639 "is_configured": true, 00:18:25.639 "data_offset": 2048, 00:18:25.639 "data_size": 63488 00:18:25.639 } 00:18:25.639 ] 00:18:25.639 }' 00:18:25.639 04:13:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:25.639 04:13:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.207 04:13:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:26.466 [2024-07-23 04:13:35.056064] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:26.466 [2024-07-23 04:13:35.056107] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:26.466 [2024-07-23 04:13:35.059391] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:26.466 [2024-07-23 04:13:35.059440] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:26.466 [2024-07-23 04:13:35.059488] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:26.466 [2024-07-23 04:13:35.059503] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041d80 name raid_bdev1, state offline 00:18:26.466 0 00:18:26.466 04:13:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2660547 00:18:26.466 04:13:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 2660547 ']' 00:18:26.466 04:13:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 2660547 00:18:26.466 04:13:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:18:26.466 04:13:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:26.466 04:13:35 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2660547 00:18:26.466 04:13:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:26.466 04:13:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:26.466 04:13:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2660547' 00:18:26.466 killing process with pid 2660547 00:18:26.466 04:13:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 2660547 00:18:26.466 [2024-07-23 04:13:35.128347] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:26.466 04:13:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 2660547 00:18:26.725 [2024-07-23 04:13:35.356337] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:28.704 04:13:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.dII1IW53Y9 00:18:28.704 04:13:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:18:28.704 04:13:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:18:28.704 04:13:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.43 00:18:28.704 04:13:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:18:28.704 04:13:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:28.704 04:13:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:28.704 04:13:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.43 != \0\.\0\0 ]] 00:18:28.704 00:18:28.704 real 0m10.413s 00:18:28.704 user 0m15.725s 00:18:28.704 sys 0m1.520s 00:18:28.704 04:13:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:28.704 04:13:37 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:28.704 ************************************ 00:18:28.704 END TEST raid_write_error_test 00:18:28.704 ************************************ 00:18:28.704 04:13:37 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:28.704 04:13:37 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:18:28.704 04:13:37 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:18:28.704 04:13:37 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:28.704 04:13:37 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:28.704 04:13:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:28.704 ************************************ 00:18:28.704 START TEST raid_state_function_test 00:18:28.704 ************************************ 00:18:28.704 04:13:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 3 false 00:18:28.704 04:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:18:28.704 04:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:18:28.704 04:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:18:28.704 04:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:18:28.704 04:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:18:28.704 04:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:28.704 04:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:18:28.704 04:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:28.704 04:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:28.704 
04:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:18:28.704 04:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:28.704 04:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:28.704 04:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:18:28.704 04:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:28.704 04:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:28.704 04:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:28.704 04:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:18:28.704 04:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:18:28.704 04:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:18:28.704 04:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:18:28.704 04:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:18:28.704 04:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:18:28.704 04:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:18:28.704 04:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:18:28.704 04:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:18:28.704 04:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:18:28.704 04:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2662321 00:18:28.704 04:13:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2662321' 00:18:28.704 Process raid pid: 2662321 00:18:28.704 04:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2662321 /var/tmp/spdk-raid.sock 00:18:28.704 04:13:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 2662321 ']' 00:18:28.704 04:13:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:28.704 04:13:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:28.704 04:13:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:28.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:28.704 04:13:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:28.704 04:13:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.704 04:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:28.704 [2024-07-23 04:13:37.324225] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:18:28.704 [2024-07-23 04:13:37.324349] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:28.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:28.704 EAL: Requested device 0000:3d:01.0 cannot be used 00:18:28.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:28.704 EAL: Requested device 0000:3d:01.1 cannot be used 00:18:28.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:28.704 EAL: Requested device 0000:3d:01.2 cannot be used 00:18:28.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:28.704 EAL: Requested device 0000:3d:01.3 cannot be used 00:18:28.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:28.704 EAL: Requested device 0000:3d:01.4 cannot be used 00:18:28.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:28.704 EAL: Requested device 0000:3d:01.5 cannot be used 00:18:28.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:28.704 EAL: Requested device 0000:3d:01.6 cannot be used 00:18:28.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:28.704 EAL: Requested device 0000:3d:01.7 cannot be used 00:18:28.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:28.704 EAL: Requested device 0000:3d:02.0 cannot be used 00:18:28.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:28.704 EAL: Requested device 0000:3d:02.1 cannot be used 00:18:28.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:28.704 EAL: Requested device 0000:3d:02.2 cannot be used 00:18:28.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:28.704 EAL: Requested device 0000:3d:02.3 cannot be used 00:18:28.704 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:28.704 EAL: Requested device 0000:3d:02.4 cannot be used 00:18:28.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:28.704 EAL: Requested device 0000:3d:02.5 cannot be used 00:18:28.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:28.704 EAL: Requested device 0000:3d:02.6 cannot be used 00:18:28.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:28.704 EAL: Requested device 0000:3d:02.7 cannot be used 00:18:28.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:28.704 EAL: Requested device 0000:3f:01.0 cannot be used 00:18:28.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:28.704 EAL: Requested device 0000:3f:01.1 cannot be used 00:18:28.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:28.704 EAL: Requested device 0000:3f:01.2 cannot be used 00:18:28.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:28.704 EAL: Requested device 0000:3f:01.3 cannot be used 00:18:28.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:28.704 EAL: Requested device 0000:3f:01.4 cannot be used 00:18:28.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:28.704 EAL: Requested device 0000:3f:01.5 cannot be used 00:18:28.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:28.704 EAL: Requested device 0000:3f:01.6 cannot be used 00:18:28.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:28.704 EAL: Requested device 0000:3f:01.7 cannot be used 00:18:28.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:28.704 EAL: Requested device 0000:3f:02.0 cannot be used 00:18:28.705 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:28.705 EAL: Requested device 0000:3f:02.1 cannot be used 00:18:28.705 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:28.705 EAL: Requested device 0000:3f:02.2 cannot be used 00:18:28.705 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:28.705 EAL: Requested device 0000:3f:02.3 cannot be used 00:18:28.705 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:28.705 EAL: Requested device 0000:3f:02.4 cannot be used 00:18:28.705 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:28.705 EAL: Requested device 0000:3f:02.5 cannot be used 00:18:28.705 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:28.705 EAL: Requested device 0000:3f:02.6 cannot be used 00:18:28.705 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:28.705 EAL: Requested device 0000:3f:02.7 cannot be used 00:18:28.964 [2024-07-23 04:13:37.553969] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.224 [2024-07-23 04:13:37.842695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.483 [2024-07-23 04:13:38.198925] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:29.483 [2024-07-23 04:13:38.198961] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:29.742 04:13:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:29.742 04:13:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:18:29.742 04:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:30.001 [2024-07-23 04:13:38.668654] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:30.001 [2024-07-23 04:13:38.668711] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't 
exist now 00:18:30.001 [2024-07-23 04:13:38.668727] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:30.001 [2024-07-23 04:13:38.668743] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:30.001 [2024-07-23 04:13:38.668754] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:30.001 [2024-07-23 04:13:38.668770] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:30.001 04:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:30.001 04:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:30.001 04:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:30.001 04:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:30.001 04:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:30.001 04:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:30.001 04:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:30.001 04:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:30.001 04:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:30.001 04:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:30.001 04:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:30.001 04:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:18:30.261 04:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:30.261 "name": "Existed_Raid", 00:18:30.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.261 "strip_size_kb": 64, 00:18:30.261 "state": "configuring", 00:18:30.261 "raid_level": "concat", 00:18:30.261 "superblock": false, 00:18:30.261 "num_base_bdevs": 3, 00:18:30.261 "num_base_bdevs_discovered": 0, 00:18:30.261 "num_base_bdevs_operational": 3, 00:18:30.261 "base_bdevs_list": [ 00:18:30.261 { 00:18:30.261 "name": "BaseBdev1", 00:18:30.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.261 "is_configured": false, 00:18:30.261 "data_offset": 0, 00:18:30.261 "data_size": 0 00:18:30.261 }, 00:18:30.261 { 00:18:30.261 "name": "BaseBdev2", 00:18:30.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.261 "is_configured": false, 00:18:30.261 "data_offset": 0, 00:18:30.261 "data_size": 0 00:18:30.261 }, 00:18:30.261 { 00:18:30.261 "name": "BaseBdev3", 00:18:30.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.261 "is_configured": false, 00:18:30.261 "data_offset": 0, 00:18:30.261 "data_size": 0 00:18:30.261 } 00:18:30.261 ] 00:18:30.261 }' 00:18:30.261 04:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:30.261 04:13:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.829 04:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:31.088 [2024-07-23 04:13:39.631073] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:31.088 [2024-07-23 04:13:39.631111] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:18:31.088 04:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 
-- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:31.088 [2024-07-23 04:13:39.859752] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:31.088 [2024-07-23 04:13:39.859795] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:31.088 [2024-07-23 04:13:39.859809] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:31.088 [2024-07-23 04:13:39.859828] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:31.088 [2024-07-23 04:13:39.859839] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:31.088 [2024-07-23 04:13:39.859854] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:31.347 04:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:31.347 [2024-07-23 04:13:40.074894] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:31.347 BaseBdev1 00:18:31.347 04:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:18:31.347 04:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:31.347 04:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:31.347 04:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:18:31.347 04:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:31.347 04:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 
00:18:31.347 04:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:31.608 04:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:31.868 [ 00:18:31.868 { 00:18:31.868 "name": "BaseBdev1", 00:18:31.868 "aliases": [ 00:18:31.868 "63280d42-40ec-4e7f-9c16-553ef986b94c" 00:18:31.868 ], 00:18:31.868 "product_name": "Malloc disk", 00:18:31.868 "block_size": 512, 00:18:31.868 "num_blocks": 65536, 00:18:31.868 "uuid": "63280d42-40ec-4e7f-9c16-553ef986b94c", 00:18:31.868 "assigned_rate_limits": { 00:18:31.868 "rw_ios_per_sec": 0, 00:18:31.868 "rw_mbytes_per_sec": 0, 00:18:31.868 "r_mbytes_per_sec": 0, 00:18:31.868 "w_mbytes_per_sec": 0 00:18:31.868 }, 00:18:31.868 "claimed": true, 00:18:31.868 "claim_type": "exclusive_write", 00:18:31.868 "zoned": false, 00:18:31.868 "supported_io_types": { 00:18:31.868 "read": true, 00:18:31.868 "write": true, 00:18:31.868 "unmap": true, 00:18:31.868 "flush": true, 00:18:31.868 "reset": true, 00:18:31.868 "nvme_admin": false, 00:18:31.868 "nvme_io": false, 00:18:31.868 "nvme_io_md": false, 00:18:31.868 "write_zeroes": true, 00:18:31.868 "zcopy": true, 00:18:31.868 "get_zone_info": false, 00:18:31.868 "zone_management": false, 00:18:31.868 "zone_append": false, 00:18:31.868 "compare": false, 00:18:31.868 "compare_and_write": false, 00:18:31.868 "abort": true, 00:18:31.868 "seek_hole": false, 00:18:31.868 "seek_data": false, 00:18:31.868 "copy": true, 00:18:31.868 "nvme_iov_md": false 00:18:31.868 }, 00:18:31.868 "memory_domains": [ 00:18:31.868 { 00:18:31.868 "dma_device_id": "system", 00:18:31.868 "dma_device_type": 1 00:18:31.868 }, 00:18:31.868 { 00:18:31.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:31.868 "dma_device_type": 2 
00:18:31.868 } 00:18:31.868 ], 00:18:31.868 "driver_specific": {} 00:18:31.868 } 00:18:31.868 ] 00:18:31.868 04:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:18:31.868 04:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:31.868 04:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:31.868 04:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:31.868 04:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:31.868 04:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:31.868 04:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:31.868 04:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:31.868 04:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:31.868 04:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:31.868 04:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:31.868 04:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:31.868 04:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:32.127 04:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:32.127 "name": "Existed_Raid", 00:18:32.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.127 "strip_size_kb": 64, 00:18:32.127 "state": "configuring", 00:18:32.127 "raid_level": 
"concat", 00:18:32.127 "superblock": false, 00:18:32.127 "num_base_bdevs": 3, 00:18:32.127 "num_base_bdevs_discovered": 1, 00:18:32.127 "num_base_bdevs_operational": 3, 00:18:32.127 "base_bdevs_list": [ 00:18:32.127 { 00:18:32.127 "name": "BaseBdev1", 00:18:32.127 "uuid": "63280d42-40ec-4e7f-9c16-553ef986b94c", 00:18:32.127 "is_configured": true, 00:18:32.127 "data_offset": 0, 00:18:32.127 "data_size": 65536 00:18:32.127 }, 00:18:32.127 { 00:18:32.127 "name": "BaseBdev2", 00:18:32.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.127 "is_configured": false, 00:18:32.127 "data_offset": 0, 00:18:32.127 "data_size": 0 00:18:32.127 }, 00:18:32.127 { 00:18:32.127 "name": "BaseBdev3", 00:18:32.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.127 "is_configured": false, 00:18:32.127 "data_offset": 0, 00:18:32.127 "data_size": 0 00:18:32.127 } 00:18:32.127 ] 00:18:32.127 }' 00:18:32.127 04:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:32.127 04:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.694 04:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:32.954 [2024-07-23 04:13:41.478695] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:32.954 [2024-07-23 04:13:41.478750] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:18:32.954 04:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:32.954 [2024-07-23 04:13:41.707404] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:32.954 [2024-07-23 
04:13:41.709696] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:32.954 [2024-07-23 04:13:41.709737] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:32.954 [2024-07-23 04:13:41.709751] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:32.954 [2024-07-23 04:13:41.709768] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:32.954 04:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:18:32.954 04:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:32.954 04:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:32.954 04:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:32.954 04:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:32.954 04:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:32.954 04:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:32.954 04:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:32.954 04:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:32.954 04:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:32.954 04:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:32.954 04:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:32.954 04:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:32.954 04:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:33.213 04:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:33.213 "name": "Existed_Raid", 00:18:33.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.213 "strip_size_kb": 64, 00:18:33.213 "state": "configuring", 00:18:33.213 "raid_level": "concat", 00:18:33.213 "superblock": false, 00:18:33.213 "num_base_bdevs": 3, 00:18:33.213 "num_base_bdevs_discovered": 1, 00:18:33.213 "num_base_bdevs_operational": 3, 00:18:33.213 "base_bdevs_list": [ 00:18:33.213 { 00:18:33.213 "name": "BaseBdev1", 00:18:33.213 "uuid": "63280d42-40ec-4e7f-9c16-553ef986b94c", 00:18:33.213 "is_configured": true, 00:18:33.213 "data_offset": 0, 00:18:33.213 "data_size": 65536 00:18:33.213 }, 00:18:33.213 { 00:18:33.213 "name": "BaseBdev2", 00:18:33.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.213 "is_configured": false, 00:18:33.213 "data_offset": 0, 00:18:33.213 "data_size": 0 00:18:33.213 }, 00:18:33.213 { 00:18:33.213 "name": "BaseBdev3", 00:18:33.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.213 "is_configured": false, 00:18:33.213 "data_offset": 0, 00:18:33.213 "data_size": 0 00:18:33.213 } 00:18:33.213 ] 00:18:33.213 }' 00:18:33.213 04:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:33.213 04:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.780 04:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:34.039 [2024-07-23 04:13:42.797235] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:18:34.040 BaseBdev2 00:18:34.040 04:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:18:34.040 04:13:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:18:34.040 04:13:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:34.040 04:13:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:18:34.040 04:13:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:34.040 04:13:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:34.040 04:13:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:34.607 04:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:34.867 [ 00:18:34.867 { 00:18:34.867 "name": "BaseBdev2", 00:18:34.867 "aliases": [ 00:18:34.867 "cf507ed1-4cbd-4323-82de-6ea924b06b7c" 00:18:34.867 ], 00:18:34.867 "product_name": "Malloc disk", 00:18:34.867 "block_size": 512, 00:18:34.867 "num_blocks": 65536, 00:18:34.867 "uuid": "cf507ed1-4cbd-4323-82de-6ea924b06b7c", 00:18:34.867 "assigned_rate_limits": { 00:18:34.867 "rw_ios_per_sec": 0, 00:18:34.867 "rw_mbytes_per_sec": 0, 00:18:34.867 "r_mbytes_per_sec": 0, 00:18:34.867 "w_mbytes_per_sec": 0 00:18:34.867 }, 00:18:34.867 "claimed": true, 00:18:34.867 "claim_type": "exclusive_write", 00:18:34.867 "zoned": false, 00:18:34.867 "supported_io_types": { 00:18:34.867 "read": true, 00:18:34.867 "write": true, 00:18:34.867 "unmap": true, 00:18:34.867 "flush": true, 00:18:34.867 "reset": true, 00:18:34.867 "nvme_admin": false, 00:18:34.867 "nvme_io": false, 
00:18:34.867 "nvme_io_md": false, 00:18:34.867 "write_zeroes": true, 00:18:34.867 "zcopy": true, 00:18:34.867 "get_zone_info": false, 00:18:34.867 "zone_management": false, 00:18:34.867 "zone_append": false, 00:18:34.867 "compare": false, 00:18:34.867 "compare_and_write": false, 00:18:34.867 "abort": true, 00:18:34.867 "seek_hole": false, 00:18:34.867 "seek_data": false, 00:18:34.867 "copy": true, 00:18:34.867 "nvme_iov_md": false 00:18:34.867 }, 00:18:34.867 "memory_domains": [ 00:18:34.867 { 00:18:34.867 "dma_device_id": "system", 00:18:34.867 "dma_device_type": 1 00:18:34.867 }, 00:18:34.867 { 00:18:34.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:34.867 "dma_device_type": 2 00:18:34.867 } 00:18:34.867 ], 00:18:34.867 "driver_specific": {} 00:18:34.867 } 00:18:34.867 ] 00:18:34.867 04:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:18:34.867 04:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:34.867 04:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:34.867 04:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:34.867 04:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:34.867 04:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:34.867 04:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:34.867 04:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:34.867 04:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:34.867 04:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:34.867 04:13:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:34.867 04:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:34.867 04:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:34.867 04:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:34.867 04:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:35.126 04:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:35.126 "name": "Existed_Raid", 00:18:35.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.126 "strip_size_kb": 64, 00:18:35.126 "state": "configuring", 00:18:35.126 "raid_level": "concat", 00:18:35.126 "superblock": false, 00:18:35.126 "num_base_bdevs": 3, 00:18:35.126 "num_base_bdevs_discovered": 2, 00:18:35.126 "num_base_bdevs_operational": 3, 00:18:35.126 "base_bdevs_list": [ 00:18:35.126 { 00:18:35.126 "name": "BaseBdev1", 00:18:35.126 "uuid": "63280d42-40ec-4e7f-9c16-553ef986b94c", 00:18:35.126 "is_configured": true, 00:18:35.126 "data_offset": 0, 00:18:35.126 "data_size": 65536 00:18:35.126 }, 00:18:35.126 { 00:18:35.126 "name": "BaseBdev2", 00:18:35.126 "uuid": "cf507ed1-4cbd-4323-82de-6ea924b06b7c", 00:18:35.126 "is_configured": true, 00:18:35.126 "data_offset": 0, 00:18:35.126 "data_size": 65536 00:18:35.126 }, 00:18:35.126 { 00:18:35.126 "name": "BaseBdev3", 00:18:35.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.126 "is_configured": false, 00:18:35.126 "data_offset": 0, 00:18:35.126 "data_size": 0 00:18:35.126 } 00:18:35.126 ] 00:18:35.126 }' 00:18:35.126 04:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:35.126 04:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:18:35.693 04:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:35.953 [2024-07-23 04:13:44.580921] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:35.953 [2024-07-23 04:13:44.580966] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:18:35.953 [2024-07-23 04:13:44.580983] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:35.953 [2024-07-23 04:13:44.581312] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:18:35.953 [2024-07-23 04:13:44.581537] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:18:35.953 [2024-07-23 04:13:44.581552] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:18:35.953 [2024-07-23 04:13:44.581873] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:35.953 BaseBdev3 00:18:35.953 04:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:18:35.953 04:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:18:35.953 04:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:35.953 04:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:18:35.953 04:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:35.953 04:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:35.953 04:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:36.212 04:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:36.471 [ 00:18:36.471 { 00:18:36.471 "name": "BaseBdev3", 00:18:36.471 "aliases": [ 00:18:36.471 "620f79db-ad5e-4d29-b725-7d6c774bbf72" 00:18:36.471 ], 00:18:36.471 "product_name": "Malloc disk", 00:18:36.471 "block_size": 512, 00:18:36.471 "num_blocks": 65536, 00:18:36.471 "uuid": "620f79db-ad5e-4d29-b725-7d6c774bbf72", 00:18:36.471 "assigned_rate_limits": { 00:18:36.471 "rw_ios_per_sec": 0, 00:18:36.471 "rw_mbytes_per_sec": 0, 00:18:36.471 "r_mbytes_per_sec": 0, 00:18:36.471 "w_mbytes_per_sec": 0 00:18:36.471 }, 00:18:36.471 "claimed": true, 00:18:36.471 "claim_type": "exclusive_write", 00:18:36.471 "zoned": false, 00:18:36.471 "supported_io_types": { 00:18:36.471 "read": true, 00:18:36.471 "write": true, 00:18:36.471 "unmap": true, 00:18:36.471 "flush": true, 00:18:36.471 "reset": true, 00:18:36.471 "nvme_admin": false, 00:18:36.471 "nvme_io": false, 00:18:36.471 "nvme_io_md": false, 00:18:36.471 "write_zeroes": true, 00:18:36.471 "zcopy": true, 00:18:36.471 "get_zone_info": false, 00:18:36.471 "zone_management": false, 00:18:36.471 "zone_append": false, 00:18:36.471 "compare": false, 00:18:36.471 "compare_and_write": false, 00:18:36.471 "abort": true, 00:18:36.471 "seek_hole": false, 00:18:36.471 "seek_data": false, 00:18:36.471 "copy": true, 00:18:36.471 "nvme_iov_md": false 00:18:36.471 }, 00:18:36.471 "memory_domains": [ 00:18:36.471 { 00:18:36.471 "dma_device_id": "system", 00:18:36.471 "dma_device_type": 1 00:18:36.471 }, 00:18:36.471 { 00:18:36.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.471 "dma_device_type": 2 00:18:36.471 } 00:18:36.471 ], 00:18:36.471 "driver_specific": {} 00:18:36.471 } 00:18:36.471 ] 00:18:36.471 04:13:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@905 -- # return 0 00:18:36.471 04:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:36.471 04:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:36.471 04:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:18:36.471 04:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:36.471 04:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:36.471 04:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:36.471 04:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:36.472 04:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:36.472 04:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:36.472 04:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:36.472 04:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:36.472 04:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:36.472 04:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:36.472 04:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:36.472 04:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:36.472 "name": "Existed_Raid", 00:18:36.472 "uuid": "a12cba70-bd86-4fe8-944a-2d47e59169fe", 00:18:36.472 "strip_size_kb": 64, 00:18:36.472 "state": 
"online", 00:18:36.472 "raid_level": "concat", 00:18:36.472 "superblock": false, 00:18:36.472 "num_base_bdevs": 3, 00:18:36.472 "num_base_bdevs_discovered": 3, 00:18:36.472 "num_base_bdevs_operational": 3, 00:18:36.472 "base_bdevs_list": [ 00:18:36.472 { 00:18:36.472 "name": "BaseBdev1", 00:18:36.472 "uuid": "63280d42-40ec-4e7f-9c16-553ef986b94c", 00:18:36.472 "is_configured": true, 00:18:36.472 "data_offset": 0, 00:18:36.472 "data_size": 65536 00:18:36.472 }, 00:18:36.472 { 00:18:36.472 "name": "BaseBdev2", 00:18:36.472 "uuid": "cf507ed1-4cbd-4323-82de-6ea924b06b7c", 00:18:36.472 "is_configured": true, 00:18:36.472 "data_offset": 0, 00:18:36.472 "data_size": 65536 00:18:36.472 }, 00:18:36.472 { 00:18:36.472 "name": "BaseBdev3", 00:18:36.472 "uuid": "620f79db-ad5e-4d29-b725-7d6c774bbf72", 00:18:36.472 "is_configured": true, 00:18:36.472 "data_offset": 0, 00:18:36.472 "data_size": 65536 00:18:36.472 } 00:18:36.472 ] 00:18:36.472 }' 00:18:36.472 04:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:36.472 04:13:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.040 04:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:18:37.040 04:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:37.040 04:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:37.040 04:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:37.040 04:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:37.040 04:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:37.040 04:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b Existed_Raid 00:18:37.299 04:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:37.299 [2024-07-23 04:13:46.033276] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:37.299 04:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:37.299 "name": "Existed_Raid", 00:18:37.299 "aliases": [ 00:18:37.299 "a12cba70-bd86-4fe8-944a-2d47e59169fe" 00:18:37.299 ], 00:18:37.299 "product_name": "Raid Volume", 00:18:37.299 "block_size": 512, 00:18:37.299 "num_blocks": 196608, 00:18:37.299 "uuid": "a12cba70-bd86-4fe8-944a-2d47e59169fe", 00:18:37.299 "assigned_rate_limits": { 00:18:37.299 "rw_ios_per_sec": 0, 00:18:37.299 "rw_mbytes_per_sec": 0, 00:18:37.299 "r_mbytes_per_sec": 0, 00:18:37.299 "w_mbytes_per_sec": 0 00:18:37.299 }, 00:18:37.299 "claimed": false, 00:18:37.299 "zoned": false, 00:18:37.299 "supported_io_types": { 00:18:37.299 "read": true, 00:18:37.299 "write": true, 00:18:37.299 "unmap": true, 00:18:37.299 "flush": true, 00:18:37.299 "reset": true, 00:18:37.299 "nvme_admin": false, 00:18:37.299 "nvme_io": false, 00:18:37.299 "nvme_io_md": false, 00:18:37.299 "write_zeroes": true, 00:18:37.299 "zcopy": false, 00:18:37.299 "get_zone_info": false, 00:18:37.299 "zone_management": false, 00:18:37.299 "zone_append": false, 00:18:37.299 "compare": false, 00:18:37.299 "compare_and_write": false, 00:18:37.299 "abort": false, 00:18:37.299 "seek_hole": false, 00:18:37.299 "seek_data": false, 00:18:37.299 "copy": false, 00:18:37.299 "nvme_iov_md": false 00:18:37.299 }, 00:18:37.299 "memory_domains": [ 00:18:37.299 { 00:18:37.299 "dma_device_id": "system", 00:18:37.299 "dma_device_type": 1 00:18:37.299 }, 00:18:37.299 { 00:18:37.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:37.299 "dma_device_type": 2 00:18:37.299 }, 00:18:37.299 { 00:18:37.299 "dma_device_id": "system", 00:18:37.299 "dma_device_type": 1 00:18:37.299 }, 00:18:37.299 { 00:18:37.299 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:37.299 "dma_device_type": 2 00:18:37.299 }, 00:18:37.299 { 00:18:37.299 "dma_device_id": "system", 00:18:37.299 "dma_device_type": 1 00:18:37.299 }, 00:18:37.299 { 00:18:37.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:37.299 "dma_device_type": 2 00:18:37.299 } 00:18:37.299 ], 00:18:37.299 "driver_specific": { 00:18:37.299 "raid": { 00:18:37.299 "uuid": "a12cba70-bd86-4fe8-944a-2d47e59169fe", 00:18:37.299 "strip_size_kb": 64, 00:18:37.299 "state": "online", 00:18:37.299 "raid_level": "concat", 00:18:37.299 "superblock": false, 00:18:37.299 "num_base_bdevs": 3, 00:18:37.299 "num_base_bdevs_discovered": 3, 00:18:37.299 "num_base_bdevs_operational": 3, 00:18:37.299 "base_bdevs_list": [ 00:18:37.299 { 00:18:37.299 "name": "BaseBdev1", 00:18:37.299 "uuid": "63280d42-40ec-4e7f-9c16-553ef986b94c", 00:18:37.299 "is_configured": true, 00:18:37.299 "data_offset": 0, 00:18:37.299 "data_size": 65536 00:18:37.299 }, 00:18:37.299 { 00:18:37.299 "name": "BaseBdev2", 00:18:37.299 "uuid": "cf507ed1-4cbd-4323-82de-6ea924b06b7c", 00:18:37.299 "is_configured": true, 00:18:37.299 "data_offset": 0, 00:18:37.299 "data_size": 65536 00:18:37.299 }, 00:18:37.299 { 00:18:37.299 "name": "BaseBdev3", 00:18:37.299 "uuid": "620f79db-ad5e-4d29-b725-7d6c774bbf72", 00:18:37.299 "is_configured": true, 00:18:37.299 "data_offset": 0, 00:18:37.299 "data_size": 65536 00:18:37.299 } 00:18:37.299 ] 00:18:37.299 } 00:18:37.299 } 00:18:37.299 }' 00:18:37.299 04:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:37.559 04:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:18:37.559 BaseBdev2 00:18:37.559 BaseBdev3' 00:18:37.559 04:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:37.559 04:13:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:18:37.559 04:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:37.559 04:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:37.559 "name": "BaseBdev1", 00:18:37.559 "aliases": [ 00:18:37.559 "63280d42-40ec-4e7f-9c16-553ef986b94c" 00:18:37.559 ], 00:18:37.559 "product_name": "Malloc disk", 00:18:37.559 "block_size": 512, 00:18:37.559 "num_blocks": 65536, 00:18:37.559 "uuid": "63280d42-40ec-4e7f-9c16-553ef986b94c", 00:18:37.559 "assigned_rate_limits": { 00:18:37.559 "rw_ios_per_sec": 0, 00:18:37.559 "rw_mbytes_per_sec": 0, 00:18:37.559 "r_mbytes_per_sec": 0, 00:18:37.559 "w_mbytes_per_sec": 0 00:18:37.559 }, 00:18:37.559 "claimed": true, 00:18:37.559 "claim_type": "exclusive_write", 00:18:37.559 "zoned": false, 00:18:37.559 "supported_io_types": { 00:18:37.559 "read": true, 00:18:37.559 "write": true, 00:18:37.559 "unmap": true, 00:18:37.559 "flush": true, 00:18:37.559 "reset": true, 00:18:37.559 "nvme_admin": false, 00:18:37.559 "nvme_io": false, 00:18:37.559 "nvme_io_md": false, 00:18:37.559 "write_zeroes": true, 00:18:37.559 "zcopy": true, 00:18:37.559 "get_zone_info": false, 00:18:37.559 "zone_management": false, 00:18:37.559 "zone_append": false, 00:18:37.559 "compare": false, 00:18:37.559 "compare_and_write": false, 00:18:37.559 "abort": true, 00:18:37.559 "seek_hole": false, 00:18:37.559 "seek_data": false, 00:18:37.559 "copy": true, 00:18:37.559 "nvme_iov_md": false 00:18:37.559 }, 00:18:37.559 "memory_domains": [ 00:18:37.559 { 00:18:37.559 "dma_device_id": "system", 00:18:37.559 "dma_device_type": 1 00:18:37.559 }, 00:18:37.559 { 00:18:37.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:37.559 "dma_device_type": 2 00:18:37.559 } 00:18:37.559 ], 00:18:37.559 "driver_specific": {} 00:18:37.559 }' 00:18:37.559 04:13:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:37.818 04:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:37.818 04:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:37.818 04:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:37.818 04:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:37.818 04:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:37.818 04:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:37.818 04:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:37.818 04:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:37.818 04:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:38.077 04:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:38.077 04:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:38.077 04:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:38.077 04:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:38.077 04:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:38.336 04:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:38.336 "name": "BaseBdev2", 00:18:38.336 "aliases": [ 00:18:38.336 "cf507ed1-4cbd-4323-82de-6ea924b06b7c" 00:18:38.336 ], 00:18:38.336 "product_name": "Malloc disk", 00:18:38.336 "block_size": 512, 00:18:38.336 "num_blocks": 65536, 00:18:38.336 "uuid": 
"cf507ed1-4cbd-4323-82de-6ea924b06b7c", 00:18:38.336 "assigned_rate_limits": { 00:18:38.336 "rw_ios_per_sec": 0, 00:18:38.336 "rw_mbytes_per_sec": 0, 00:18:38.336 "r_mbytes_per_sec": 0, 00:18:38.336 "w_mbytes_per_sec": 0 00:18:38.336 }, 00:18:38.336 "claimed": true, 00:18:38.336 "claim_type": "exclusive_write", 00:18:38.336 "zoned": false, 00:18:38.336 "supported_io_types": { 00:18:38.336 "read": true, 00:18:38.336 "write": true, 00:18:38.336 "unmap": true, 00:18:38.336 "flush": true, 00:18:38.336 "reset": true, 00:18:38.336 "nvme_admin": false, 00:18:38.336 "nvme_io": false, 00:18:38.336 "nvme_io_md": false, 00:18:38.336 "write_zeroes": true, 00:18:38.336 "zcopy": true, 00:18:38.336 "get_zone_info": false, 00:18:38.336 "zone_management": false, 00:18:38.336 "zone_append": false, 00:18:38.336 "compare": false, 00:18:38.336 "compare_and_write": false, 00:18:38.336 "abort": true, 00:18:38.336 "seek_hole": false, 00:18:38.336 "seek_data": false, 00:18:38.336 "copy": true, 00:18:38.336 "nvme_iov_md": false 00:18:38.336 }, 00:18:38.336 "memory_domains": [ 00:18:38.336 { 00:18:38.336 "dma_device_id": "system", 00:18:38.336 "dma_device_type": 1 00:18:38.336 }, 00:18:38.336 { 00:18:38.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:38.336 "dma_device_type": 2 00:18:38.336 } 00:18:38.336 ], 00:18:38.336 "driver_specific": {} 00:18:38.336 }' 00:18:38.336 04:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:38.336 04:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:38.336 04:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:38.336 04:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:38.336 04:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:38.336 04:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:38.336 04:13:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:38.336 04:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:38.595 04:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:38.595 04:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:38.595 04:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:38.595 04:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:38.595 04:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:38.595 04:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:38.595 04:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:38.854 04:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:38.854 "name": "BaseBdev3", 00:18:38.854 "aliases": [ 00:18:38.854 "620f79db-ad5e-4d29-b725-7d6c774bbf72" 00:18:38.854 ], 00:18:38.854 "product_name": "Malloc disk", 00:18:38.854 "block_size": 512, 00:18:38.854 "num_blocks": 65536, 00:18:38.854 "uuid": "620f79db-ad5e-4d29-b725-7d6c774bbf72", 00:18:38.854 "assigned_rate_limits": { 00:18:38.854 "rw_ios_per_sec": 0, 00:18:38.854 "rw_mbytes_per_sec": 0, 00:18:38.854 "r_mbytes_per_sec": 0, 00:18:38.854 "w_mbytes_per_sec": 0 00:18:38.854 }, 00:18:38.854 "claimed": true, 00:18:38.854 "claim_type": "exclusive_write", 00:18:38.854 "zoned": false, 00:18:38.854 "supported_io_types": { 00:18:38.854 "read": true, 00:18:38.854 "write": true, 00:18:38.854 "unmap": true, 00:18:38.854 "flush": true, 00:18:38.854 "reset": true, 00:18:38.854 "nvme_admin": false, 00:18:38.854 "nvme_io": false, 00:18:38.854 "nvme_io_md": false, 
00:18:38.854 "write_zeroes": true, 00:18:38.854 "zcopy": true, 00:18:38.854 "get_zone_info": false, 00:18:38.854 "zone_management": false, 00:18:38.854 "zone_append": false, 00:18:38.854 "compare": false, 00:18:38.854 "compare_and_write": false, 00:18:38.854 "abort": true, 00:18:38.854 "seek_hole": false, 00:18:38.854 "seek_data": false, 00:18:38.854 "copy": true, 00:18:38.854 "nvme_iov_md": false 00:18:38.854 }, 00:18:38.854 "memory_domains": [ 00:18:38.854 { 00:18:38.854 "dma_device_id": "system", 00:18:38.854 "dma_device_type": 1 00:18:38.854 }, 00:18:38.854 { 00:18:38.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:38.854 "dma_device_type": 2 00:18:38.854 } 00:18:38.854 ], 00:18:38.854 "driver_specific": {} 00:18:38.854 }' 00:18:38.854 04:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:38.854 04:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:38.854 04:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:38.854 04:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:38.854 04:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:38.854 04:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:38.854 04:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:39.113 04:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:39.113 04:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:39.113 04:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:39.113 04:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:39.113 04:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:39.113 04:13:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:39.373 [2024-07-23 04:13:47.930323] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:39.373 [2024-07-23 04:13:47.930360] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:39.373 [2024-07-23 04:13:47.930422] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:39.373 04:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:18:39.373 04:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:18:39.373 04:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:39.373 04:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:39.373 04:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:18:39.373 04:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:18:39.373 04:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:39.373 04:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:18:39.373 04:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:39.373 04:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:39.373 04:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:39.373 04:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:39.373 04:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:18:39.373 04:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:39.373 04:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:39.373 04:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:39.373 04:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:39.632 04:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:39.632 "name": "Existed_Raid", 00:18:39.632 "uuid": "a12cba70-bd86-4fe8-944a-2d47e59169fe", 00:18:39.632 "strip_size_kb": 64, 00:18:39.632 "state": "offline", 00:18:39.632 "raid_level": "concat", 00:18:39.632 "superblock": false, 00:18:39.632 "num_base_bdevs": 3, 00:18:39.632 "num_base_bdevs_discovered": 2, 00:18:39.632 "num_base_bdevs_operational": 2, 00:18:39.632 "base_bdevs_list": [ 00:18:39.632 { 00:18:39.632 "name": null, 00:18:39.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.632 "is_configured": false, 00:18:39.632 "data_offset": 0, 00:18:39.632 "data_size": 65536 00:18:39.632 }, 00:18:39.632 { 00:18:39.632 "name": "BaseBdev2", 00:18:39.632 "uuid": "cf507ed1-4cbd-4323-82de-6ea924b06b7c", 00:18:39.632 "is_configured": true, 00:18:39.632 "data_offset": 0, 00:18:39.632 "data_size": 65536 00:18:39.632 }, 00:18:39.632 { 00:18:39.632 "name": "BaseBdev3", 00:18:39.632 "uuid": "620f79db-ad5e-4d29-b725-7d6c774bbf72", 00:18:39.632 "is_configured": true, 00:18:39.632 "data_offset": 0, 00:18:39.632 "data_size": 65536 00:18:39.632 } 00:18:39.632 ] 00:18:39.632 }' 00:18:39.632 04:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:39.632 04:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.201 04:13:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:18:40.201 04:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:40.201 04:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.201 04:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:40.460 04:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:40.460 04:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:40.460 04:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:40.460 [2024-07-23 04:13:49.226389] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:40.719 04:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:40.719 04:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:40.719 04:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.719 04:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:40.981 04:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:40.981 04:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:40.981 04:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete 
BaseBdev3 00:18:41.240 [2024-07-23 04:13:49.817394] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:41.240 [2024-07-23 04:13:49.817453] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:18:41.240 04:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:41.240 04:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:41.240 04:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:41.240 04:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:18:41.499 04:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:18:41.499 04:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:18:41.499 04:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:18:41.499 04:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:18:41.499 04:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:41.499 04:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:41.758 BaseBdev2 00:18:41.758 04:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:18:41.758 04:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:18:41.758 04:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:41.758 04:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local 
i 00:18:41.758 04:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:41.758 04:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:41.758 04:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:42.017 04:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:42.276 [ 00:18:42.276 { 00:18:42.276 "name": "BaseBdev2", 00:18:42.276 "aliases": [ 00:18:42.276 "5b7d5b81-fd4c-480b-88e7-721f6776bdb1" 00:18:42.276 ], 00:18:42.276 "product_name": "Malloc disk", 00:18:42.276 "block_size": 512, 00:18:42.276 "num_blocks": 65536, 00:18:42.276 "uuid": "5b7d5b81-fd4c-480b-88e7-721f6776bdb1", 00:18:42.276 "assigned_rate_limits": { 00:18:42.276 "rw_ios_per_sec": 0, 00:18:42.276 "rw_mbytes_per_sec": 0, 00:18:42.276 "r_mbytes_per_sec": 0, 00:18:42.276 "w_mbytes_per_sec": 0 00:18:42.276 }, 00:18:42.276 "claimed": false, 00:18:42.276 "zoned": false, 00:18:42.276 "supported_io_types": { 00:18:42.276 "read": true, 00:18:42.276 "write": true, 00:18:42.276 "unmap": true, 00:18:42.276 "flush": true, 00:18:42.276 "reset": true, 00:18:42.276 "nvme_admin": false, 00:18:42.276 "nvme_io": false, 00:18:42.276 "nvme_io_md": false, 00:18:42.276 "write_zeroes": true, 00:18:42.276 "zcopy": true, 00:18:42.276 "get_zone_info": false, 00:18:42.276 "zone_management": false, 00:18:42.276 "zone_append": false, 00:18:42.276 "compare": false, 00:18:42.276 "compare_and_write": false, 00:18:42.276 "abort": true, 00:18:42.276 "seek_hole": false, 00:18:42.276 "seek_data": false, 00:18:42.276 "copy": true, 00:18:42.276 "nvme_iov_md": false 00:18:42.276 }, 00:18:42.276 "memory_domains": [ 00:18:42.276 { 00:18:42.276 
"dma_device_id": "system", 00:18:42.276 "dma_device_type": 1 00:18:42.276 }, 00:18:42.276 { 00:18:42.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:42.276 "dma_device_type": 2 00:18:42.276 } 00:18:42.276 ], 00:18:42.276 "driver_specific": {} 00:18:42.276 } 00:18:42.276 ] 00:18:42.276 04:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:18:42.276 04:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:42.276 04:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:42.276 04:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:42.535 BaseBdev3 00:18:42.535 04:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:18:42.535 04:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:18:42.535 04:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:42.535 04:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:18:42.535 04:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:42.535 04:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:42.535 04:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:42.795 04:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:43.054 [ 00:18:43.054 { 00:18:43.054 "name": "BaseBdev3", 
00:18:43.054 "aliases": [ 00:18:43.054 "17db0b0e-3cd4-43bb-9e78-81bd7b8da620" 00:18:43.054 ], 00:18:43.054 "product_name": "Malloc disk", 00:18:43.054 "block_size": 512, 00:18:43.054 "num_blocks": 65536, 00:18:43.054 "uuid": "17db0b0e-3cd4-43bb-9e78-81bd7b8da620", 00:18:43.054 "assigned_rate_limits": { 00:18:43.054 "rw_ios_per_sec": 0, 00:18:43.054 "rw_mbytes_per_sec": 0, 00:18:43.054 "r_mbytes_per_sec": 0, 00:18:43.054 "w_mbytes_per_sec": 0 00:18:43.054 }, 00:18:43.054 "claimed": false, 00:18:43.054 "zoned": false, 00:18:43.054 "supported_io_types": { 00:18:43.054 "read": true, 00:18:43.054 "write": true, 00:18:43.054 "unmap": true, 00:18:43.054 "flush": true, 00:18:43.054 "reset": true, 00:18:43.054 "nvme_admin": false, 00:18:43.054 "nvme_io": false, 00:18:43.054 "nvme_io_md": false, 00:18:43.054 "write_zeroes": true, 00:18:43.054 "zcopy": true, 00:18:43.054 "get_zone_info": false, 00:18:43.054 "zone_management": false, 00:18:43.054 "zone_append": false, 00:18:43.054 "compare": false, 00:18:43.055 "compare_and_write": false, 00:18:43.055 "abort": true, 00:18:43.055 "seek_hole": false, 00:18:43.055 "seek_data": false, 00:18:43.055 "copy": true, 00:18:43.055 "nvme_iov_md": false 00:18:43.055 }, 00:18:43.055 "memory_domains": [ 00:18:43.055 { 00:18:43.055 "dma_device_id": "system", 00:18:43.055 "dma_device_type": 1 00:18:43.055 }, 00:18:43.055 { 00:18:43.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:43.055 "dma_device_type": 2 00:18:43.055 } 00:18:43.055 ], 00:18:43.055 "driver_specific": {} 00:18:43.055 } 00:18:43.055 ] 00:18:43.055 04:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:18:43.055 04:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:43.055 04:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:43.055 04:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:43.055 [2024-07-23 04:13:51.817414] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:43.055 [2024-07-23 04:13:51.817461] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:43.055 [2024-07-23 04:13:51.817493] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:43.055 [2024-07-23 04:13:51.819804] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:43.055 04:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:43.055 04:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:43.055 04:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:43.055 04:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:43.055 04:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:43.055 04:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:43.055 04:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:43.055 04:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:43.055 04:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:43.055 04:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:43.314 04:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.314 04:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:43.314 04:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:43.314 "name": "Existed_Raid", 00:18:43.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.314 "strip_size_kb": 64, 00:18:43.314 "state": "configuring", 00:18:43.314 "raid_level": "concat", 00:18:43.314 "superblock": false, 00:18:43.314 "num_base_bdevs": 3, 00:18:43.314 "num_base_bdevs_discovered": 2, 00:18:43.314 "num_base_bdevs_operational": 3, 00:18:43.314 "base_bdevs_list": [ 00:18:43.314 { 00:18:43.314 "name": "BaseBdev1", 00:18:43.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.314 "is_configured": false, 00:18:43.314 "data_offset": 0, 00:18:43.314 "data_size": 0 00:18:43.314 }, 00:18:43.314 { 00:18:43.314 "name": "BaseBdev2", 00:18:43.314 "uuid": "5b7d5b81-fd4c-480b-88e7-721f6776bdb1", 00:18:43.314 "is_configured": true, 00:18:43.314 "data_offset": 0, 00:18:43.314 "data_size": 65536 00:18:43.314 }, 00:18:43.314 { 00:18:43.314 "name": "BaseBdev3", 00:18:43.314 "uuid": "17db0b0e-3cd4-43bb-9e78-81bd7b8da620", 00:18:43.314 "is_configured": true, 00:18:43.314 "data_offset": 0, 00:18:43.314 "data_size": 65536 00:18:43.314 } 00:18:43.314 ] 00:18:43.314 }' 00:18:43.314 04:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:43.314 04:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.883 04:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:18:44.142 [2024-07-23 04:13:52.723823] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:44.142 04:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:44.142 04:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:44.142 04:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:44.142 04:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:44.142 04:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:44.142 04:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:44.142 04:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:44.142 04:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:44.142 04:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:44.142 04:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:44.142 04:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:44.142 04:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:44.402 04:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:44.402 "name": "Existed_Raid", 00:18:44.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.402 "strip_size_kb": 64, 00:18:44.402 "state": "configuring", 00:18:44.402 "raid_level": "concat", 00:18:44.402 "superblock": false, 00:18:44.402 "num_base_bdevs": 3, 00:18:44.402 "num_base_bdevs_discovered": 1, 00:18:44.402 "num_base_bdevs_operational": 3, 00:18:44.402 "base_bdevs_list": [ 00:18:44.402 { 00:18:44.402 "name": "BaseBdev1", 00:18:44.402 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:44.402 "is_configured": false, 00:18:44.402 "data_offset": 0, 00:18:44.402 "data_size": 0 00:18:44.402 }, 00:18:44.402 { 00:18:44.402 "name": null, 00:18:44.402 "uuid": "5b7d5b81-fd4c-480b-88e7-721f6776bdb1", 00:18:44.402 "is_configured": false, 00:18:44.402 "data_offset": 0, 00:18:44.402 "data_size": 65536 00:18:44.402 }, 00:18:44.402 { 00:18:44.402 "name": "BaseBdev3", 00:18:44.402 "uuid": "17db0b0e-3cd4-43bb-9e78-81bd7b8da620", 00:18:44.402 "is_configured": true, 00:18:44.402 "data_offset": 0, 00:18:44.402 "data_size": 65536 00:18:44.402 } 00:18:44.402 ] 00:18:44.402 }' 00:18:44.402 04:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:44.402 04:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.969 04:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:44.969 04:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:45.227 04:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:18:45.227 04:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:45.485 [2024-07-23 04:13:54.040644] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:45.485 BaseBdev1 00:18:45.485 04:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:18:45.485 04:13:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:45.485 04:13:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:45.485 
04:13:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:18:45.485 04:13:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:45.485 04:13:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:45.485 04:13:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:45.745 04:13:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:45.745 [ 00:18:45.745 { 00:18:45.745 "name": "BaseBdev1", 00:18:45.745 "aliases": [ 00:18:45.745 "5179e391-6b06-42ac-88d7-096253723958" 00:18:45.745 ], 00:18:45.745 "product_name": "Malloc disk", 00:18:45.745 "block_size": 512, 00:18:45.745 "num_blocks": 65536, 00:18:45.745 "uuid": "5179e391-6b06-42ac-88d7-096253723958", 00:18:45.745 "assigned_rate_limits": { 00:18:45.745 "rw_ios_per_sec": 0, 00:18:45.745 "rw_mbytes_per_sec": 0, 00:18:45.745 "r_mbytes_per_sec": 0, 00:18:45.745 "w_mbytes_per_sec": 0 00:18:45.745 }, 00:18:45.745 "claimed": true, 00:18:45.745 "claim_type": "exclusive_write", 00:18:45.745 "zoned": false, 00:18:45.745 "supported_io_types": { 00:18:45.745 "read": true, 00:18:45.745 "write": true, 00:18:45.745 "unmap": true, 00:18:45.745 "flush": true, 00:18:45.745 "reset": true, 00:18:45.745 "nvme_admin": false, 00:18:45.745 "nvme_io": false, 00:18:45.745 "nvme_io_md": false, 00:18:45.745 "write_zeroes": true, 00:18:45.745 "zcopy": true, 00:18:45.745 "get_zone_info": false, 00:18:45.745 "zone_management": false, 00:18:45.745 "zone_append": false, 00:18:45.745 "compare": false, 00:18:45.745 "compare_and_write": false, 00:18:45.745 "abort": true, 00:18:45.745 "seek_hole": false, 00:18:45.745 "seek_data": false, 00:18:45.745 
"copy": true, 00:18:45.745 "nvme_iov_md": false 00:18:45.745 }, 00:18:45.745 "memory_domains": [ 00:18:45.745 { 00:18:45.745 "dma_device_id": "system", 00:18:45.745 "dma_device_type": 1 00:18:45.745 }, 00:18:45.745 { 00:18:45.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:45.745 "dma_device_type": 2 00:18:45.745 } 00:18:45.745 ], 00:18:45.745 "driver_specific": {} 00:18:45.745 } 00:18:45.745 ] 00:18:45.745 04:13:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:18:45.745 04:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:45.745 04:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:45.745 04:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:45.745 04:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:45.745 04:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:45.745 04:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:45.745 04:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:45.745 04:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:45.745 04:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:45.745 04:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:45.745 04:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:45.745 04:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:18:46.004 04:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:46.004 "name": "Existed_Raid", 00:18:46.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.004 "strip_size_kb": 64, 00:18:46.004 "state": "configuring", 00:18:46.004 "raid_level": "concat", 00:18:46.004 "superblock": false, 00:18:46.004 "num_base_bdevs": 3, 00:18:46.004 "num_base_bdevs_discovered": 2, 00:18:46.004 "num_base_bdevs_operational": 3, 00:18:46.004 "base_bdevs_list": [ 00:18:46.004 { 00:18:46.004 "name": "BaseBdev1", 00:18:46.004 "uuid": "5179e391-6b06-42ac-88d7-096253723958", 00:18:46.004 "is_configured": true, 00:18:46.004 "data_offset": 0, 00:18:46.004 "data_size": 65536 00:18:46.004 }, 00:18:46.004 { 00:18:46.004 "name": null, 00:18:46.004 "uuid": "5b7d5b81-fd4c-480b-88e7-721f6776bdb1", 00:18:46.004 "is_configured": false, 00:18:46.004 "data_offset": 0, 00:18:46.004 "data_size": 65536 00:18:46.004 }, 00:18:46.004 { 00:18:46.004 "name": "BaseBdev3", 00:18:46.004 "uuid": "17db0b0e-3cd4-43bb-9e78-81bd7b8da620", 00:18:46.004 "is_configured": true, 00:18:46.004 "data_offset": 0, 00:18:46.004 "data_size": 65536 00:18:46.004 } 00:18:46.004 ] 00:18:46.004 }' 00:18:46.004 04:13:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:46.004 04:13:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.571 04:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:46.571 04:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:46.830 04:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:18:46.830 04:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:18:47.089 [2024-07-23 04:13:55.761405] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:47.089 04:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:47.089 04:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:47.089 04:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:47.089 04:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:47.089 04:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:47.089 04:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:47.089 04:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:47.089 04:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:47.089 04:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:47.089 04:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:47.089 04:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:47.089 04:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:47.348 04:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:47.348 "name": "Existed_Raid", 00:18:47.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.348 "strip_size_kb": 64, 00:18:47.348 "state": "configuring", 00:18:47.348 "raid_level": "concat", 00:18:47.348 "superblock": 
false, 00:18:47.348 "num_base_bdevs": 3, 00:18:47.348 "num_base_bdevs_discovered": 1, 00:18:47.348 "num_base_bdevs_operational": 3, 00:18:47.348 "base_bdevs_list": [ 00:18:47.348 { 00:18:47.348 "name": "BaseBdev1", 00:18:47.348 "uuid": "5179e391-6b06-42ac-88d7-096253723958", 00:18:47.348 "is_configured": true, 00:18:47.348 "data_offset": 0, 00:18:47.348 "data_size": 65536 00:18:47.348 }, 00:18:47.348 { 00:18:47.348 "name": null, 00:18:47.348 "uuid": "5b7d5b81-fd4c-480b-88e7-721f6776bdb1", 00:18:47.348 "is_configured": false, 00:18:47.348 "data_offset": 0, 00:18:47.348 "data_size": 65536 00:18:47.348 }, 00:18:47.348 { 00:18:47.348 "name": null, 00:18:47.348 "uuid": "17db0b0e-3cd4-43bb-9e78-81bd7b8da620", 00:18:47.348 "is_configured": false, 00:18:47.348 "data_offset": 0, 00:18:47.348 "data_size": 65536 00:18:47.348 } 00:18:47.348 ] 00:18:47.348 }' 00:18:47.348 04:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:47.348 04:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.915 04:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:47.915 04:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:48.174 04:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:18:48.174 04:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:48.174 [2024-07-23 04:13:56.952644] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:48.433 04:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 
3 00:18:48.433 04:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:48.433 04:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:48.433 04:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:48.433 04:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:48.433 04:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:48.433 04:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:48.433 04:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:48.433 04:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:48.433 04:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:48.433 04:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:48.433 04:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:48.433 04:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:48.433 "name": "Existed_Raid", 00:18:48.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.433 "strip_size_kb": 64, 00:18:48.433 "state": "configuring", 00:18:48.433 "raid_level": "concat", 00:18:48.433 "superblock": false, 00:18:48.433 "num_base_bdevs": 3, 00:18:48.433 "num_base_bdevs_discovered": 2, 00:18:48.433 "num_base_bdevs_operational": 3, 00:18:48.433 "base_bdevs_list": [ 00:18:48.433 { 00:18:48.433 "name": "BaseBdev1", 00:18:48.433 "uuid": "5179e391-6b06-42ac-88d7-096253723958", 00:18:48.433 "is_configured": true, 
00:18:48.433 "data_offset": 0, 00:18:48.433 "data_size": 65536 00:18:48.433 }, 00:18:48.433 { 00:18:48.433 "name": null, 00:18:48.433 "uuid": "5b7d5b81-fd4c-480b-88e7-721f6776bdb1", 00:18:48.433 "is_configured": false, 00:18:48.433 "data_offset": 0, 00:18:48.433 "data_size": 65536 00:18:48.433 }, 00:18:48.433 { 00:18:48.433 "name": "BaseBdev3", 00:18:48.433 "uuid": "17db0b0e-3cd4-43bb-9e78-81bd7b8da620", 00:18:48.433 "is_configured": true, 00:18:48.433 "data_offset": 0, 00:18:48.433 "data_size": 65536 00:18:48.433 } 00:18:48.433 ] 00:18:48.433 }' 00:18:48.433 04:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:48.433 04:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.368 04:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:49.369 04:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:49.369 04:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:18:49.369 04:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:49.627 [2024-07-23 04:13:58.216081] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:49.627 04:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:49.627 04:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:49.627 04:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:49.627 04:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local 
raid_level=concat 00:18:49.627 04:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:49.627 04:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:49.627 04:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:49.627 04:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:49.627 04:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:49.627 04:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:49.627 04:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:49.627 04:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:49.886 04:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:49.886 "name": "Existed_Raid", 00:18:49.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.886 "strip_size_kb": 64, 00:18:49.886 "state": "configuring", 00:18:49.886 "raid_level": "concat", 00:18:49.886 "superblock": false, 00:18:49.886 "num_base_bdevs": 3, 00:18:49.886 "num_base_bdevs_discovered": 1, 00:18:49.886 "num_base_bdevs_operational": 3, 00:18:49.886 "base_bdevs_list": [ 00:18:49.886 { 00:18:49.886 "name": null, 00:18:49.886 "uuid": "5179e391-6b06-42ac-88d7-096253723958", 00:18:49.886 "is_configured": false, 00:18:49.886 "data_offset": 0, 00:18:49.886 "data_size": 65536 00:18:49.886 }, 00:18:49.886 { 00:18:49.886 "name": null, 00:18:49.886 "uuid": "5b7d5b81-fd4c-480b-88e7-721f6776bdb1", 00:18:49.886 "is_configured": false, 00:18:49.886 "data_offset": 0, 00:18:49.886 "data_size": 65536 00:18:49.886 }, 00:18:49.886 { 00:18:49.886 "name": 
"BaseBdev3", 00:18:49.886 "uuid": "17db0b0e-3cd4-43bb-9e78-81bd7b8da620", 00:18:49.886 "is_configured": true, 00:18:49.886 "data_offset": 0, 00:18:49.886 "data_size": 65536 00:18:49.886 } 00:18:49.886 ] 00:18:49.886 }' 00:18:49.886 04:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:49.886 04:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.453 04:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:50.453 04:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:50.712 04:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:18:50.712 04:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:50.971 [2024-07-23 04:13:59.590796] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:50.971 04:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:50.971 04:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:50.971 04:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:50.971 04:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:50.971 04:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:50.971 04:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:50.971 04:13:59 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:50.971 04:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:50.971 04:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:50.971 04:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:50.972 04:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:50.972 04:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:51.231 04:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:51.231 "name": "Existed_Raid", 00:18:51.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.231 "strip_size_kb": 64, 00:18:51.231 "state": "configuring", 00:18:51.231 "raid_level": "concat", 00:18:51.231 "superblock": false, 00:18:51.231 "num_base_bdevs": 3, 00:18:51.231 "num_base_bdevs_discovered": 2, 00:18:51.231 "num_base_bdevs_operational": 3, 00:18:51.231 "base_bdevs_list": [ 00:18:51.231 { 00:18:51.231 "name": null, 00:18:51.231 "uuid": "5179e391-6b06-42ac-88d7-096253723958", 00:18:51.231 "is_configured": false, 00:18:51.231 "data_offset": 0, 00:18:51.231 "data_size": 65536 00:18:51.231 }, 00:18:51.231 { 00:18:51.231 "name": "BaseBdev2", 00:18:51.231 "uuid": "5b7d5b81-fd4c-480b-88e7-721f6776bdb1", 00:18:51.231 "is_configured": true, 00:18:51.231 "data_offset": 0, 00:18:51.231 "data_size": 65536 00:18:51.231 }, 00:18:51.231 { 00:18:51.231 "name": "BaseBdev3", 00:18:51.231 "uuid": "17db0b0e-3cd4-43bb-9e78-81bd7b8da620", 00:18:51.231 "is_configured": true, 00:18:51.231 "data_offset": 0, 00:18:51.231 "data_size": 65536 00:18:51.231 } 00:18:51.231 ] 00:18:51.231 }' 00:18:51.231 04:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:18:51.231 04:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.799 04:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:51.799 04:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:52.059 04:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:18:52.059 04:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:52.059 04:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:52.318 04:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 5179e391-6b06-42ac-88d7-096253723958 00:18:52.577 [2024-07-23 04:14:01.150810] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:52.577 [2024-07-23 04:14:01.150855] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041780 00:18:52.577 [2024-07-23 04:14:01.150870] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:52.577 [2024-07-23 04:14:01.151178] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010980 00:18:52.577 [2024-07-23 04:14:01.151398] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041780 00:18:52.577 [2024-07-23 04:14:01.151412] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000041780 00:18:52.577 [2024-07-23 04:14:01.151741] bdev_raid.c: 
343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:52.577 NewBaseBdev 00:18:52.577 04:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:18:52.577 04:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:18:52.577 04:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:52.577 04:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:18:52.577 04:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:52.577 04:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:52.577 04:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:52.837 04:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:52.837 [ 00:18:52.837 { 00:18:52.837 "name": "NewBaseBdev", 00:18:52.837 "aliases": [ 00:18:52.837 "5179e391-6b06-42ac-88d7-096253723958" 00:18:52.837 ], 00:18:52.837 "product_name": "Malloc disk", 00:18:52.837 "block_size": 512, 00:18:52.837 "num_blocks": 65536, 00:18:52.837 "uuid": "5179e391-6b06-42ac-88d7-096253723958", 00:18:52.837 "assigned_rate_limits": { 00:18:52.837 "rw_ios_per_sec": 0, 00:18:52.837 "rw_mbytes_per_sec": 0, 00:18:52.837 "r_mbytes_per_sec": 0, 00:18:52.837 "w_mbytes_per_sec": 0 00:18:52.837 }, 00:18:52.837 "claimed": true, 00:18:52.837 "claim_type": "exclusive_write", 00:18:52.837 "zoned": false, 00:18:52.837 "supported_io_types": { 00:18:52.837 "read": true, 00:18:52.837 "write": true, 00:18:52.837 "unmap": true, 00:18:52.837 "flush": true, 00:18:52.837 "reset": true, 
00:18:52.837 "nvme_admin": false, 00:18:52.837 "nvme_io": false, 00:18:52.837 "nvme_io_md": false, 00:18:52.837 "write_zeroes": true, 00:18:52.837 "zcopy": true, 00:18:52.837 "get_zone_info": false, 00:18:52.837 "zone_management": false, 00:18:52.837 "zone_append": false, 00:18:52.837 "compare": false, 00:18:52.837 "compare_and_write": false, 00:18:52.837 "abort": true, 00:18:52.837 "seek_hole": false, 00:18:52.837 "seek_data": false, 00:18:52.837 "copy": true, 00:18:52.837 "nvme_iov_md": false 00:18:52.837 }, 00:18:52.837 "memory_domains": [ 00:18:52.837 { 00:18:52.837 "dma_device_id": "system", 00:18:52.837 "dma_device_type": 1 00:18:52.837 }, 00:18:52.837 { 00:18:52.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:52.837 "dma_device_type": 2 00:18:52.837 } 00:18:52.837 ], 00:18:52.837 "driver_specific": {} 00:18:52.837 } 00:18:52.837 ] 00:18:52.837 04:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:18:52.837 04:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:18:52.837 04:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:52.837 04:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:52.837 04:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:52.837 04:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:52.837 04:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:52.837 04:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:52.837 04:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:52.837 04:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:18:52.837 04:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:52.837 04:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:52.837 04:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:53.096 04:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:53.096 "name": "Existed_Raid", 00:18:53.096 "uuid": "678f49ec-1a7c-48e0-8bf2-49bc627a34a0", 00:18:53.096 "strip_size_kb": 64, 00:18:53.096 "state": "online", 00:18:53.096 "raid_level": "concat", 00:18:53.096 "superblock": false, 00:18:53.096 "num_base_bdevs": 3, 00:18:53.096 "num_base_bdevs_discovered": 3, 00:18:53.096 "num_base_bdevs_operational": 3, 00:18:53.096 "base_bdevs_list": [ 00:18:53.096 { 00:18:53.096 "name": "NewBaseBdev", 00:18:53.096 "uuid": "5179e391-6b06-42ac-88d7-096253723958", 00:18:53.096 "is_configured": true, 00:18:53.096 "data_offset": 0, 00:18:53.096 "data_size": 65536 00:18:53.096 }, 00:18:53.096 { 00:18:53.096 "name": "BaseBdev2", 00:18:53.096 "uuid": "5b7d5b81-fd4c-480b-88e7-721f6776bdb1", 00:18:53.096 "is_configured": true, 00:18:53.096 "data_offset": 0, 00:18:53.096 "data_size": 65536 00:18:53.096 }, 00:18:53.096 { 00:18:53.096 "name": "BaseBdev3", 00:18:53.096 "uuid": "17db0b0e-3cd4-43bb-9e78-81bd7b8da620", 00:18:53.096 "is_configured": true, 00:18:53.096 "data_offset": 0, 00:18:53.096 "data_size": 65536 00:18:53.096 } 00:18:53.096 ] 00:18:53.096 }' 00:18:53.096 04:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:53.096 04:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.733 04:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 
00:18:53.733 04:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:53.733 04:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:53.733 04:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:53.733 04:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:53.733 04:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:53.733 04:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:53.733 04:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:53.992 [2024-07-23 04:14:02.591148] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:53.992 04:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:53.992 "name": "Existed_Raid", 00:18:53.992 "aliases": [ 00:18:53.992 "678f49ec-1a7c-48e0-8bf2-49bc627a34a0" 00:18:53.992 ], 00:18:53.992 "product_name": "Raid Volume", 00:18:53.992 "block_size": 512, 00:18:53.992 "num_blocks": 196608, 00:18:53.992 "uuid": "678f49ec-1a7c-48e0-8bf2-49bc627a34a0", 00:18:53.992 "assigned_rate_limits": { 00:18:53.992 "rw_ios_per_sec": 0, 00:18:53.992 "rw_mbytes_per_sec": 0, 00:18:53.992 "r_mbytes_per_sec": 0, 00:18:53.992 "w_mbytes_per_sec": 0 00:18:53.992 }, 00:18:53.992 "claimed": false, 00:18:53.992 "zoned": false, 00:18:53.992 "supported_io_types": { 00:18:53.992 "read": true, 00:18:53.992 "write": true, 00:18:53.992 "unmap": true, 00:18:53.992 "flush": true, 00:18:53.992 "reset": true, 00:18:53.992 "nvme_admin": false, 00:18:53.992 "nvme_io": false, 00:18:53.992 "nvme_io_md": false, 00:18:53.992 "write_zeroes": true, 00:18:53.992 "zcopy": false, 00:18:53.992 
"get_zone_info": false, 00:18:53.992 "zone_management": false, 00:18:53.992 "zone_append": false, 00:18:53.992 "compare": false, 00:18:53.992 "compare_and_write": false, 00:18:53.992 "abort": false, 00:18:53.992 "seek_hole": false, 00:18:53.992 "seek_data": false, 00:18:53.992 "copy": false, 00:18:53.992 "nvme_iov_md": false 00:18:53.992 }, 00:18:53.992 "memory_domains": [ 00:18:53.992 { 00:18:53.992 "dma_device_id": "system", 00:18:53.992 "dma_device_type": 1 00:18:53.992 }, 00:18:53.992 { 00:18:53.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:53.992 "dma_device_type": 2 00:18:53.992 }, 00:18:53.992 { 00:18:53.992 "dma_device_id": "system", 00:18:53.992 "dma_device_type": 1 00:18:53.992 }, 00:18:53.992 { 00:18:53.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:53.992 "dma_device_type": 2 00:18:53.992 }, 00:18:53.992 { 00:18:53.992 "dma_device_id": "system", 00:18:53.992 "dma_device_type": 1 00:18:53.992 }, 00:18:53.992 { 00:18:53.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:53.992 "dma_device_type": 2 00:18:53.992 } 00:18:53.992 ], 00:18:53.992 "driver_specific": { 00:18:53.992 "raid": { 00:18:53.992 "uuid": "678f49ec-1a7c-48e0-8bf2-49bc627a34a0", 00:18:53.992 "strip_size_kb": 64, 00:18:53.992 "state": "online", 00:18:53.992 "raid_level": "concat", 00:18:53.992 "superblock": false, 00:18:53.992 "num_base_bdevs": 3, 00:18:53.992 "num_base_bdevs_discovered": 3, 00:18:53.992 "num_base_bdevs_operational": 3, 00:18:53.992 "base_bdevs_list": [ 00:18:53.992 { 00:18:53.992 "name": "NewBaseBdev", 00:18:53.992 "uuid": "5179e391-6b06-42ac-88d7-096253723958", 00:18:53.992 "is_configured": true, 00:18:53.992 "data_offset": 0, 00:18:53.992 "data_size": 65536 00:18:53.992 }, 00:18:53.992 { 00:18:53.992 "name": "BaseBdev2", 00:18:53.992 "uuid": "5b7d5b81-fd4c-480b-88e7-721f6776bdb1", 00:18:53.992 "is_configured": true, 00:18:53.992 "data_offset": 0, 00:18:53.992 "data_size": 65536 00:18:53.992 }, 00:18:53.992 { 00:18:53.992 "name": "BaseBdev3", 00:18:53.992 
"uuid": "17db0b0e-3cd4-43bb-9e78-81bd7b8da620", 00:18:53.992 "is_configured": true, 00:18:53.992 "data_offset": 0, 00:18:53.992 "data_size": 65536 00:18:53.992 } 00:18:53.992 ] 00:18:53.992 } 00:18:53.992 } 00:18:53.992 }' 00:18:53.992 04:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:53.992 04:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:18:53.992 BaseBdev2 00:18:53.992 BaseBdev3' 00:18:53.992 04:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:53.992 04:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:18:53.992 04:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:54.251 04:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:54.251 "name": "NewBaseBdev", 00:18:54.251 "aliases": [ 00:18:54.251 "5179e391-6b06-42ac-88d7-096253723958" 00:18:54.251 ], 00:18:54.251 "product_name": "Malloc disk", 00:18:54.251 "block_size": 512, 00:18:54.251 "num_blocks": 65536, 00:18:54.251 "uuid": "5179e391-6b06-42ac-88d7-096253723958", 00:18:54.251 "assigned_rate_limits": { 00:18:54.251 "rw_ios_per_sec": 0, 00:18:54.251 "rw_mbytes_per_sec": 0, 00:18:54.251 "r_mbytes_per_sec": 0, 00:18:54.251 "w_mbytes_per_sec": 0 00:18:54.251 }, 00:18:54.251 "claimed": true, 00:18:54.251 "claim_type": "exclusive_write", 00:18:54.251 "zoned": false, 00:18:54.251 "supported_io_types": { 00:18:54.251 "read": true, 00:18:54.251 "write": true, 00:18:54.251 "unmap": true, 00:18:54.251 "flush": true, 00:18:54.251 "reset": true, 00:18:54.251 "nvme_admin": false, 00:18:54.251 "nvme_io": false, 00:18:54.251 "nvme_io_md": false, 00:18:54.251 "write_zeroes": true, 
00:18:54.251 "zcopy": true, 00:18:54.251 "get_zone_info": false, 00:18:54.251 "zone_management": false, 00:18:54.251 "zone_append": false, 00:18:54.251 "compare": false, 00:18:54.251 "compare_and_write": false, 00:18:54.251 "abort": true, 00:18:54.251 "seek_hole": false, 00:18:54.251 "seek_data": false, 00:18:54.251 "copy": true, 00:18:54.251 "nvme_iov_md": false 00:18:54.251 }, 00:18:54.251 "memory_domains": [ 00:18:54.251 { 00:18:54.251 "dma_device_id": "system", 00:18:54.251 "dma_device_type": 1 00:18:54.251 }, 00:18:54.251 { 00:18:54.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:54.251 "dma_device_type": 2 00:18:54.251 } 00:18:54.251 ], 00:18:54.251 "driver_specific": {} 00:18:54.251 }' 00:18:54.251 04:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:54.251 04:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:54.251 04:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:54.251 04:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:54.251 04:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:54.510 04:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:54.510 04:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:54.510 04:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:54.510 04:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:54.510 04:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:54.510 04:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:54.510 04:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:54.510 04:14:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:54.510 04:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:54.510 04:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:54.768 04:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:54.768 "name": "BaseBdev2", 00:18:54.768 "aliases": [ 00:18:54.768 "5b7d5b81-fd4c-480b-88e7-721f6776bdb1" 00:18:54.768 ], 00:18:54.768 "product_name": "Malloc disk", 00:18:54.768 "block_size": 512, 00:18:54.768 "num_blocks": 65536, 00:18:54.768 "uuid": "5b7d5b81-fd4c-480b-88e7-721f6776bdb1", 00:18:54.768 "assigned_rate_limits": { 00:18:54.768 "rw_ios_per_sec": 0, 00:18:54.768 "rw_mbytes_per_sec": 0, 00:18:54.768 "r_mbytes_per_sec": 0, 00:18:54.768 "w_mbytes_per_sec": 0 00:18:54.768 }, 00:18:54.768 "claimed": true, 00:18:54.768 "claim_type": "exclusive_write", 00:18:54.768 "zoned": false, 00:18:54.768 "supported_io_types": { 00:18:54.768 "read": true, 00:18:54.768 "write": true, 00:18:54.768 "unmap": true, 00:18:54.768 "flush": true, 00:18:54.768 "reset": true, 00:18:54.768 "nvme_admin": false, 00:18:54.768 "nvme_io": false, 00:18:54.768 "nvme_io_md": false, 00:18:54.768 "write_zeroes": true, 00:18:54.768 "zcopy": true, 00:18:54.768 "get_zone_info": false, 00:18:54.769 "zone_management": false, 00:18:54.769 "zone_append": false, 00:18:54.769 "compare": false, 00:18:54.769 "compare_and_write": false, 00:18:54.769 "abort": true, 00:18:54.769 "seek_hole": false, 00:18:54.769 "seek_data": false, 00:18:54.769 "copy": true, 00:18:54.769 "nvme_iov_md": false 00:18:54.769 }, 00:18:54.769 "memory_domains": [ 00:18:54.769 { 00:18:54.769 "dma_device_id": "system", 00:18:54.769 "dma_device_type": 1 00:18:54.769 }, 00:18:54.769 { 00:18:54.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:54.769 "dma_device_type": 2 
00:18:54.769 } 00:18:54.769 ], 00:18:54.769 "driver_specific": {} 00:18:54.769 }' 00:18:54.769 04:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:54.769 04:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:55.027 04:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:55.027 04:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:55.027 04:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:55.027 04:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:55.027 04:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:55.027 04:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:55.027 04:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:55.027 04:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:55.027 04:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:55.027 04:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:55.027 04:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:55.027 04:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:55.027 04:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:55.286 04:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:55.286 "name": "BaseBdev3", 00:18:55.286 "aliases": [ 00:18:55.286 "17db0b0e-3cd4-43bb-9e78-81bd7b8da620" 00:18:55.286 ], 00:18:55.286 "product_name": 
"Malloc disk", 00:18:55.286 "block_size": 512, 00:18:55.286 "num_blocks": 65536, 00:18:55.286 "uuid": "17db0b0e-3cd4-43bb-9e78-81bd7b8da620", 00:18:55.286 "assigned_rate_limits": { 00:18:55.286 "rw_ios_per_sec": 0, 00:18:55.286 "rw_mbytes_per_sec": 0, 00:18:55.286 "r_mbytes_per_sec": 0, 00:18:55.286 "w_mbytes_per_sec": 0 00:18:55.286 }, 00:18:55.286 "claimed": true, 00:18:55.286 "claim_type": "exclusive_write", 00:18:55.286 "zoned": false, 00:18:55.286 "supported_io_types": { 00:18:55.286 "read": true, 00:18:55.286 "write": true, 00:18:55.286 "unmap": true, 00:18:55.286 "flush": true, 00:18:55.286 "reset": true, 00:18:55.286 "nvme_admin": false, 00:18:55.286 "nvme_io": false, 00:18:55.286 "nvme_io_md": false, 00:18:55.286 "write_zeroes": true, 00:18:55.286 "zcopy": true, 00:18:55.286 "get_zone_info": false, 00:18:55.286 "zone_management": false, 00:18:55.286 "zone_append": false, 00:18:55.286 "compare": false, 00:18:55.286 "compare_and_write": false, 00:18:55.286 "abort": true, 00:18:55.286 "seek_hole": false, 00:18:55.286 "seek_data": false, 00:18:55.286 "copy": true, 00:18:55.286 "nvme_iov_md": false 00:18:55.286 }, 00:18:55.286 "memory_domains": [ 00:18:55.286 { 00:18:55.286 "dma_device_id": "system", 00:18:55.286 "dma_device_type": 1 00:18:55.286 }, 00:18:55.286 { 00:18:55.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:55.286 "dma_device_type": 2 00:18:55.286 } 00:18:55.286 ], 00:18:55.286 "driver_specific": {} 00:18:55.286 }' 00:18:55.286 04:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:55.286 04:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:55.545 04:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:55.545 04:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:55.545 04:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:55.545 04:14:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:55.545 04:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:55.545 04:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:55.545 04:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:55.545 04:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:55.545 04:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:55.804 04:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:55.805 04:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:55.805 [2024-07-23 04:14:04.568080] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:55.805 [2024-07-23 04:14:04.568113] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:55.805 [2024-07-23 04:14:04.568214] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:55.805 [2024-07-23 04:14:04.568280] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:55.805 [2024-07-23 04:14:04.568304] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041780 name Existed_Raid, state offline 00:18:55.805 04:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 2662321 00:18:55.805 04:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 2662321 ']' 00:18:55.805 04:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 2662321 00:18:56.064 04:14:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@953 -- # uname 00:18:56.064 04:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:56.064 04:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2662321 00:18:56.064 04:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:56.064 04:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:56.064 04:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2662321' 00:18:56.064 killing process with pid 2662321 00:18:56.064 04:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 2662321 00:18:56.064 [2024-07-23 04:14:04.645191] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:56.064 04:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 2662321 00:18:56.323 [2024-07-23 04:14:04.962100] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:58.228 04:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:18:58.228 00:18:58.228 real 0m29.445s 00:18:58.228 user 0m51.499s 00:18:58.228 sys 0m5.231s 00:18:58.228 04:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:58.228 04:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.228 ************************************ 00:18:58.228 END TEST raid_state_function_test 00:18:58.228 ************************************ 00:18:58.228 04:14:06 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:58.228 04:14:06 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:18:58.228 04:14:06 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:58.228 
04:14:06 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:58.228 04:14:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:58.228 ************************************ 00:18:58.228 START TEST raid_state_function_test_sb 00:18:58.228 ************************************ 00:18:58.228 04:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 3 true 00:18:58.228 04:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:18:58.228 04:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:18:58.228 04:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:18:58.228 04:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:18:58.228 04:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:18:58.228 04:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:58.228 04:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:18:58.228 04:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:58.228 04:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:58.228 04:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:18:58.228 04:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:58.228 04:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:58.228 04:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:18:58.228 04:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:58.228 04:14:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:58.228 04:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:58.228 04:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:18:58.228 04:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:18:58.228 04:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:18:58.228 04:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:18:58.229 04:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:18:58.229 04:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:18:58.229 04:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:18:58.229 04:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:18:58.229 04:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:18:58.229 04:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:18:58.229 04:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2667857 00:18:58.229 04:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2667857' 00:18:58.229 Process raid pid: 2667857 00:18:58.229 04:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:58.229 04:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2667857 /var/tmp/spdk-raid.sock 00:18:58.229 
04:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 2667857 ']' 00:18:58.229 04:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:58.229 04:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:58.229 04:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:58.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:58.229 04:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:58.229 04:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.229 [2024-07-23 04:14:06.860629] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:18:58.229 [2024-07-23 04:14:06.860740] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:58.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:58.229 EAL: Requested device 0000:3d:01.0 cannot be used 00:18:58.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:58.229 EAL: Requested device 0000:3d:01.1 cannot be used 00:18:58.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:58.229 EAL: Requested device 0000:3d:01.2 cannot be used 00:18:58.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:58.229 EAL: Requested device 0000:3d:01.3 cannot be used 00:18:58.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:58.229 EAL: Requested device 0000:3d:01.4 cannot be used 00:18:58.229 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:58.229 EAL: Requested device 0000:3d:01.5 cannot be used 00:18:58.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:58.229 EAL: Requested device 0000:3d:01.6 cannot be used 00:18:58.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:58.229 EAL: Requested device 0000:3d:01.7 cannot be used 00:18:58.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:58.229 EAL: Requested device 0000:3d:02.0 cannot be used 00:18:58.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:58.229 EAL: Requested device 0000:3d:02.1 cannot be used 00:18:58.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:58.229 EAL: Requested device 0000:3d:02.2 cannot be used 00:18:58.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:58.229 EAL: Requested device 0000:3d:02.3 cannot be used 00:18:58.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:58.229 EAL: Requested device 0000:3d:02.4 cannot be used 00:18:58.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:58.229 EAL: Requested device 0000:3d:02.5 cannot be used 00:18:58.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:58.229 EAL: Requested device 0000:3d:02.6 cannot be used 00:18:58.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:58.229 EAL: Requested device 0000:3d:02.7 cannot be used 00:18:58.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:58.229 EAL: Requested device 0000:3f:01.0 cannot be used 00:18:58.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:58.229 EAL: Requested device 0000:3f:01.1 cannot be used 00:18:58.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:58.229 EAL: Requested device 0000:3f:01.2 cannot be used 00:18:58.229 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:58.229 EAL: Requested device 0000:3f:01.3 cannot be used 00:18:58.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:58.229 EAL: Requested device 0000:3f:01.4 cannot be used 00:18:58.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:58.229 EAL: Requested device 0000:3f:01.5 cannot be used 00:18:58.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:58.229 EAL: Requested device 0000:3f:01.6 cannot be used 00:18:58.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:58.229 EAL: Requested device 0000:3f:01.7 cannot be used 00:18:58.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:58.229 EAL: Requested device 0000:3f:02.0 cannot be used 00:18:58.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:58.229 EAL: Requested device 0000:3f:02.1 cannot be used 00:18:58.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:58.229 EAL: Requested device 0000:3f:02.2 cannot be used 00:18:58.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:58.229 EAL: Requested device 0000:3f:02.3 cannot be used 00:18:58.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:58.229 EAL: Requested device 0000:3f:02.4 cannot be used 00:18:58.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:58.229 EAL: Requested device 0000:3f:02.5 cannot be used 00:18:58.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:58.229 EAL: Requested device 0000:3f:02.6 cannot be used 00:18:58.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:58.229 EAL: Requested device 0000:3f:02.7 cannot be used 00:18:58.488 [2024-07-23 04:14:07.091543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.747 [2024-07-23 04:14:07.389029] reactor.c: 941:reactor_run: 
*NOTICE*: Reactor started on core 0 00:18:59.006 [2024-07-23 04:14:07.717037] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:59.006 [2024-07-23 04:14:07.717070] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:59.264 04:14:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:59.264 04:14:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:18:59.264 04:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:59.522 [2024-07-23 04:14:08.094171] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:59.522 [2024-07-23 04:14:08.094226] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:59.522 [2024-07-23 04:14:08.094242] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:59.522 [2024-07-23 04:14:08.094259] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:59.522 [2024-07-23 04:14:08.094271] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:59.522 [2024-07-23 04:14:08.094287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:59.522 04:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:59.522 04:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:59.522 04:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:59.522 04:14:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:59.522 04:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:59.522 04:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:59.522 04:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:59.522 04:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:59.522 04:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:59.522 04:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:59.522 04:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:59.522 04:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:59.781 04:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:59.781 "name": "Existed_Raid", 00:18:59.781 "uuid": "9736eb9f-8185-4cc8-b377-9e90c24bbb56", 00:18:59.781 "strip_size_kb": 64, 00:18:59.781 "state": "configuring", 00:18:59.781 "raid_level": "concat", 00:18:59.781 "superblock": true, 00:18:59.781 "num_base_bdevs": 3, 00:18:59.781 "num_base_bdevs_discovered": 0, 00:18:59.781 "num_base_bdevs_operational": 3, 00:18:59.781 "base_bdevs_list": [ 00:18:59.781 { 00:18:59.781 "name": "BaseBdev1", 00:18:59.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.781 "is_configured": false, 00:18:59.781 "data_offset": 0, 00:18:59.781 "data_size": 0 00:18:59.781 }, 00:18:59.781 { 00:18:59.781 "name": "BaseBdev2", 00:18:59.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.781 "is_configured": false, 00:18:59.781 "data_offset": 0, 00:18:59.781 
"data_size": 0 00:18:59.781 }, 00:18:59.781 { 00:18:59.781 "name": "BaseBdev3", 00:18:59.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.781 "is_configured": false, 00:18:59.781 "data_offset": 0, 00:18:59.781 "data_size": 0 00:18:59.781 } 00:18:59.781 ] 00:18:59.781 }' 00:18:59.781 04:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:59.781 04:14:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.347 04:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:00.347 [2024-07-23 04:14:09.124778] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:00.347 [2024-07-23 04:14:09.124829] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:19:00.605 04:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:00.605 [2024-07-23 04:14:09.349455] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:00.605 [2024-07-23 04:14:09.349503] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:00.605 [2024-07-23 04:14:09.349517] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:00.605 [2024-07-23 04:14:09.349537] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:00.605 [2024-07-23 04:14:09.349548] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:00.605 [2024-07-23 04:14:09.349564] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base 
bdev BaseBdev3 doesn't exist now 00:19:00.605 04:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:00.863 [2024-07-23 04:14:09.619178] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:00.863 BaseBdev1 00:19:00.863 04:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:19:00.863 04:14:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:19:00.863 04:14:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:00.863 04:14:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:00.863 04:14:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:00.863 04:14:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:00.863 04:14:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:01.121 04:14:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:01.380 [ 00:19:01.380 { 00:19:01.380 "name": "BaseBdev1", 00:19:01.380 "aliases": [ 00:19:01.380 "b665b16b-ec00-4303-af8a-296f6f576708" 00:19:01.380 ], 00:19:01.380 "product_name": "Malloc disk", 00:19:01.380 "block_size": 512, 00:19:01.380 "num_blocks": 65536, 00:19:01.380 "uuid": "b665b16b-ec00-4303-af8a-296f6f576708", 00:19:01.380 "assigned_rate_limits": { 00:19:01.380 "rw_ios_per_sec": 0, 00:19:01.380 "rw_mbytes_per_sec": 0, 00:19:01.380 "r_mbytes_per_sec": 0, 
00:19:01.380 "w_mbytes_per_sec": 0 00:19:01.380 }, 00:19:01.380 "claimed": true, 00:19:01.380 "claim_type": "exclusive_write", 00:19:01.380 "zoned": false, 00:19:01.380 "supported_io_types": { 00:19:01.380 "read": true, 00:19:01.380 "write": true, 00:19:01.380 "unmap": true, 00:19:01.380 "flush": true, 00:19:01.380 "reset": true, 00:19:01.380 "nvme_admin": false, 00:19:01.380 "nvme_io": false, 00:19:01.380 "nvme_io_md": false, 00:19:01.380 "write_zeroes": true, 00:19:01.380 "zcopy": true, 00:19:01.380 "get_zone_info": false, 00:19:01.380 "zone_management": false, 00:19:01.380 "zone_append": false, 00:19:01.380 "compare": false, 00:19:01.380 "compare_and_write": false, 00:19:01.380 "abort": true, 00:19:01.380 "seek_hole": false, 00:19:01.380 "seek_data": false, 00:19:01.380 "copy": true, 00:19:01.380 "nvme_iov_md": false 00:19:01.380 }, 00:19:01.380 "memory_domains": [ 00:19:01.380 { 00:19:01.380 "dma_device_id": "system", 00:19:01.380 "dma_device_type": 1 00:19:01.380 }, 00:19:01.380 { 00:19:01.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:01.380 "dma_device_type": 2 00:19:01.380 } 00:19:01.380 ], 00:19:01.380 "driver_specific": {} 00:19:01.380 } 00:19:01.380 ] 00:19:01.380 04:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:01.380 04:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:01.380 04:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:01.380 04:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:01.380 04:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:01.380 04:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:01.380 04:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 
-- # local num_base_bdevs_operational=3 00:19:01.380 04:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:01.380 04:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:01.380 04:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:01.380 04:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:01.381 04:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:01.381 04:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:01.639 04:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:01.639 "name": "Existed_Raid", 00:19:01.639 "uuid": "68ace1e6-0f5d-406c-b127-a959fdde21e2", 00:19:01.639 "strip_size_kb": 64, 00:19:01.639 "state": "configuring", 00:19:01.639 "raid_level": "concat", 00:19:01.639 "superblock": true, 00:19:01.639 "num_base_bdevs": 3, 00:19:01.639 "num_base_bdevs_discovered": 1, 00:19:01.639 "num_base_bdevs_operational": 3, 00:19:01.639 "base_bdevs_list": [ 00:19:01.639 { 00:19:01.639 "name": "BaseBdev1", 00:19:01.639 "uuid": "b665b16b-ec00-4303-af8a-296f6f576708", 00:19:01.639 "is_configured": true, 00:19:01.639 "data_offset": 2048, 00:19:01.639 "data_size": 63488 00:19:01.639 }, 00:19:01.639 { 00:19:01.639 "name": "BaseBdev2", 00:19:01.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.639 "is_configured": false, 00:19:01.639 "data_offset": 0, 00:19:01.639 "data_size": 0 00:19:01.639 }, 00:19:01.639 { 00:19:01.639 "name": "BaseBdev3", 00:19:01.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.639 "is_configured": false, 00:19:01.639 "data_offset": 0, 00:19:01.639 "data_size": 0 00:19:01.639 
} 00:19:01.639 ] 00:19:01.639 }' 00:19:01.639 04:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:01.639 04:14:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:02.204 04:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:02.463 [2024-07-23 04:14:11.095301] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:02.463 [2024-07-23 04:14:11.095362] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:19:02.463 04:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:02.722 [2024-07-23 04:14:11.324028] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:02.722 [2024-07-23 04:14:11.326350] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:02.722 [2024-07-23 04:14:11.326393] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:02.722 [2024-07-23 04:14:11.326408] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:02.722 [2024-07-23 04:14:11.326424] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:02.722 04:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:19:02.722 04:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:02.722 04:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring 
concat 64 3 00:19:02.722 04:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:02.722 04:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:02.722 04:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:02.722 04:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:02.722 04:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:02.722 04:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:02.722 04:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:02.722 04:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:02.722 04:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:02.722 04:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:02.722 04:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:02.981 04:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:02.981 "name": "Existed_Raid", 00:19:02.981 "uuid": "8578fa60-27f4-4bb6-930a-99aaa9d391cd", 00:19:02.981 "strip_size_kb": 64, 00:19:02.981 "state": "configuring", 00:19:02.981 "raid_level": "concat", 00:19:02.981 "superblock": true, 00:19:02.981 "num_base_bdevs": 3, 00:19:02.981 "num_base_bdevs_discovered": 1, 00:19:02.981 "num_base_bdevs_operational": 3, 00:19:02.981 "base_bdevs_list": [ 00:19:02.981 { 00:19:02.981 "name": "BaseBdev1", 00:19:02.981 "uuid": 
"b665b16b-ec00-4303-af8a-296f6f576708", 00:19:02.981 "is_configured": true, 00:19:02.981 "data_offset": 2048, 00:19:02.981 "data_size": 63488 00:19:02.981 }, 00:19:02.981 { 00:19:02.981 "name": "BaseBdev2", 00:19:02.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.981 "is_configured": false, 00:19:02.981 "data_offset": 0, 00:19:02.981 "data_size": 0 00:19:02.981 }, 00:19:02.981 { 00:19:02.981 "name": "BaseBdev3", 00:19:02.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.981 "is_configured": false, 00:19:02.981 "data_offset": 0, 00:19:02.981 "data_size": 0 00:19:02.981 } 00:19:02.981 ] 00:19:02.981 }' 00:19:02.981 04:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:02.981 04:14:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.549 04:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:03.808 [2024-07-23 04:14:12.378077] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:03.808 BaseBdev2 00:19:03.809 04:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:19:03.809 04:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:19:03.809 04:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:03.809 04:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:03.809 04:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:03.809 04:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:03.809 04:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:04.068 04:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:04.068 [ 00:19:04.068 { 00:19:04.068 "name": "BaseBdev2", 00:19:04.068 "aliases": [ 00:19:04.068 "b711db48-23a0-4304-8ae9-0523a77629eb" 00:19:04.068 ], 00:19:04.068 "product_name": "Malloc disk", 00:19:04.068 "block_size": 512, 00:19:04.068 "num_blocks": 65536, 00:19:04.068 "uuid": "b711db48-23a0-4304-8ae9-0523a77629eb", 00:19:04.068 "assigned_rate_limits": { 00:19:04.068 "rw_ios_per_sec": 0, 00:19:04.068 "rw_mbytes_per_sec": 0, 00:19:04.068 "r_mbytes_per_sec": 0, 00:19:04.068 "w_mbytes_per_sec": 0 00:19:04.068 }, 00:19:04.068 "claimed": true, 00:19:04.068 "claim_type": "exclusive_write", 00:19:04.068 "zoned": false, 00:19:04.068 "supported_io_types": { 00:19:04.068 "read": true, 00:19:04.068 "write": true, 00:19:04.068 "unmap": true, 00:19:04.068 "flush": true, 00:19:04.068 "reset": true, 00:19:04.068 "nvme_admin": false, 00:19:04.068 "nvme_io": false, 00:19:04.068 "nvme_io_md": false, 00:19:04.068 "write_zeroes": true, 00:19:04.068 "zcopy": true, 00:19:04.068 "get_zone_info": false, 00:19:04.068 "zone_management": false, 00:19:04.068 "zone_append": false, 00:19:04.068 "compare": false, 00:19:04.068 "compare_and_write": false, 00:19:04.068 "abort": true, 00:19:04.068 "seek_hole": false, 00:19:04.068 "seek_data": false, 00:19:04.068 "copy": true, 00:19:04.068 "nvme_iov_md": false 00:19:04.068 }, 00:19:04.068 "memory_domains": [ 00:19:04.068 { 00:19:04.068 "dma_device_id": "system", 00:19:04.068 "dma_device_type": 1 00:19:04.068 }, 00:19:04.068 { 00:19:04.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:04.068 "dma_device_type": 2 00:19:04.068 } 00:19:04.068 ], 00:19:04.068 "driver_specific": {} 00:19:04.068 } 00:19:04.068 ] 
00:19:04.068 04:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:04.068 04:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:04.068 04:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:04.068 04:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:04.068 04:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:04.068 04:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:04.068 04:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:04.068 04:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:04.068 04:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:04.068 04:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:04.068 04:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:04.068 04:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:04.068 04:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:04.068 04:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:04.068 04:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:04.327 04:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:04.327 "name": "Existed_Raid", 
00:19:04.327 "uuid": "8578fa60-27f4-4bb6-930a-99aaa9d391cd", 00:19:04.327 "strip_size_kb": 64, 00:19:04.327 "state": "configuring", 00:19:04.327 "raid_level": "concat", 00:19:04.327 "superblock": true, 00:19:04.327 "num_base_bdevs": 3, 00:19:04.327 "num_base_bdevs_discovered": 2, 00:19:04.327 "num_base_bdevs_operational": 3, 00:19:04.327 "base_bdevs_list": [ 00:19:04.327 { 00:19:04.327 "name": "BaseBdev1", 00:19:04.327 "uuid": "b665b16b-ec00-4303-af8a-296f6f576708", 00:19:04.327 "is_configured": true, 00:19:04.327 "data_offset": 2048, 00:19:04.327 "data_size": 63488 00:19:04.327 }, 00:19:04.327 { 00:19:04.327 "name": "BaseBdev2", 00:19:04.327 "uuid": "b711db48-23a0-4304-8ae9-0523a77629eb", 00:19:04.327 "is_configured": true, 00:19:04.327 "data_offset": 2048, 00:19:04.327 "data_size": 63488 00:19:04.327 }, 00:19:04.327 { 00:19:04.327 "name": "BaseBdev3", 00:19:04.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.327 "is_configured": false, 00:19:04.327 "data_offset": 0, 00:19:04.327 "data_size": 0 00:19:04.327 } 00:19:04.327 ] 00:19:04.327 }' 00:19:04.327 04:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:04.327 04:14:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:04.897 04:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:05.156 [2024-07-23 04:14:13.900833] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:05.156 [2024-07-23 04:14:13.901129] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:19:05.156 [2024-07-23 04:14:13.901168] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:05.156 [2024-07-23 04:14:13.901494] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 
00:19:05.156 [2024-07-23 04:14:13.901739] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:19:05.156 [2024-07-23 04:14:13.901754] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:19:05.156 BaseBdev3 00:19:05.156 [2024-07-23 04:14:13.901936] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:05.156 04:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:19:05.156 04:14:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:19:05.156 04:14:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:05.156 04:14:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:05.156 04:14:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:05.156 04:14:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:05.156 04:14:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:05.414 04:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:05.673 [ 00:19:05.673 { 00:19:05.673 "name": "BaseBdev3", 00:19:05.673 "aliases": [ 00:19:05.673 "9006ed4e-8a0e-4884-90a2-9f7f3df25d65" 00:19:05.673 ], 00:19:05.673 "product_name": "Malloc disk", 00:19:05.673 "block_size": 512, 00:19:05.673 "num_blocks": 65536, 00:19:05.673 "uuid": "9006ed4e-8a0e-4884-90a2-9f7f3df25d65", 00:19:05.673 "assigned_rate_limits": { 00:19:05.673 "rw_ios_per_sec": 0, 00:19:05.673 "rw_mbytes_per_sec": 0, 00:19:05.673 
"r_mbytes_per_sec": 0, 00:19:05.673 "w_mbytes_per_sec": 0 00:19:05.673 }, 00:19:05.673 "claimed": true, 00:19:05.673 "claim_type": "exclusive_write", 00:19:05.673 "zoned": false, 00:19:05.673 "supported_io_types": { 00:19:05.673 "read": true, 00:19:05.673 "write": true, 00:19:05.673 "unmap": true, 00:19:05.673 "flush": true, 00:19:05.673 "reset": true, 00:19:05.673 "nvme_admin": false, 00:19:05.673 "nvme_io": false, 00:19:05.673 "nvme_io_md": false, 00:19:05.673 "write_zeroes": true, 00:19:05.673 "zcopy": true, 00:19:05.673 "get_zone_info": false, 00:19:05.673 "zone_management": false, 00:19:05.673 "zone_append": false, 00:19:05.673 "compare": false, 00:19:05.673 "compare_and_write": false, 00:19:05.673 "abort": true, 00:19:05.673 "seek_hole": false, 00:19:05.673 "seek_data": false, 00:19:05.673 "copy": true, 00:19:05.673 "nvme_iov_md": false 00:19:05.673 }, 00:19:05.673 "memory_domains": [ 00:19:05.673 { 00:19:05.673 "dma_device_id": "system", 00:19:05.673 "dma_device_type": 1 00:19:05.673 }, 00:19:05.673 { 00:19:05.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:05.673 "dma_device_type": 2 00:19:05.673 } 00:19:05.673 ], 00:19:05.673 "driver_specific": {} 00:19:05.673 } 00:19:05.673 ] 00:19:05.673 04:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:05.673 04:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:05.673 04:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:05.673 04:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:19:05.673 04:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:05.673 04:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:05.673 04:14:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:05.673 04:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:05.673 04:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:05.673 04:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:05.673 04:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:05.673 04:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:05.673 04:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:05.673 04:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:05.673 04:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:05.932 04:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:05.932 "name": "Existed_Raid", 00:19:05.932 "uuid": "8578fa60-27f4-4bb6-930a-99aaa9d391cd", 00:19:05.932 "strip_size_kb": 64, 00:19:05.932 "state": "online", 00:19:05.932 "raid_level": "concat", 00:19:05.932 "superblock": true, 00:19:05.932 "num_base_bdevs": 3, 00:19:05.932 "num_base_bdevs_discovered": 3, 00:19:05.932 "num_base_bdevs_operational": 3, 00:19:05.932 "base_bdevs_list": [ 00:19:05.932 { 00:19:05.933 "name": "BaseBdev1", 00:19:05.933 "uuid": "b665b16b-ec00-4303-af8a-296f6f576708", 00:19:05.933 "is_configured": true, 00:19:05.933 "data_offset": 2048, 00:19:05.933 "data_size": 63488 00:19:05.933 }, 00:19:05.933 { 00:19:05.933 "name": "BaseBdev2", 00:19:05.933 "uuid": "b711db48-23a0-4304-8ae9-0523a77629eb", 00:19:05.933 "is_configured": true, 00:19:05.933 "data_offset": 2048, 00:19:05.933 
"data_size": 63488 00:19:05.933 }, 00:19:05.933 { 00:19:05.933 "name": "BaseBdev3", 00:19:05.933 "uuid": "9006ed4e-8a0e-4884-90a2-9f7f3df25d65", 00:19:05.933 "is_configured": true, 00:19:05.933 "data_offset": 2048, 00:19:05.933 "data_size": 63488 00:19:05.933 } 00:19:05.933 ] 00:19:05.933 }' 00:19:05.933 04:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:05.933 04:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.500 04:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:19:06.500 04:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:06.500 04:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:06.500 04:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:06.500 04:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:06.500 04:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:19:06.500 04:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:06.500 04:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:06.789 [2024-07-23 04:14:15.385337] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:06.789 04:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:06.789 "name": "Existed_Raid", 00:19:06.789 "aliases": [ 00:19:06.789 "8578fa60-27f4-4bb6-930a-99aaa9d391cd" 00:19:06.789 ], 00:19:06.789 "product_name": "Raid Volume", 00:19:06.789 "block_size": 512, 00:19:06.789 "num_blocks": 190464, 00:19:06.789 "uuid": 
"8578fa60-27f4-4bb6-930a-99aaa9d391cd", 00:19:06.789 "assigned_rate_limits": { 00:19:06.789 "rw_ios_per_sec": 0, 00:19:06.789 "rw_mbytes_per_sec": 0, 00:19:06.789 "r_mbytes_per_sec": 0, 00:19:06.789 "w_mbytes_per_sec": 0 00:19:06.789 }, 00:19:06.789 "claimed": false, 00:19:06.789 "zoned": false, 00:19:06.789 "supported_io_types": { 00:19:06.789 "read": true, 00:19:06.789 "write": true, 00:19:06.789 "unmap": true, 00:19:06.789 "flush": true, 00:19:06.789 "reset": true, 00:19:06.789 "nvme_admin": false, 00:19:06.789 "nvme_io": false, 00:19:06.789 "nvme_io_md": false, 00:19:06.789 "write_zeroes": true, 00:19:06.789 "zcopy": false, 00:19:06.789 "get_zone_info": false, 00:19:06.789 "zone_management": false, 00:19:06.789 "zone_append": false, 00:19:06.789 "compare": false, 00:19:06.789 "compare_and_write": false, 00:19:06.789 "abort": false, 00:19:06.789 "seek_hole": false, 00:19:06.789 "seek_data": false, 00:19:06.789 "copy": false, 00:19:06.789 "nvme_iov_md": false 00:19:06.789 }, 00:19:06.789 "memory_domains": [ 00:19:06.789 { 00:19:06.789 "dma_device_id": "system", 00:19:06.789 "dma_device_type": 1 00:19:06.789 }, 00:19:06.789 { 00:19:06.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:06.789 "dma_device_type": 2 00:19:06.789 }, 00:19:06.789 { 00:19:06.789 "dma_device_id": "system", 00:19:06.789 "dma_device_type": 1 00:19:06.789 }, 00:19:06.789 { 00:19:06.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:06.789 "dma_device_type": 2 00:19:06.789 }, 00:19:06.789 { 00:19:06.789 "dma_device_id": "system", 00:19:06.789 "dma_device_type": 1 00:19:06.789 }, 00:19:06.789 { 00:19:06.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:06.789 "dma_device_type": 2 00:19:06.789 } 00:19:06.789 ], 00:19:06.789 "driver_specific": { 00:19:06.789 "raid": { 00:19:06.789 "uuid": "8578fa60-27f4-4bb6-930a-99aaa9d391cd", 00:19:06.789 "strip_size_kb": 64, 00:19:06.789 "state": "online", 00:19:06.789 "raid_level": "concat", 00:19:06.789 "superblock": true, 00:19:06.789 "num_base_bdevs": 
3, 00:19:06.789 "num_base_bdevs_discovered": 3, 00:19:06.789 "num_base_bdevs_operational": 3, 00:19:06.789 "base_bdevs_list": [ 00:19:06.789 { 00:19:06.789 "name": "BaseBdev1", 00:19:06.789 "uuid": "b665b16b-ec00-4303-af8a-296f6f576708", 00:19:06.789 "is_configured": true, 00:19:06.789 "data_offset": 2048, 00:19:06.789 "data_size": 63488 00:19:06.789 }, 00:19:06.789 { 00:19:06.789 "name": "BaseBdev2", 00:19:06.789 "uuid": "b711db48-23a0-4304-8ae9-0523a77629eb", 00:19:06.790 "is_configured": true, 00:19:06.790 "data_offset": 2048, 00:19:06.790 "data_size": 63488 00:19:06.790 }, 00:19:06.790 { 00:19:06.790 "name": "BaseBdev3", 00:19:06.790 "uuid": "9006ed4e-8a0e-4884-90a2-9f7f3df25d65", 00:19:06.790 "is_configured": true, 00:19:06.790 "data_offset": 2048, 00:19:06.790 "data_size": 63488 00:19:06.790 } 00:19:06.790 ] 00:19:06.790 } 00:19:06.790 } 00:19:06.790 }' 00:19:06.790 04:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:06.790 04:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:19:06.790 BaseBdev2 00:19:06.790 BaseBdev3' 00:19:06.790 04:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:06.790 04:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:19:06.790 04:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:07.049 04:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:07.049 "name": "BaseBdev1", 00:19:07.049 "aliases": [ 00:19:07.049 "b665b16b-ec00-4303-af8a-296f6f576708" 00:19:07.049 ], 00:19:07.049 "product_name": "Malloc disk", 00:19:07.049 "block_size": 512, 00:19:07.049 "num_blocks": 65536, 00:19:07.049 
"uuid": "b665b16b-ec00-4303-af8a-296f6f576708", 00:19:07.049 "assigned_rate_limits": { 00:19:07.049 "rw_ios_per_sec": 0, 00:19:07.049 "rw_mbytes_per_sec": 0, 00:19:07.049 "r_mbytes_per_sec": 0, 00:19:07.049 "w_mbytes_per_sec": 0 00:19:07.049 }, 00:19:07.049 "claimed": true, 00:19:07.049 "claim_type": "exclusive_write", 00:19:07.049 "zoned": false, 00:19:07.049 "supported_io_types": { 00:19:07.049 "read": true, 00:19:07.049 "write": true, 00:19:07.049 "unmap": true, 00:19:07.049 "flush": true, 00:19:07.049 "reset": true, 00:19:07.049 "nvme_admin": false, 00:19:07.049 "nvme_io": false, 00:19:07.049 "nvme_io_md": false, 00:19:07.049 "write_zeroes": true, 00:19:07.049 "zcopy": true, 00:19:07.049 "get_zone_info": false, 00:19:07.049 "zone_management": false, 00:19:07.049 "zone_append": false, 00:19:07.049 "compare": false, 00:19:07.049 "compare_and_write": false, 00:19:07.049 "abort": true, 00:19:07.049 "seek_hole": false, 00:19:07.049 "seek_data": false, 00:19:07.049 "copy": true, 00:19:07.049 "nvme_iov_md": false 00:19:07.049 }, 00:19:07.049 "memory_domains": [ 00:19:07.049 { 00:19:07.049 "dma_device_id": "system", 00:19:07.049 "dma_device_type": 1 00:19:07.049 }, 00:19:07.049 { 00:19:07.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:07.049 "dma_device_type": 2 00:19:07.049 } 00:19:07.049 ], 00:19:07.049 "driver_specific": {} 00:19:07.049 }' 00:19:07.049 04:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:07.049 04:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:07.049 04:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:07.049 04:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:07.049 04:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:07.308 04:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 
00:19:07.308 04:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:07.308 04:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:07.308 04:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:07.308 04:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:07.308 04:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:07.308 04:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:07.308 04:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:07.308 04:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:07.308 04:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:07.567 04:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:07.567 "name": "BaseBdev2", 00:19:07.567 "aliases": [ 00:19:07.567 "b711db48-23a0-4304-8ae9-0523a77629eb" 00:19:07.567 ], 00:19:07.567 "product_name": "Malloc disk", 00:19:07.567 "block_size": 512, 00:19:07.567 "num_blocks": 65536, 00:19:07.567 "uuid": "b711db48-23a0-4304-8ae9-0523a77629eb", 00:19:07.567 "assigned_rate_limits": { 00:19:07.567 "rw_ios_per_sec": 0, 00:19:07.567 "rw_mbytes_per_sec": 0, 00:19:07.567 "r_mbytes_per_sec": 0, 00:19:07.567 "w_mbytes_per_sec": 0 00:19:07.567 }, 00:19:07.567 "claimed": true, 00:19:07.567 "claim_type": "exclusive_write", 00:19:07.567 "zoned": false, 00:19:07.567 "supported_io_types": { 00:19:07.567 "read": true, 00:19:07.567 "write": true, 00:19:07.567 "unmap": true, 00:19:07.567 "flush": true, 00:19:07.567 "reset": true, 00:19:07.567 "nvme_admin": false, 00:19:07.567 
"nvme_io": false, 00:19:07.567 "nvme_io_md": false, 00:19:07.567 "write_zeroes": true, 00:19:07.567 "zcopy": true, 00:19:07.567 "get_zone_info": false, 00:19:07.567 "zone_management": false, 00:19:07.567 "zone_append": false, 00:19:07.567 "compare": false, 00:19:07.567 "compare_and_write": false, 00:19:07.567 "abort": true, 00:19:07.567 "seek_hole": false, 00:19:07.567 "seek_data": false, 00:19:07.567 "copy": true, 00:19:07.567 "nvme_iov_md": false 00:19:07.567 }, 00:19:07.567 "memory_domains": [ 00:19:07.567 { 00:19:07.567 "dma_device_id": "system", 00:19:07.567 "dma_device_type": 1 00:19:07.567 }, 00:19:07.567 { 00:19:07.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:07.567 "dma_device_type": 2 00:19:07.567 } 00:19:07.567 ], 00:19:07.567 "driver_specific": {} 00:19:07.567 }' 00:19:07.567 04:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:07.567 04:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:07.567 04:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:07.567 04:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:07.826 04:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:07.826 04:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:07.826 04:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:07.826 04:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:07.826 04:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:07.826 04:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:07.826 04:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:07.826 04:14:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:07.826 04:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:07.826 04:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:07.826 04:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:08.085 04:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:08.085 "name": "BaseBdev3", 00:19:08.085 "aliases": [ 00:19:08.085 "9006ed4e-8a0e-4884-90a2-9f7f3df25d65" 00:19:08.085 ], 00:19:08.085 "product_name": "Malloc disk", 00:19:08.085 "block_size": 512, 00:19:08.085 "num_blocks": 65536, 00:19:08.085 "uuid": "9006ed4e-8a0e-4884-90a2-9f7f3df25d65", 00:19:08.085 "assigned_rate_limits": { 00:19:08.085 "rw_ios_per_sec": 0, 00:19:08.085 "rw_mbytes_per_sec": 0, 00:19:08.085 "r_mbytes_per_sec": 0, 00:19:08.085 "w_mbytes_per_sec": 0 00:19:08.085 }, 00:19:08.085 "claimed": true, 00:19:08.085 "claim_type": "exclusive_write", 00:19:08.085 "zoned": false, 00:19:08.085 "supported_io_types": { 00:19:08.085 "read": true, 00:19:08.085 "write": true, 00:19:08.085 "unmap": true, 00:19:08.085 "flush": true, 00:19:08.085 "reset": true, 00:19:08.085 "nvme_admin": false, 00:19:08.085 "nvme_io": false, 00:19:08.085 "nvme_io_md": false, 00:19:08.085 "write_zeroes": true, 00:19:08.085 "zcopy": true, 00:19:08.085 "get_zone_info": false, 00:19:08.085 "zone_management": false, 00:19:08.085 "zone_append": false, 00:19:08.085 "compare": false, 00:19:08.085 "compare_and_write": false, 00:19:08.085 "abort": true, 00:19:08.086 "seek_hole": false, 00:19:08.086 "seek_data": false, 00:19:08.086 "copy": true, 00:19:08.086 "nvme_iov_md": false 00:19:08.086 }, 00:19:08.086 "memory_domains": [ 00:19:08.086 { 00:19:08.086 "dma_device_id": 
"system", 00:19:08.086 "dma_device_type": 1 00:19:08.086 }, 00:19:08.086 { 00:19:08.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:08.086 "dma_device_type": 2 00:19:08.086 } 00:19:08.086 ], 00:19:08.086 "driver_specific": {} 00:19:08.086 }' 00:19:08.086 04:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:08.086 04:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:08.344 04:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:08.344 04:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:08.344 04:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:08.344 04:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:08.344 04:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:08.344 04:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:08.344 04:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:08.344 04:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:08.344 04:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:08.604 04:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:08.604 04:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:08.604 [2024-07-23 04:14:17.346352] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:08.604 [2024-07-23 04:14:17.346389] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:08.604 [2024-07-23 
04:14:17.346454] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:08.863 04:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:19:08.863 04:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:19:08.863 04:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:08.863 04:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:19:08.863 04:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:19:08.863 04:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:19:08.863 04:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:08.863 04:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:19:08.863 04:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:08.863 04:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:08.863 04:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:08.863 04:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:08.863 04:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:08.863 04:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:08.863 04:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:08.863 04:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:19:08.863 04:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:08.863 04:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:08.863 "name": "Existed_Raid", 00:19:08.863 "uuid": "8578fa60-27f4-4bb6-930a-99aaa9d391cd", 00:19:08.863 "strip_size_kb": 64, 00:19:08.863 "state": "offline", 00:19:08.863 "raid_level": "concat", 00:19:08.863 "superblock": true, 00:19:08.863 "num_base_bdevs": 3, 00:19:08.863 "num_base_bdevs_discovered": 2, 00:19:08.863 "num_base_bdevs_operational": 2, 00:19:08.863 "base_bdevs_list": [ 00:19:08.863 { 00:19:08.863 "name": null, 00:19:08.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.863 "is_configured": false, 00:19:08.863 "data_offset": 2048, 00:19:08.863 "data_size": 63488 00:19:08.863 }, 00:19:08.863 { 00:19:08.863 "name": "BaseBdev2", 00:19:08.863 "uuid": "b711db48-23a0-4304-8ae9-0523a77629eb", 00:19:08.863 "is_configured": true, 00:19:08.863 "data_offset": 2048, 00:19:08.863 "data_size": 63488 00:19:08.863 }, 00:19:08.863 { 00:19:08.863 "name": "BaseBdev3", 00:19:08.863 "uuid": "9006ed4e-8a0e-4884-90a2-9f7f3df25d65", 00:19:08.863 "is_configured": true, 00:19:08.863 "data_offset": 2048, 00:19:08.863 "data_size": 63488 00:19:08.863 } 00:19:08.863 ] 00:19:08.863 }' 00:19:08.863 04:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:08.863 04:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.431 04:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:19:09.431 04:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:09.691 04:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:19:09.691 04:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:09.691 04:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:09.691 04:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:09.691 04:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:09.950 [2024-07-23 04:14:18.641495] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:10.209 04:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:10.209 04:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:10.209 04:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:10.209 04:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:10.467 04:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:10.467 04:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:10.467 04:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:10.467 [2024-07-23 04:14:19.222076] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:10.467 [2024-07-23 04:14:19.222153] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:19:10.726 04:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- 
# (( i++ )) 00:19:10.726 04:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:10.726 04:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:19:10.726 04:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:10.985 04:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:19:10.985 04:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:19:10.985 04:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:19:10.985 04:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:19:10.985 04:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:10.985 04:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:11.243 BaseBdev2 00:19:11.243 04:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:19:11.243 04:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:19:11.243 04:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:11.243 04:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:11.243 04:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:11.243 04:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:11.243 04:14:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:11.501 04:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:11.760 [ 00:19:11.760 { 00:19:11.760 "name": "BaseBdev2", 00:19:11.760 "aliases": [ 00:19:11.760 "afce7b02-29fe-4c23-961b-95f9db6d5c44" 00:19:11.760 ], 00:19:11.760 "product_name": "Malloc disk", 00:19:11.760 "block_size": 512, 00:19:11.760 "num_blocks": 65536, 00:19:11.760 "uuid": "afce7b02-29fe-4c23-961b-95f9db6d5c44", 00:19:11.760 "assigned_rate_limits": { 00:19:11.760 "rw_ios_per_sec": 0, 00:19:11.760 "rw_mbytes_per_sec": 0, 00:19:11.760 "r_mbytes_per_sec": 0, 00:19:11.760 "w_mbytes_per_sec": 0 00:19:11.760 }, 00:19:11.760 "claimed": false, 00:19:11.760 "zoned": false, 00:19:11.760 "supported_io_types": { 00:19:11.760 "read": true, 00:19:11.760 "write": true, 00:19:11.760 "unmap": true, 00:19:11.760 "flush": true, 00:19:11.760 "reset": true, 00:19:11.760 "nvme_admin": false, 00:19:11.760 "nvme_io": false, 00:19:11.760 "nvme_io_md": false, 00:19:11.760 "write_zeroes": true, 00:19:11.760 "zcopy": true, 00:19:11.760 "get_zone_info": false, 00:19:11.760 "zone_management": false, 00:19:11.760 "zone_append": false, 00:19:11.760 "compare": false, 00:19:11.760 "compare_and_write": false, 00:19:11.760 "abort": true, 00:19:11.760 "seek_hole": false, 00:19:11.760 "seek_data": false, 00:19:11.760 "copy": true, 00:19:11.760 "nvme_iov_md": false 00:19:11.760 }, 00:19:11.760 "memory_domains": [ 00:19:11.760 { 00:19:11.760 "dma_device_id": "system", 00:19:11.760 "dma_device_type": 1 00:19:11.760 }, 00:19:11.760 { 00:19:11.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:11.760 "dma_device_type": 2 00:19:11.760 } 00:19:11.760 ], 00:19:11.760 "driver_specific": {} 00:19:11.760 } 00:19:11.760 ] 00:19:11.760 
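The sizes reported throughout this log are consistent with each other, and the arithmetic is worth spelling out: `bdev_malloc_create 32 512` creates a 32 MiB bdev with 512-byte blocks (65536 blocks), and with `superblock: true` each base bdev contributes `data_size = num_blocks - data_offset` blocks to the concat volume, matching the `"num_blocks": 190464` shown for Existed_Raid. A sketch of that math (the superblock interpretation of `data_offset` is an inference from the fields above, not something the log states directly):

```shell
# Size math behind "bdev_malloc_create 32 512" and the Existed_Raid volume.
mib=32 block_size=512 num_base_bdevs=3 data_offset=2048

num_blocks=$(( mib * 1024 * 1024 / block_size ))   # per-base-bdev block count
data_size=$(( num_blocks - data_offset ))          # blocks left after superblock
raid_blocks=$(( data_size * num_base_bdevs ))      # concat volume block count

echo "$num_blocks $data_size $raid_blocks"
```

Running this prints 65536, 63488, and 190464, matching the `num_blocks`, `data_size`, and raid `blockcnt` values in the JSON dumps above.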
04:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:11.760 04:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:11.760 04:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:11.760 04:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:12.019 BaseBdev3 00:19:12.019 04:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:19:12.019 04:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:19:12.019 04:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:12.019 04:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:12.019 04:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:12.019 04:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:12.019 04:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:12.278 04:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:12.278 [ 00:19:12.278 { 00:19:12.278 "name": "BaseBdev3", 00:19:12.278 "aliases": [ 00:19:12.278 "3327088a-5bb9-46fc-a4bf-bd4ec655f7a5" 00:19:12.278 ], 00:19:12.278 "product_name": "Malloc disk", 00:19:12.278 "block_size": 512, 00:19:12.278 "num_blocks": 65536, 00:19:12.278 "uuid": "3327088a-5bb9-46fc-a4bf-bd4ec655f7a5", 
00:19:12.278 "assigned_rate_limits": { 00:19:12.278 "rw_ios_per_sec": 0, 00:19:12.278 "rw_mbytes_per_sec": 0, 00:19:12.278 "r_mbytes_per_sec": 0, 00:19:12.278 "w_mbytes_per_sec": 0 00:19:12.278 }, 00:19:12.278 "claimed": false, 00:19:12.278 "zoned": false, 00:19:12.278 "supported_io_types": { 00:19:12.278 "read": true, 00:19:12.278 "write": true, 00:19:12.278 "unmap": true, 00:19:12.278 "flush": true, 00:19:12.278 "reset": true, 00:19:12.278 "nvme_admin": false, 00:19:12.278 "nvme_io": false, 00:19:12.278 "nvme_io_md": false, 00:19:12.278 "write_zeroes": true, 00:19:12.278 "zcopy": true, 00:19:12.278 "get_zone_info": false, 00:19:12.278 "zone_management": false, 00:19:12.278 "zone_append": false, 00:19:12.278 "compare": false, 00:19:12.278 "compare_and_write": false, 00:19:12.278 "abort": true, 00:19:12.278 "seek_hole": false, 00:19:12.278 "seek_data": false, 00:19:12.278 "copy": true, 00:19:12.278 "nvme_iov_md": false 00:19:12.278 }, 00:19:12.278 "memory_domains": [ 00:19:12.278 { 00:19:12.278 "dma_device_id": "system", 00:19:12.278 "dma_device_type": 1 00:19:12.278 }, 00:19:12.278 { 00:19:12.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:12.278 "dma_device_type": 2 00:19:12.278 } 00:19:12.278 ], 00:19:12.278 "driver_specific": {} 00:19:12.278 } 00:19:12.278 ] 00:19:12.278 04:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:12.278 04:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:12.278 04:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:12.278 04:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:12.537 [2024-07-23 04:14:21.249121] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev1 00:19:12.537 [2024-07-23 04:14:21.249188] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:12.537 [2024-07-23 04:14:21.249223] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:12.537 [2024-07-23 04:14:21.251585] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:12.537 04:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:12.537 04:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:12.537 04:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:12.537 04:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:12.537 04:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:12.537 04:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:12.537 04:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:12.537 04:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:12.537 04:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:12.537 04:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:12.537 04:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:12.537 04:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:12.796 04:14:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:12.796 "name": "Existed_Raid", 00:19:12.796 "uuid": "8954f0de-105a-40bb-b551-80a53f211639", 00:19:12.796 "strip_size_kb": 64, 00:19:12.796 "state": "configuring", 00:19:12.796 "raid_level": "concat", 00:19:12.796 "superblock": true, 00:19:12.796 "num_base_bdevs": 3, 00:19:12.796 "num_base_bdevs_discovered": 2, 00:19:12.796 "num_base_bdevs_operational": 3, 00:19:12.796 "base_bdevs_list": [ 00:19:12.796 { 00:19:12.796 "name": "BaseBdev1", 00:19:12.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.796 "is_configured": false, 00:19:12.796 "data_offset": 0, 00:19:12.796 "data_size": 0 00:19:12.796 }, 00:19:12.796 { 00:19:12.796 "name": "BaseBdev2", 00:19:12.796 "uuid": "afce7b02-29fe-4c23-961b-95f9db6d5c44", 00:19:12.796 "is_configured": true, 00:19:12.796 "data_offset": 2048, 00:19:12.796 "data_size": 63488 00:19:12.796 }, 00:19:12.796 { 00:19:12.796 "name": "BaseBdev3", 00:19:12.796 "uuid": "3327088a-5bb9-46fc-a4bf-bd4ec655f7a5", 00:19:12.796 "is_configured": true, 00:19:12.796 "data_offset": 2048, 00:19:12.796 "data_size": 63488 00:19:12.796 } 00:19:12.796 ] 00:19:12.796 }' 00:19:12.796 04:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:12.796 04:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.363 04:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:19:13.623 [2024-07-23 04:14:22.279908] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:13.623 04:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:13.623 04:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:13.623 04:14:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:13.623 04:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:13.623 04:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:13.623 04:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:13.623 04:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:13.623 04:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:13.623 04:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:13.623 04:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:13.623 04:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:13.623 04:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:13.882 04:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:13.882 "name": "Existed_Raid", 00:19:13.882 "uuid": "8954f0de-105a-40bb-b551-80a53f211639", 00:19:13.882 "strip_size_kb": 64, 00:19:13.882 "state": "configuring", 00:19:13.882 "raid_level": "concat", 00:19:13.882 "superblock": true, 00:19:13.882 "num_base_bdevs": 3, 00:19:13.882 "num_base_bdevs_discovered": 1, 00:19:13.882 "num_base_bdevs_operational": 3, 00:19:13.882 "base_bdevs_list": [ 00:19:13.882 { 00:19:13.882 "name": "BaseBdev1", 00:19:13.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.882 "is_configured": false, 00:19:13.882 "data_offset": 0, 00:19:13.882 "data_size": 0 00:19:13.882 }, 00:19:13.882 { 00:19:13.882 "name": 
null, 00:19:13.882 "uuid": "afce7b02-29fe-4c23-961b-95f9db6d5c44", 00:19:13.882 "is_configured": false, 00:19:13.882 "data_offset": 2048, 00:19:13.882 "data_size": 63488 00:19:13.882 }, 00:19:13.882 { 00:19:13.882 "name": "BaseBdev3", 00:19:13.882 "uuid": "3327088a-5bb9-46fc-a4bf-bd4ec655f7a5", 00:19:13.882 "is_configured": true, 00:19:13.882 "data_offset": 2048, 00:19:13.882 "data_size": 63488 00:19:13.882 } 00:19:13.882 ] 00:19:13.882 }' 00:19:13.882 04:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:13.882 04:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.449 04:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:14.449 04:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:14.707 04:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:19:14.707 04:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:14.966 [2024-07-23 04:14:23.573029] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:14.966 BaseBdev1 00:19:14.966 04:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:19:14.966 04:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:19:14.966 04:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:14.966 04:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:14.966 04:14:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:14.966 04:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:14.966 04:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:15.224 04:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:15.483 [ 00:19:15.483 { 00:19:15.483 "name": "BaseBdev1", 00:19:15.483 "aliases": [ 00:19:15.483 "b0e159ca-e8d1-4071-a4ce-22e8c4ec5b4a" 00:19:15.483 ], 00:19:15.483 "product_name": "Malloc disk", 00:19:15.483 "block_size": 512, 00:19:15.483 "num_blocks": 65536, 00:19:15.483 "uuid": "b0e159ca-e8d1-4071-a4ce-22e8c4ec5b4a", 00:19:15.483 "assigned_rate_limits": { 00:19:15.483 "rw_ios_per_sec": 0, 00:19:15.483 "rw_mbytes_per_sec": 0, 00:19:15.483 "r_mbytes_per_sec": 0, 00:19:15.483 "w_mbytes_per_sec": 0 00:19:15.483 }, 00:19:15.483 "claimed": true, 00:19:15.483 "claim_type": "exclusive_write", 00:19:15.483 "zoned": false, 00:19:15.483 "supported_io_types": { 00:19:15.483 "read": true, 00:19:15.483 "write": true, 00:19:15.483 "unmap": true, 00:19:15.483 "flush": true, 00:19:15.483 "reset": true, 00:19:15.483 "nvme_admin": false, 00:19:15.483 "nvme_io": false, 00:19:15.483 "nvme_io_md": false, 00:19:15.483 "write_zeroes": true, 00:19:15.483 "zcopy": true, 00:19:15.483 "get_zone_info": false, 00:19:15.483 "zone_management": false, 00:19:15.483 "zone_append": false, 00:19:15.483 "compare": false, 00:19:15.483 "compare_and_write": false, 00:19:15.483 "abort": true, 00:19:15.484 "seek_hole": false, 00:19:15.484 "seek_data": false, 00:19:15.484 "copy": true, 00:19:15.484 "nvme_iov_md": false 00:19:15.484 }, 00:19:15.484 "memory_domains": [ 00:19:15.484 { 00:19:15.484 "dma_device_id": 
"system", 00:19:15.484 "dma_device_type": 1 00:19:15.484 }, 00:19:15.484 { 00:19:15.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:15.484 "dma_device_type": 2 00:19:15.484 } 00:19:15.484 ], 00:19:15.484 "driver_specific": {} 00:19:15.484 } 00:19:15.484 ] 00:19:15.484 04:14:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:15.484 04:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:15.484 04:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:15.484 04:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:15.484 04:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:15.484 04:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:15.484 04:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:15.484 04:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:15.484 04:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:15.484 04:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:15.484 04:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:15.484 04:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:15.484 04:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:15.743 04:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:19:15.743 "name": "Existed_Raid", 00:19:15.743 "uuid": "8954f0de-105a-40bb-b551-80a53f211639", 00:19:15.743 "strip_size_kb": 64, 00:19:15.743 "state": "configuring", 00:19:15.743 "raid_level": "concat", 00:19:15.743 "superblock": true, 00:19:15.743 "num_base_bdevs": 3, 00:19:15.743 "num_base_bdevs_discovered": 2, 00:19:15.743 "num_base_bdevs_operational": 3, 00:19:15.743 "base_bdevs_list": [ 00:19:15.743 { 00:19:15.743 "name": "BaseBdev1", 00:19:15.743 "uuid": "b0e159ca-e8d1-4071-a4ce-22e8c4ec5b4a", 00:19:15.743 "is_configured": true, 00:19:15.743 "data_offset": 2048, 00:19:15.743 "data_size": 63488 00:19:15.743 }, 00:19:15.743 { 00:19:15.743 "name": null, 00:19:15.743 "uuid": "afce7b02-29fe-4c23-961b-95f9db6d5c44", 00:19:15.743 "is_configured": false, 00:19:15.743 "data_offset": 2048, 00:19:15.743 "data_size": 63488 00:19:15.743 }, 00:19:15.743 { 00:19:15.743 "name": "BaseBdev3", 00:19:15.743 "uuid": "3327088a-5bb9-46fc-a4bf-bd4ec655f7a5", 00:19:15.743 "is_configured": true, 00:19:15.743 "data_offset": 2048, 00:19:15.743 "data_size": 63488 00:19:15.743 } 00:19:15.743 ] 00:19:15.743 }' 00:19:15.743 04:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:15.743 04:14:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.310 04:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:16.310 04:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:16.310 04:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:19:16.310 04:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 
00:19:16.569 [2024-07-23 04:14:25.241862] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:16.569 04:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:16.569 04:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:16.569 04:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:16.569 04:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:16.569 04:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:16.569 04:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:16.569 04:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:16.569 04:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:16.569 04:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:16.569 04:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:16.569 04:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:16.569 04:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:16.828 04:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:16.828 "name": "Existed_Raid", 00:19:16.828 "uuid": "8954f0de-105a-40bb-b551-80a53f211639", 00:19:16.828 "strip_size_kb": 64, 00:19:16.828 "state": "configuring", 00:19:16.828 "raid_level": "concat", 00:19:16.828 "superblock": true, 00:19:16.828 
"num_base_bdevs": 3, 00:19:16.828 "num_base_bdevs_discovered": 1, 00:19:16.828 "num_base_bdevs_operational": 3, 00:19:16.828 "base_bdevs_list": [ 00:19:16.828 { 00:19:16.828 "name": "BaseBdev1", 00:19:16.828 "uuid": "b0e159ca-e8d1-4071-a4ce-22e8c4ec5b4a", 00:19:16.828 "is_configured": true, 00:19:16.828 "data_offset": 2048, 00:19:16.828 "data_size": 63488 00:19:16.828 }, 00:19:16.828 { 00:19:16.828 "name": null, 00:19:16.828 "uuid": "afce7b02-29fe-4c23-961b-95f9db6d5c44", 00:19:16.828 "is_configured": false, 00:19:16.828 "data_offset": 2048, 00:19:16.828 "data_size": 63488 00:19:16.828 }, 00:19:16.828 { 00:19:16.828 "name": null, 00:19:16.828 "uuid": "3327088a-5bb9-46fc-a4bf-bd4ec655f7a5", 00:19:16.828 "is_configured": false, 00:19:16.828 "data_offset": 2048, 00:19:16.828 "data_size": 63488 00:19:16.828 } 00:19:16.828 ] 00:19:16.828 }' 00:19:16.828 04:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:16.828 04:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.396 04:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:17.396 04:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:17.655 04:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:19:17.655 04:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:17.914 [2024-07-23 04:14:26.497273] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:17.914 04:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring 
concat 64 3 00:19:17.914 04:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:17.914 04:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:17.914 04:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:17.914 04:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:17.914 04:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:17.914 04:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:17.914 04:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:17.915 04:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:17.915 04:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:17.915 04:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:17.915 04:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:18.173 04:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:18.173 "name": "Existed_Raid", 00:19:18.173 "uuid": "8954f0de-105a-40bb-b551-80a53f211639", 00:19:18.173 "strip_size_kb": 64, 00:19:18.173 "state": "configuring", 00:19:18.173 "raid_level": "concat", 00:19:18.173 "superblock": true, 00:19:18.173 "num_base_bdevs": 3, 00:19:18.173 "num_base_bdevs_discovered": 2, 00:19:18.173 "num_base_bdevs_operational": 3, 00:19:18.173 "base_bdevs_list": [ 00:19:18.173 { 00:19:18.173 "name": "BaseBdev1", 00:19:18.173 "uuid": 
"b0e159ca-e8d1-4071-a4ce-22e8c4ec5b4a", 00:19:18.173 "is_configured": true, 00:19:18.173 "data_offset": 2048, 00:19:18.173 "data_size": 63488 00:19:18.173 }, 00:19:18.173 { 00:19:18.173 "name": null, 00:19:18.173 "uuid": "afce7b02-29fe-4c23-961b-95f9db6d5c44", 00:19:18.173 "is_configured": false, 00:19:18.173 "data_offset": 2048, 00:19:18.173 "data_size": 63488 00:19:18.173 }, 00:19:18.173 { 00:19:18.173 "name": "BaseBdev3", 00:19:18.173 "uuid": "3327088a-5bb9-46fc-a4bf-bd4ec655f7a5", 00:19:18.173 "is_configured": true, 00:19:18.173 "data_offset": 2048, 00:19:18.173 "data_size": 63488 00:19:18.174 } 00:19:18.174 ] 00:19:18.174 }' 00:19:18.174 04:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:18.174 04:14:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.741 04:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.741 04:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:18.741 04:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:19:18.741 04:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:18.999 [2024-07-23 04:14:27.712596] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:19.258 04:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:19.258 04:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:19.258 04:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:19:19.258 04:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:19.258 04:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:19.258 04:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:19.258 04:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:19.258 04:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:19.258 04:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:19.258 04:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:19.258 04:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:19.258 04:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:19.516 04:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:19.516 "name": "Existed_Raid", 00:19:19.516 "uuid": "8954f0de-105a-40bb-b551-80a53f211639", 00:19:19.516 "strip_size_kb": 64, 00:19:19.516 "state": "configuring", 00:19:19.516 "raid_level": "concat", 00:19:19.516 "superblock": true, 00:19:19.516 "num_base_bdevs": 3, 00:19:19.516 "num_base_bdevs_discovered": 1, 00:19:19.516 "num_base_bdevs_operational": 3, 00:19:19.516 "base_bdevs_list": [ 00:19:19.516 { 00:19:19.516 "name": null, 00:19:19.516 "uuid": "b0e159ca-e8d1-4071-a4ce-22e8c4ec5b4a", 00:19:19.516 "is_configured": false, 00:19:19.516 "data_offset": 2048, 00:19:19.516 "data_size": 63488 00:19:19.516 }, 00:19:19.516 { 00:19:19.516 "name": null, 00:19:19.516 "uuid": "afce7b02-29fe-4c23-961b-95f9db6d5c44", 
00:19:19.516 "is_configured": false, 00:19:19.516 "data_offset": 2048, 00:19:19.516 "data_size": 63488 00:19:19.516 }, 00:19:19.516 { 00:19:19.516 "name": "BaseBdev3", 00:19:19.516 "uuid": "3327088a-5bb9-46fc-a4bf-bd4ec655f7a5", 00:19:19.516 "is_configured": true, 00:19:19.516 "data_offset": 2048, 00:19:19.516 "data_size": 63488 00:19:19.516 } 00:19:19.516 ] 00:19:19.516 }' 00:19:19.516 04:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:19.516 04:14:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.144 04:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:20.144 04:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:20.144 04:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:19:20.144 04:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:20.402 [2024-07-23 04:14:29.109026] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:20.402 04:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:20.402 04:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:20.402 04:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:20.402 04:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:20.402 04:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local 
strip_size=64 00:19:20.402 04:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:20.402 04:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:20.402 04:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:20.402 04:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:20.402 04:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:20.402 04:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:20.402 04:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:20.661 04:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:20.661 "name": "Existed_Raid", 00:19:20.661 "uuid": "8954f0de-105a-40bb-b551-80a53f211639", 00:19:20.661 "strip_size_kb": 64, 00:19:20.661 "state": "configuring", 00:19:20.661 "raid_level": "concat", 00:19:20.661 "superblock": true, 00:19:20.661 "num_base_bdevs": 3, 00:19:20.661 "num_base_bdevs_discovered": 2, 00:19:20.661 "num_base_bdevs_operational": 3, 00:19:20.661 "base_bdevs_list": [ 00:19:20.661 { 00:19:20.661 "name": null, 00:19:20.661 "uuid": "b0e159ca-e8d1-4071-a4ce-22e8c4ec5b4a", 00:19:20.661 "is_configured": false, 00:19:20.661 "data_offset": 2048, 00:19:20.661 "data_size": 63488 00:19:20.661 }, 00:19:20.661 { 00:19:20.661 "name": "BaseBdev2", 00:19:20.661 "uuid": "afce7b02-29fe-4c23-961b-95f9db6d5c44", 00:19:20.661 "is_configured": true, 00:19:20.661 "data_offset": 2048, 00:19:20.661 "data_size": 63488 00:19:20.661 }, 00:19:20.661 { 00:19:20.661 "name": "BaseBdev3", 00:19:20.661 "uuid": "3327088a-5bb9-46fc-a4bf-bd4ec655f7a5", 
00:19:20.661 "is_configured": true, 00:19:20.661 "data_offset": 2048, 00:19:20.661 "data_size": 63488 00:19:20.661 } 00:19:20.661 ] 00:19:20.661 }' 00:19:20.661 04:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:20.661 04:14:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.227 04:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:21.227 04:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:21.486 04:14:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:19:21.486 04:14:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:21.486 04:14:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:21.744 04:14:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u b0e159ca-e8d1-4071-a4ce-22e8c4ec5b4a 00:19:22.002 [2024-07-23 04:14:30.667298] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:22.002 [2024-07-23 04:14:30.667560] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041780 00:19:22.002 [2024-07-23 04:14:30.667585] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:22.002 [2024-07-23 04:14:30.667894] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010980 00:19:22.002 [2024-07-23 04:14:30.668121] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: 
raid bdev generic 0x616000041780 00:19:22.002 [2024-07-23 04:14:30.668136] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000041780 00:19:22.002 NewBaseBdev 00:19:22.002 [2024-07-23 04:14:30.668342] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:22.002 04:14:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:19:22.002 04:14:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:19:22.003 04:14:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:22.003 04:14:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:22.003 04:14:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:22.003 04:14:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:22.003 04:14:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:22.260 04:14:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:22.518 [ 00:19:22.518 { 00:19:22.518 "name": "NewBaseBdev", 00:19:22.518 "aliases": [ 00:19:22.518 "b0e159ca-e8d1-4071-a4ce-22e8c4ec5b4a" 00:19:22.518 ], 00:19:22.518 "product_name": "Malloc disk", 00:19:22.518 "block_size": 512, 00:19:22.518 "num_blocks": 65536, 00:19:22.518 "uuid": "b0e159ca-e8d1-4071-a4ce-22e8c4ec5b4a", 00:19:22.518 "assigned_rate_limits": { 00:19:22.519 "rw_ios_per_sec": 0, 00:19:22.519 "rw_mbytes_per_sec": 0, 00:19:22.519 "r_mbytes_per_sec": 0, 00:19:22.519 "w_mbytes_per_sec": 0 00:19:22.519 }, 00:19:22.519 
"claimed": true, 00:19:22.519 "claim_type": "exclusive_write", 00:19:22.519 "zoned": false, 00:19:22.519 "supported_io_types": { 00:19:22.519 "read": true, 00:19:22.519 "write": true, 00:19:22.519 "unmap": true, 00:19:22.519 "flush": true, 00:19:22.519 "reset": true, 00:19:22.519 "nvme_admin": false, 00:19:22.519 "nvme_io": false, 00:19:22.519 "nvme_io_md": false, 00:19:22.519 "write_zeroes": true, 00:19:22.519 "zcopy": true, 00:19:22.519 "get_zone_info": false, 00:19:22.519 "zone_management": false, 00:19:22.519 "zone_append": false, 00:19:22.519 "compare": false, 00:19:22.519 "compare_and_write": false, 00:19:22.519 "abort": true, 00:19:22.519 "seek_hole": false, 00:19:22.519 "seek_data": false, 00:19:22.519 "copy": true, 00:19:22.519 "nvme_iov_md": false 00:19:22.519 }, 00:19:22.519 "memory_domains": [ 00:19:22.519 { 00:19:22.519 "dma_device_id": "system", 00:19:22.519 "dma_device_type": 1 00:19:22.519 }, 00:19:22.519 { 00:19:22.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:22.519 "dma_device_type": 2 00:19:22.519 } 00:19:22.519 ], 00:19:22.519 "driver_specific": {} 00:19:22.519 } 00:19:22.519 ] 00:19:22.519 04:14:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:22.519 04:14:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:19:22.519 04:14:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:22.519 04:14:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:22.519 04:14:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:22.519 04:14:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:22.519 04:14:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:22.519 04:14:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:22.519 04:14:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:22.519 04:14:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:22.519 04:14:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:22.519 04:14:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.519 04:14:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:22.777 04:14:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:22.777 "name": "Existed_Raid", 00:19:22.777 "uuid": "8954f0de-105a-40bb-b551-80a53f211639", 00:19:22.777 "strip_size_kb": 64, 00:19:22.777 "state": "online", 00:19:22.777 "raid_level": "concat", 00:19:22.777 "superblock": true, 00:19:22.777 "num_base_bdevs": 3, 00:19:22.777 "num_base_bdevs_discovered": 3, 00:19:22.777 "num_base_bdevs_operational": 3, 00:19:22.777 "base_bdevs_list": [ 00:19:22.777 { 00:19:22.777 "name": "NewBaseBdev", 00:19:22.777 "uuid": "b0e159ca-e8d1-4071-a4ce-22e8c4ec5b4a", 00:19:22.777 "is_configured": true, 00:19:22.777 "data_offset": 2048, 00:19:22.777 "data_size": 63488 00:19:22.777 }, 00:19:22.777 { 00:19:22.777 "name": "BaseBdev2", 00:19:22.777 "uuid": "afce7b02-29fe-4c23-961b-95f9db6d5c44", 00:19:22.777 "is_configured": true, 00:19:22.777 "data_offset": 2048, 00:19:22.777 "data_size": 63488 00:19:22.777 }, 00:19:22.777 { 00:19:22.777 "name": "BaseBdev3", 00:19:22.777 "uuid": "3327088a-5bb9-46fc-a4bf-bd4ec655f7a5", 00:19:22.777 "is_configured": true, 00:19:22.777 "data_offset": 2048, 00:19:22.777 "data_size": 63488 00:19:22.777 } 00:19:22.777 ] 00:19:22.777 }' 00:19:22.777 
04:14:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:22.777 04:14:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.344 04:14:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:19:23.344 04:14:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:23.344 04:14:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:23.344 04:14:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:23.344 04:14:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:23.344 04:14:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:19:23.344 04:14:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:23.344 04:14:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:23.603 [2024-07-23 04:14:32.159781] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:23.603 04:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:23.603 "name": "Existed_Raid", 00:19:23.603 "aliases": [ 00:19:23.603 "8954f0de-105a-40bb-b551-80a53f211639" 00:19:23.603 ], 00:19:23.603 "product_name": "Raid Volume", 00:19:23.603 "block_size": 512, 00:19:23.603 "num_blocks": 190464, 00:19:23.603 "uuid": "8954f0de-105a-40bb-b551-80a53f211639", 00:19:23.603 "assigned_rate_limits": { 00:19:23.603 "rw_ios_per_sec": 0, 00:19:23.603 "rw_mbytes_per_sec": 0, 00:19:23.603 "r_mbytes_per_sec": 0, 00:19:23.603 "w_mbytes_per_sec": 0 00:19:23.603 }, 00:19:23.603 "claimed": false, 00:19:23.603 "zoned": false, 00:19:23.603 
"supported_io_types": { 00:19:23.603 "read": true, 00:19:23.603 "write": true, 00:19:23.603 "unmap": true, 00:19:23.603 "flush": true, 00:19:23.603 "reset": true, 00:19:23.603 "nvme_admin": false, 00:19:23.603 "nvme_io": false, 00:19:23.603 "nvme_io_md": false, 00:19:23.603 "write_zeroes": true, 00:19:23.603 "zcopy": false, 00:19:23.603 "get_zone_info": false, 00:19:23.603 "zone_management": false, 00:19:23.603 "zone_append": false, 00:19:23.603 "compare": false, 00:19:23.603 "compare_and_write": false, 00:19:23.603 "abort": false, 00:19:23.603 "seek_hole": false, 00:19:23.603 "seek_data": false, 00:19:23.603 "copy": false, 00:19:23.603 "nvme_iov_md": false 00:19:23.603 }, 00:19:23.603 "memory_domains": [ 00:19:23.603 { 00:19:23.603 "dma_device_id": "system", 00:19:23.603 "dma_device_type": 1 00:19:23.603 }, 00:19:23.603 { 00:19:23.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:23.603 "dma_device_type": 2 00:19:23.603 }, 00:19:23.603 { 00:19:23.603 "dma_device_id": "system", 00:19:23.603 "dma_device_type": 1 00:19:23.603 }, 00:19:23.603 { 00:19:23.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:23.603 "dma_device_type": 2 00:19:23.603 }, 00:19:23.603 { 00:19:23.603 "dma_device_id": "system", 00:19:23.603 "dma_device_type": 1 00:19:23.603 }, 00:19:23.603 { 00:19:23.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:23.603 "dma_device_type": 2 00:19:23.603 } 00:19:23.603 ], 00:19:23.603 "driver_specific": { 00:19:23.603 "raid": { 00:19:23.604 "uuid": "8954f0de-105a-40bb-b551-80a53f211639", 00:19:23.604 "strip_size_kb": 64, 00:19:23.604 "state": "online", 00:19:23.604 "raid_level": "concat", 00:19:23.604 "superblock": true, 00:19:23.604 "num_base_bdevs": 3, 00:19:23.604 "num_base_bdevs_discovered": 3, 00:19:23.604 "num_base_bdevs_operational": 3, 00:19:23.604 "base_bdevs_list": [ 00:19:23.604 { 00:19:23.604 "name": "NewBaseBdev", 00:19:23.604 "uuid": "b0e159ca-e8d1-4071-a4ce-22e8c4ec5b4a", 00:19:23.604 "is_configured": true, 00:19:23.604 "data_offset": 2048, 
00:19:23.604 "data_size": 63488 00:19:23.604 }, 00:19:23.604 { 00:19:23.604 "name": "BaseBdev2", 00:19:23.604 "uuid": "afce7b02-29fe-4c23-961b-95f9db6d5c44", 00:19:23.604 "is_configured": true, 00:19:23.604 "data_offset": 2048, 00:19:23.604 "data_size": 63488 00:19:23.604 }, 00:19:23.604 { 00:19:23.604 "name": "BaseBdev3", 00:19:23.604 "uuid": "3327088a-5bb9-46fc-a4bf-bd4ec655f7a5", 00:19:23.604 "is_configured": true, 00:19:23.604 "data_offset": 2048, 00:19:23.604 "data_size": 63488 00:19:23.604 } 00:19:23.604 ] 00:19:23.604 } 00:19:23.604 } 00:19:23.604 }' 00:19:23.604 04:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:23.604 04:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:19:23.604 BaseBdev2 00:19:23.604 BaseBdev3' 00:19:23.604 04:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:23.604 04:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:23.604 04:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:19:23.863 04:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:23.863 "name": "NewBaseBdev", 00:19:23.863 "aliases": [ 00:19:23.863 "b0e159ca-e8d1-4071-a4ce-22e8c4ec5b4a" 00:19:23.863 ], 00:19:23.863 "product_name": "Malloc disk", 00:19:23.863 "block_size": 512, 00:19:23.863 "num_blocks": 65536, 00:19:23.863 "uuid": "b0e159ca-e8d1-4071-a4ce-22e8c4ec5b4a", 00:19:23.863 "assigned_rate_limits": { 00:19:23.863 "rw_ios_per_sec": 0, 00:19:23.863 "rw_mbytes_per_sec": 0, 00:19:23.863 "r_mbytes_per_sec": 0, 00:19:23.863 "w_mbytes_per_sec": 0 00:19:23.863 }, 00:19:23.863 "claimed": true, 00:19:23.863 "claim_type": 
"exclusive_write", 00:19:23.863 "zoned": false, 00:19:23.863 "supported_io_types": { 00:19:23.863 "read": true, 00:19:23.863 "write": true, 00:19:23.863 "unmap": true, 00:19:23.863 "flush": true, 00:19:23.863 "reset": true, 00:19:23.863 "nvme_admin": false, 00:19:23.863 "nvme_io": false, 00:19:23.863 "nvme_io_md": false, 00:19:23.863 "write_zeroes": true, 00:19:23.863 "zcopy": true, 00:19:23.863 "get_zone_info": false, 00:19:23.863 "zone_management": false, 00:19:23.863 "zone_append": false, 00:19:23.863 "compare": false, 00:19:23.863 "compare_and_write": false, 00:19:23.863 "abort": true, 00:19:23.863 "seek_hole": false, 00:19:23.863 "seek_data": false, 00:19:23.863 "copy": true, 00:19:23.863 "nvme_iov_md": false 00:19:23.863 }, 00:19:23.863 "memory_domains": [ 00:19:23.863 { 00:19:23.863 "dma_device_id": "system", 00:19:23.863 "dma_device_type": 1 00:19:23.863 }, 00:19:23.863 { 00:19:23.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:23.863 "dma_device_type": 2 00:19:23.863 } 00:19:23.863 ], 00:19:23.863 "driver_specific": {} 00:19:23.863 }' 00:19:23.863 04:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:23.863 04:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:23.863 04:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:23.863 04:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:23.863 04:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:23.863 04:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:23.863 04:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:24.122 04:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:24.122 04:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # 
[[ null == null ]] 00:19:24.122 04:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:24.122 04:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:24.122 04:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:24.122 04:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:24.122 04:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:24.122 04:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:24.381 04:14:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:24.381 "name": "BaseBdev2", 00:19:24.381 "aliases": [ 00:19:24.381 "afce7b02-29fe-4c23-961b-95f9db6d5c44" 00:19:24.381 ], 00:19:24.381 "product_name": "Malloc disk", 00:19:24.381 "block_size": 512, 00:19:24.381 "num_blocks": 65536, 00:19:24.381 "uuid": "afce7b02-29fe-4c23-961b-95f9db6d5c44", 00:19:24.381 "assigned_rate_limits": { 00:19:24.381 "rw_ios_per_sec": 0, 00:19:24.381 "rw_mbytes_per_sec": 0, 00:19:24.381 "r_mbytes_per_sec": 0, 00:19:24.381 "w_mbytes_per_sec": 0 00:19:24.381 }, 00:19:24.381 "claimed": true, 00:19:24.381 "claim_type": "exclusive_write", 00:19:24.381 "zoned": false, 00:19:24.381 "supported_io_types": { 00:19:24.381 "read": true, 00:19:24.381 "write": true, 00:19:24.381 "unmap": true, 00:19:24.381 "flush": true, 00:19:24.381 "reset": true, 00:19:24.381 "nvme_admin": false, 00:19:24.381 "nvme_io": false, 00:19:24.381 "nvme_io_md": false, 00:19:24.381 "write_zeroes": true, 00:19:24.381 "zcopy": true, 00:19:24.381 "get_zone_info": false, 00:19:24.381 "zone_management": false, 00:19:24.382 "zone_append": false, 00:19:24.382 "compare": false, 00:19:24.382 "compare_and_write": false, 
00:19:24.382 "abort": true, 00:19:24.382 "seek_hole": false, 00:19:24.382 "seek_data": false, 00:19:24.382 "copy": true, 00:19:24.382 "nvme_iov_md": false 00:19:24.382 }, 00:19:24.382 "memory_domains": [ 00:19:24.382 { 00:19:24.382 "dma_device_id": "system", 00:19:24.382 "dma_device_type": 1 00:19:24.382 }, 00:19:24.382 { 00:19:24.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:24.382 "dma_device_type": 2 00:19:24.382 } 00:19:24.382 ], 00:19:24.382 "driver_specific": {} 00:19:24.382 }' 00:19:24.382 04:14:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:24.382 04:14:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:24.382 04:14:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:24.382 04:14:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:24.382 04:14:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:24.640 04:14:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:24.641 04:14:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:24.641 04:14:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:24.641 04:14:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:24.641 04:14:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:24.641 04:14:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:24.641 04:14:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:24.641 04:14:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:24.641 04:14:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:24.641 
04:14:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:24.899 04:14:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:24.899 "name": "BaseBdev3", 00:19:24.899 "aliases": [ 00:19:24.899 "3327088a-5bb9-46fc-a4bf-bd4ec655f7a5" 00:19:24.899 ], 00:19:24.899 "product_name": "Malloc disk", 00:19:24.899 "block_size": 512, 00:19:24.899 "num_blocks": 65536, 00:19:24.899 "uuid": "3327088a-5bb9-46fc-a4bf-bd4ec655f7a5", 00:19:24.899 "assigned_rate_limits": { 00:19:24.899 "rw_ios_per_sec": 0, 00:19:24.899 "rw_mbytes_per_sec": 0, 00:19:24.899 "r_mbytes_per_sec": 0, 00:19:24.899 "w_mbytes_per_sec": 0 00:19:24.899 }, 00:19:24.899 "claimed": true, 00:19:24.899 "claim_type": "exclusive_write", 00:19:24.899 "zoned": false, 00:19:24.899 "supported_io_types": { 00:19:24.899 "read": true, 00:19:24.899 "write": true, 00:19:24.899 "unmap": true, 00:19:24.899 "flush": true, 00:19:24.899 "reset": true, 00:19:24.899 "nvme_admin": false, 00:19:24.899 "nvme_io": false, 00:19:24.899 "nvme_io_md": false, 00:19:24.899 "write_zeroes": true, 00:19:24.899 "zcopy": true, 00:19:24.899 "get_zone_info": false, 00:19:24.899 "zone_management": false, 00:19:24.899 "zone_append": false, 00:19:24.899 "compare": false, 00:19:24.899 "compare_and_write": false, 00:19:24.899 "abort": true, 00:19:24.899 "seek_hole": false, 00:19:24.899 "seek_data": false, 00:19:24.899 "copy": true, 00:19:24.899 "nvme_iov_md": false 00:19:24.899 }, 00:19:24.899 "memory_domains": [ 00:19:24.899 { 00:19:24.899 "dma_device_id": "system", 00:19:24.899 "dma_device_type": 1 00:19:24.899 }, 00:19:24.899 { 00:19:24.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:24.899 "dma_device_type": 2 00:19:24.899 } 00:19:24.899 ], 00:19:24.899 "driver_specific": {} 00:19:24.899 }' 00:19:24.899 04:14:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:24.899 04:14:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:24.899 04:14:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:24.899 04:14:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:25.157 04:14:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:25.157 04:14:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:25.157 04:14:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:25.157 04:14:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:25.157 04:14:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:25.157 04:14:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:25.157 04:14:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:25.157 04:14:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:25.157 04:14:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:25.416 [2024-07-23 04:14:34.100650] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:25.416 [2024-07-23 04:14:34.100685] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:25.416 [2024-07-23 04:14:34.100769] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:25.416 [2024-07-23 04:14:34.100837] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:25.416 [2024-07-23 04:14:34.100861] bdev_raid.c: 
378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041780 name Existed_Raid, state offline 00:19:25.416 04:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 2667857 00:19:25.416 04:14:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 2667857 ']' 00:19:25.416 04:14:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 2667857 00:19:25.416 04:14:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:19:25.416 04:14:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:25.416 04:14:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2667857 00:19:25.416 04:14:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:25.416 04:14:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:25.416 04:14:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2667857' 00:19:25.416 killing process with pid 2667857 00:19:25.416 04:14:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 2667857 00:19:25.416 [2024-07-23 04:14:34.179013] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:25.416 04:14:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 2667857 00:19:25.986 [2024-07-23 04:14:34.504927] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:27.893 04:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:19:27.893 00:19:27.893 real 0m29.467s 00:19:27.893 user 0m51.410s 00:19:27.893 sys 0m5.143s 00:19:27.893 04:14:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:27.893 04:14:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:27.893 ************************************ 00:19:27.893 END TEST raid_state_function_test_sb 00:19:27.893 ************************************ 00:19:27.893 04:14:36 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:27.893 04:14:36 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:19:27.893 04:14:36 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:19:27.893 04:14:36 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:27.893 04:14:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:27.893 ************************************ 00:19:27.893 START TEST raid_superblock_test 00:19:27.893 ************************************ 00:19:27.893 04:14:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 3 00:19:27.893 04:14:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:19:27.893 04:14:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:19:27.893 04:14:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:19:27.893 04:14:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:19:27.893 04:14:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:19:27.893 04:14:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:19:27.893 04:14:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:19:27.893 04:14:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:19:27.893 04:14:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:19:27.893 04:14:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local 
strip_size 00:19:27.893 04:14:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:19:27.893 04:14:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:19:27.893 04:14:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:19:27.893 04:14:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:19:27.893 04:14:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:19:27.893 04:14:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:19:27.893 04:14:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=2673463 00:19:27.893 04:14:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 2673463 /var/tmp/spdk-raid.sock 00:19:27.893 04:14:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:19:27.893 04:14:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 2673463 ']' 00:19:27.893 04:14:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:27.893 04:14:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:27.893 04:14:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:27.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:19:27.893 04:14:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:27.893 04:14:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.893 [2024-07-23 04:14:36.416324] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:19:27.893 [2024-07-23 04:14:36.416442] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2673463 ] 00:19:27.893 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.893 EAL: Requested device 0000:3d:01.0 cannot be used 00:19:27.893 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.893 EAL: Requested device 0000:3d:01.1 cannot be used 00:19:27.893 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.893 EAL: Requested device 0000:3d:01.2 cannot be used 00:19:27.893 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.893 EAL: Requested device 0000:3d:01.3 cannot be used 00:19:27.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.894 EAL: Requested device 0000:3d:01.4 cannot be used 00:19:27.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.894 EAL: Requested device 0000:3d:01.5 cannot be used 00:19:27.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.894 EAL: Requested device 0000:3d:01.6 cannot be used 00:19:27.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.894 EAL: Requested device 0000:3d:01.7 cannot be used 00:19:27.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.894 EAL: Requested device 0000:3d:02.0 cannot be used 00:19:27.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.894 EAL: Requested 
device 0000:3d:02.1 cannot be used 00:19:27.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.894 EAL: Requested device 0000:3d:02.2 cannot be used 00:19:27.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.894 EAL: Requested device 0000:3d:02.3 cannot be used 00:19:27.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.894 EAL: Requested device 0000:3d:02.4 cannot be used 00:19:27.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.894 EAL: Requested device 0000:3d:02.5 cannot be used 00:19:27.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.894 EAL: Requested device 0000:3d:02.6 cannot be used 00:19:27.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.894 EAL: Requested device 0000:3d:02.7 cannot be used 00:19:27.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.894 EAL: Requested device 0000:3f:01.0 cannot be used 00:19:27.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.894 EAL: Requested device 0000:3f:01.1 cannot be used 00:19:27.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.894 EAL: Requested device 0000:3f:01.2 cannot be used 00:19:27.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.894 EAL: Requested device 0000:3f:01.3 cannot be used 00:19:27.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.894 EAL: Requested device 0000:3f:01.4 cannot be used 00:19:27.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.894 EAL: Requested device 0000:3f:01.5 cannot be used 00:19:27.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.894 EAL: Requested device 0000:3f:01.6 cannot be used 00:19:27.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.894 EAL: Requested device 0000:3f:01.7 
cannot be used 00:19:27.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.894 EAL: Requested device 0000:3f:02.0 cannot be used 00:19:27.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.894 EAL: Requested device 0000:3f:02.1 cannot be used 00:19:27.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.894 EAL: Requested device 0000:3f:02.2 cannot be used 00:19:27.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.894 EAL: Requested device 0000:3f:02.3 cannot be used 00:19:27.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.894 EAL: Requested device 0000:3f:02.4 cannot be used 00:19:27.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.894 EAL: Requested device 0000:3f:02.5 cannot be used 00:19:27.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.894 EAL: Requested device 0000:3f:02.6 cannot be used 00:19:27.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.894 EAL: Requested device 0000:3f:02.7 cannot be used 00:19:27.894 [2024-07-23 04:14:36.642318] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.153 [2024-07-23 04:14:36.924836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.722 [2024-07-23 04:14:37.264296] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:28.722 [2024-07-23 04:14:37.264336] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:28.722 04:14:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:28.722 04:14:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:19:28.722 04:14:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:19:28.722 04:14:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 
00:19:28.722 04:14:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:19:28.722 04:14:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:19:28.722 04:14:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:28.722 04:14:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:28.722 04:14:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:19:28.722 04:14:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:28.722 04:14:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:19:28.981 malloc1 00:19:28.981 04:14:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:29.241 [2024-07-23 04:14:37.943020] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:29.241 [2024-07-23 04:14:37.943085] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:29.241 [2024-07-23 04:14:37.943115] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:19:29.241 [2024-07-23 04:14:37.943132] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:29.241 [2024-07-23 04:14:37.945861] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:29.241 [2024-07-23 04:14:37.945897] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:29.241 pt1 00:19:29.241 04:14:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- 
# (( i++ )) 00:19:29.241 04:14:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:29.241 04:14:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:19:29.241 04:14:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:19:29.241 04:14:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:29.241 04:14:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:29.241 04:14:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:19:29.241 04:14:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:29.241 04:14:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:19:29.500 malloc2 00:19:29.500 04:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:29.759 [2024-07-23 04:14:38.445995] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:29.759 [2024-07-23 04:14:38.446045] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:29.759 [2024-07-23 04:14:38.446073] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:19:29.759 [2024-07-23 04:14:38.446088] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:29.759 [2024-07-23 04:14:38.448789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:29.759 [2024-07-23 04:14:38.448827] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: pt2 00:19:29.759 pt2 00:19:29.759 04:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:19:29.759 04:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:29.759 04:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:19:29.759 04:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:19:29.759 04:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:29.759 04:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:29.759 04:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:19:29.759 04:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:29.759 04:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:19:30.020 malloc3 00:19:30.020 04:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:30.281 [2024-07-23 04:14:38.957713] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:30.281 [2024-07-23 04:14:38.957775] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:30.281 [2024-07-23 04:14:38.957809] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040e80 00:19:30.281 [2024-07-23 04:14:38.957824] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:30.281 [2024-07-23 04:14:38.960531] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:19:30.281 [2024-07-23 04:14:38.960565] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:30.281 pt3 00:19:30.281 04:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:19:30.281 04:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:30.281 04:14:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:19:30.539 [2024-07-23 04:14:39.182358] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:30.539 [2024-07-23 04:14:39.184626] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:30.539 [2024-07-23 04:14:39.184707] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:30.539 [2024-07-23 04:14:39.184938] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041480 00:19:30.539 [2024-07-23 04:14:39.184963] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:30.539 [2024-07-23 04:14:39.185305] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:19:30.539 [2024-07-23 04:14:39.185555] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041480 00:19:30.539 [2024-07-23 04:14:39.185571] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041480 00:19:30.539 [2024-07-23 04:14:39.185772] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:30.539 04:14:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:19:30.539 04:14:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 
00:19:30.539 04:14:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:30.539 04:14:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:30.539 04:14:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:30.539 04:14:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:30.539 04:14:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:30.539 04:14:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:30.539 04:14:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:30.539 04:14:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:30.539 04:14:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:30.539 04:14:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:30.799 04:14:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:30.799 "name": "raid_bdev1", 00:19:30.799 "uuid": "8362f2d8-f837-4d56-99a6-c000473c7a21", 00:19:30.799 "strip_size_kb": 64, 00:19:30.799 "state": "online", 00:19:30.799 "raid_level": "concat", 00:19:30.799 "superblock": true, 00:19:30.799 "num_base_bdevs": 3, 00:19:30.799 "num_base_bdevs_discovered": 3, 00:19:30.799 "num_base_bdevs_operational": 3, 00:19:30.799 "base_bdevs_list": [ 00:19:30.799 { 00:19:30.799 "name": "pt1", 00:19:30.799 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:30.799 "is_configured": true, 00:19:30.799 "data_offset": 2048, 00:19:30.799 "data_size": 63488 00:19:30.799 }, 00:19:30.799 { 00:19:30.799 "name": "pt2", 00:19:30.799 "uuid": "00000000-0000-0000-0000-000000000002", 
00:19:30.799 "is_configured": true, 00:19:30.799 "data_offset": 2048, 00:19:30.799 "data_size": 63488 00:19:30.799 }, 00:19:30.799 { 00:19:30.799 "name": "pt3", 00:19:30.799 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:30.799 "is_configured": true, 00:19:30.799 "data_offset": 2048, 00:19:30.799 "data_size": 63488 00:19:30.799 } 00:19:30.799 ] 00:19:30.799 }' 00:19:30.799 04:14:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:30.799 04:14:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.367 04:14:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:19:31.367 04:14:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:19:31.367 04:14:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:31.367 04:14:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:31.367 04:14:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:31.367 04:14:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:31.367 04:14:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:31.367 04:14:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:31.626 [2024-07-23 04:14:40.217490] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:31.626 04:14:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:31.626 "name": "raid_bdev1", 00:19:31.626 "aliases": [ 00:19:31.626 "8362f2d8-f837-4d56-99a6-c000473c7a21" 00:19:31.626 ], 00:19:31.626 "product_name": "Raid Volume", 00:19:31.626 "block_size": 512, 00:19:31.626 "num_blocks": 190464, 00:19:31.626 "uuid": 
"8362f2d8-f837-4d56-99a6-c000473c7a21", 00:19:31.626 "assigned_rate_limits": { 00:19:31.626 "rw_ios_per_sec": 0, 00:19:31.626 "rw_mbytes_per_sec": 0, 00:19:31.626 "r_mbytes_per_sec": 0, 00:19:31.626 "w_mbytes_per_sec": 0 00:19:31.626 }, 00:19:31.626 "claimed": false, 00:19:31.626 "zoned": false, 00:19:31.626 "supported_io_types": { 00:19:31.626 "read": true, 00:19:31.626 "write": true, 00:19:31.626 "unmap": true, 00:19:31.626 "flush": true, 00:19:31.626 "reset": true, 00:19:31.626 "nvme_admin": false, 00:19:31.626 "nvme_io": false, 00:19:31.626 "nvme_io_md": false, 00:19:31.626 "write_zeroes": true, 00:19:31.626 "zcopy": false, 00:19:31.626 "get_zone_info": false, 00:19:31.626 "zone_management": false, 00:19:31.626 "zone_append": false, 00:19:31.626 "compare": false, 00:19:31.627 "compare_and_write": false, 00:19:31.627 "abort": false, 00:19:31.627 "seek_hole": false, 00:19:31.627 "seek_data": false, 00:19:31.627 "copy": false, 00:19:31.627 "nvme_iov_md": false 00:19:31.627 }, 00:19:31.627 "memory_domains": [ 00:19:31.627 { 00:19:31.627 "dma_device_id": "system", 00:19:31.627 "dma_device_type": 1 00:19:31.627 }, 00:19:31.627 { 00:19:31.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:31.627 "dma_device_type": 2 00:19:31.627 }, 00:19:31.627 { 00:19:31.627 "dma_device_id": "system", 00:19:31.627 "dma_device_type": 1 00:19:31.627 }, 00:19:31.627 { 00:19:31.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:31.627 "dma_device_type": 2 00:19:31.627 }, 00:19:31.627 { 00:19:31.627 "dma_device_id": "system", 00:19:31.627 "dma_device_type": 1 00:19:31.627 }, 00:19:31.627 { 00:19:31.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:31.627 "dma_device_type": 2 00:19:31.627 } 00:19:31.627 ], 00:19:31.627 "driver_specific": { 00:19:31.627 "raid": { 00:19:31.627 "uuid": "8362f2d8-f837-4d56-99a6-c000473c7a21", 00:19:31.627 "strip_size_kb": 64, 00:19:31.627 "state": "online", 00:19:31.627 "raid_level": "concat", 00:19:31.627 "superblock": true, 00:19:31.627 "num_base_bdevs": 
3, 00:19:31.627 "num_base_bdevs_discovered": 3, 00:19:31.627 "num_base_bdevs_operational": 3, 00:19:31.627 "base_bdevs_list": [ 00:19:31.627 { 00:19:31.627 "name": "pt1", 00:19:31.627 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:31.627 "is_configured": true, 00:19:31.627 "data_offset": 2048, 00:19:31.627 "data_size": 63488 00:19:31.627 }, 00:19:31.627 { 00:19:31.627 "name": "pt2", 00:19:31.627 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:31.627 "is_configured": true, 00:19:31.627 "data_offset": 2048, 00:19:31.627 "data_size": 63488 00:19:31.627 }, 00:19:31.627 { 00:19:31.627 "name": "pt3", 00:19:31.627 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:31.627 "is_configured": true, 00:19:31.627 "data_offset": 2048, 00:19:31.627 "data_size": 63488 00:19:31.627 } 00:19:31.627 ] 00:19:31.627 } 00:19:31.627 } 00:19:31.627 }' 00:19:31.627 04:14:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:31.627 04:14:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:19:31.627 pt2 00:19:31.627 pt3' 00:19:31.627 04:14:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:31.627 04:14:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:19:31.627 04:14:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:31.886 04:14:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:31.886 "name": "pt1", 00:19:31.886 "aliases": [ 00:19:31.886 "00000000-0000-0000-0000-000000000001" 00:19:31.886 ], 00:19:31.886 "product_name": "passthru", 00:19:31.886 "block_size": 512, 00:19:31.886 "num_blocks": 65536, 00:19:31.886 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:31.886 "assigned_rate_limits": { 
00:19:31.886 "rw_ios_per_sec": 0, 00:19:31.886 "rw_mbytes_per_sec": 0, 00:19:31.886 "r_mbytes_per_sec": 0, 00:19:31.886 "w_mbytes_per_sec": 0 00:19:31.886 }, 00:19:31.886 "claimed": true, 00:19:31.886 "claim_type": "exclusive_write", 00:19:31.886 "zoned": false, 00:19:31.886 "supported_io_types": { 00:19:31.886 "read": true, 00:19:31.886 "write": true, 00:19:31.886 "unmap": true, 00:19:31.886 "flush": true, 00:19:31.886 "reset": true, 00:19:31.886 "nvme_admin": false, 00:19:31.886 "nvme_io": false, 00:19:31.887 "nvme_io_md": false, 00:19:31.887 "write_zeroes": true, 00:19:31.887 "zcopy": true, 00:19:31.887 "get_zone_info": false, 00:19:31.887 "zone_management": false, 00:19:31.887 "zone_append": false, 00:19:31.887 "compare": false, 00:19:31.887 "compare_and_write": false, 00:19:31.887 "abort": true, 00:19:31.887 "seek_hole": false, 00:19:31.887 "seek_data": false, 00:19:31.887 "copy": true, 00:19:31.887 "nvme_iov_md": false 00:19:31.887 }, 00:19:31.887 "memory_domains": [ 00:19:31.887 { 00:19:31.887 "dma_device_id": "system", 00:19:31.887 "dma_device_type": 1 00:19:31.887 }, 00:19:31.887 { 00:19:31.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:31.887 "dma_device_type": 2 00:19:31.887 } 00:19:31.887 ], 00:19:31.887 "driver_specific": { 00:19:31.887 "passthru": { 00:19:31.887 "name": "pt1", 00:19:31.887 "base_bdev_name": "malloc1" 00:19:31.887 } 00:19:31.887 } 00:19:31.887 }' 00:19:31.887 04:14:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:31.887 04:14:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:31.887 04:14:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:31.887 04:14:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:31.887 04:14:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:31.887 04:14:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 
00:19:31.887 04:14:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:31.887 04:14:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:31.887 04:14:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:31.887 04:14:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:32.146 04:14:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:32.146 04:14:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:32.146 04:14:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:32.146 04:14:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:19:32.146 04:14:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:32.146 04:14:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:32.146 "name": "pt2", 00:19:32.146 "aliases": [ 00:19:32.146 "00000000-0000-0000-0000-000000000002" 00:19:32.146 ], 00:19:32.146 "product_name": "passthru", 00:19:32.146 "block_size": 512, 00:19:32.146 "num_blocks": 65536, 00:19:32.146 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:32.146 "assigned_rate_limits": { 00:19:32.146 "rw_ios_per_sec": 0, 00:19:32.146 "rw_mbytes_per_sec": 0, 00:19:32.146 "r_mbytes_per_sec": 0, 00:19:32.146 "w_mbytes_per_sec": 0 00:19:32.146 }, 00:19:32.146 "claimed": true, 00:19:32.146 "claim_type": "exclusive_write", 00:19:32.146 "zoned": false, 00:19:32.146 "supported_io_types": { 00:19:32.146 "read": true, 00:19:32.146 "write": true, 00:19:32.146 "unmap": true, 00:19:32.146 "flush": true, 00:19:32.146 "reset": true, 00:19:32.146 "nvme_admin": false, 00:19:32.146 "nvme_io": false, 00:19:32.146 "nvme_io_md": false, 00:19:32.146 "write_zeroes": true, 
00:19:32.146 "zcopy": true, 00:19:32.146 "get_zone_info": false, 00:19:32.146 "zone_management": false, 00:19:32.146 "zone_append": false, 00:19:32.146 "compare": false, 00:19:32.146 "compare_and_write": false, 00:19:32.146 "abort": true, 00:19:32.146 "seek_hole": false, 00:19:32.146 "seek_data": false, 00:19:32.146 "copy": true, 00:19:32.146 "nvme_iov_md": false 00:19:32.146 }, 00:19:32.146 "memory_domains": [ 00:19:32.146 { 00:19:32.146 "dma_device_id": "system", 00:19:32.146 "dma_device_type": 1 00:19:32.146 }, 00:19:32.146 { 00:19:32.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:32.146 "dma_device_type": 2 00:19:32.146 } 00:19:32.146 ], 00:19:32.146 "driver_specific": { 00:19:32.146 "passthru": { 00:19:32.146 "name": "pt2", 00:19:32.146 "base_bdev_name": "malloc2" 00:19:32.146 } 00:19:32.146 } 00:19:32.146 }' 00:19:32.405 04:14:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:32.405 04:14:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:32.405 04:14:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:32.405 04:14:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:32.405 04:14:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:32.405 04:14:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:32.405 04:14:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:32.405 04:14:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:32.405 04:14:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:32.405 04:14:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:32.405 04:14:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:32.664 04:14:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # 
[[ null == null ]] 00:19:32.664 04:14:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:32.665 04:14:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:19:32.665 04:14:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:32.665 04:14:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:32.665 "name": "pt3", 00:19:32.665 "aliases": [ 00:19:32.665 "00000000-0000-0000-0000-000000000003" 00:19:32.665 ], 00:19:32.665 "product_name": "passthru", 00:19:32.665 "block_size": 512, 00:19:32.665 "num_blocks": 65536, 00:19:32.665 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:32.665 "assigned_rate_limits": { 00:19:32.665 "rw_ios_per_sec": 0, 00:19:32.665 "rw_mbytes_per_sec": 0, 00:19:32.665 "r_mbytes_per_sec": 0, 00:19:32.665 "w_mbytes_per_sec": 0 00:19:32.665 }, 00:19:32.665 "claimed": true, 00:19:32.665 "claim_type": "exclusive_write", 00:19:32.665 "zoned": false, 00:19:32.665 "supported_io_types": { 00:19:32.665 "read": true, 00:19:32.665 "write": true, 00:19:32.665 "unmap": true, 00:19:32.665 "flush": true, 00:19:32.665 "reset": true, 00:19:32.665 "nvme_admin": false, 00:19:32.665 "nvme_io": false, 00:19:32.665 "nvme_io_md": false, 00:19:32.665 "write_zeroes": true, 00:19:32.665 "zcopy": true, 00:19:32.665 "get_zone_info": false, 00:19:32.665 "zone_management": false, 00:19:32.665 "zone_append": false, 00:19:32.665 "compare": false, 00:19:32.665 "compare_and_write": false, 00:19:32.665 "abort": true, 00:19:32.665 "seek_hole": false, 00:19:32.665 "seek_data": false, 00:19:32.665 "copy": true, 00:19:32.665 "nvme_iov_md": false 00:19:32.665 }, 00:19:32.665 "memory_domains": [ 00:19:32.665 { 00:19:32.665 "dma_device_id": "system", 00:19:32.665 "dma_device_type": 1 00:19:32.665 }, 00:19:32.665 { 00:19:32.665 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:19:32.665 "dma_device_type": 2 00:19:32.665 } 00:19:32.665 ], 00:19:32.665 "driver_specific": { 00:19:32.665 "passthru": { 00:19:32.665 "name": "pt3", 00:19:32.665 "base_bdev_name": "malloc3" 00:19:32.665 } 00:19:32.665 } 00:19:32.665 }' 00:19:32.665 04:14:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:32.665 04:14:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:32.665 04:14:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:32.665 04:14:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:32.923 04:14:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:32.923 04:14:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:32.923 04:14:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:32.923 04:14:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:32.923 04:14:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:32.923 04:14:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:32.923 04:14:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:32.923 04:14:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:32.923 04:14:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:32.923 04:14:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:19:33.219 [2024-07-23 04:14:41.829862] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:33.219 04:14:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # 
raid_bdev_uuid=8362f2d8-f837-4d56-99a6-c000473c7a21 00:19:33.219 04:14:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 8362f2d8-f837-4d56-99a6-c000473c7a21 ']' 00:19:33.219 04:14:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:33.501 [2024-07-23 04:14:41.989899] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:33.501 [2024-07-23 04:14:41.989937] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:33.501 [2024-07-23 04:14:41.990036] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:33.501 [2024-07-23 04:14:41.990114] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:33.501 [2024-07-23 04:14:41.990132] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041480 name raid_bdev1, state offline 00:19:33.501 04:14:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:33.501 04:14:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:19:33.501 04:14:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:19:33.501 04:14:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:19:33.501 04:14:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:19:33.501 04:14:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:19:33.760 04:14:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:19:33.760 04:14:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:33.760 04:14:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:19:33.760 04:14:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:34.328 04:14:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:19:34.328 04:14:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:34.587 04:14:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:19:34.587 04:14:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:34.587 04:14:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:19:34.587 04:14:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:34.587 04:14:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:34.587 04:14:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:34.587 04:14:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
00:19:34.587 04:14:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:34.587 04:14:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:34.587 04:14:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:34.587 04:14:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:34.587 04:14:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:19:34.587 04:14:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:34.847 [2024-07-23 04:14:43.493891] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:34.847 [2024-07-23 04:14:43.496255] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:34.847 [2024-07-23 04:14:43.496321] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:34.847 [2024-07-23 04:14:43.496383] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:34.847 [2024-07-23 04:14:43.496440] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:34.847 [2024-07-23 04:14:43.496468] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:19:34.847 [2024-07-23 04:14:43.496494] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:34.847 [2024-07-23 
04:14:43.496509] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041a80 name raid_bdev1, state configuring 00:19:34.847 request: 00:19:34.847 { 00:19:34.847 "name": "raid_bdev1", 00:19:34.847 "raid_level": "concat", 00:19:34.847 "base_bdevs": [ 00:19:34.847 "malloc1", 00:19:34.847 "malloc2", 00:19:34.847 "malloc3" 00:19:34.847 ], 00:19:34.847 "strip_size_kb": 64, 00:19:34.847 "superblock": false, 00:19:34.847 "method": "bdev_raid_create", 00:19:34.847 "req_id": 1 00:19:34.847 } 00:19:34.847 Got JSON-RPC error response 00:19:34.847 response: 00:19:34.847 { 00:19:34.847 "code": -17, 00:19:34.847 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:34.847 } 00:19:34.847 04:14:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:19:34.847 04:14:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:34.847 04:14:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:34.847 04:14:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:34.847 04:14:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:34.847 04:14:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:19:35.106 04:14:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:19:35.106 04:14:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:19:35.106 04:14:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:35.365 [2024-07-23 04:14:43.935016] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:35.365 [2024-07-23 
04:14:43.935092] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:35.365 [2024-07-23 04:14:43.935121] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042080 00:19:35.365 [2024-07-23 04:14:43.935136] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:35.365 [2024-07-23 04:14:43.937987] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:35.365 [2024-07-23 04:14:43.938023] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:35.365 [2024-07-23 04:14:43.938128] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:35.365 [2024-07-23 04:14:43.938218] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:35.365 pt1 00:19:35.365 04:14:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:19:35.365 04:14:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:35.365 04:14:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:35.365 04:14:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:35.365 04:14:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:35.365 04:14:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:35.365 04:14:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:35.365 04:14:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:35.365 04:14:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:35.365 04:14:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:35.365 04:14:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:35.365 04:14:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.624 04:14:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:35.624 "name": "raid_bdev1", 00:19:35.624 "uuid": "8362f2d8-f837-4d56-99a6-c000473c7a21", 00:19:35.624 "strip_size_kb": 64, 00:19:35.624 "state": "configuring", 00:19:35.624 "raid_level": "concat", 00:19:35.624 "superblock": true, 00:19:35.624 "num_base_bdevs": 3, 00:19:35.624 "num_base_bdevs_discovered": 1, 00:19:35.624 "num_base_bdevs_operational": 3, 00:19:35.624 "base_bdevs_list": [ 00:19:35.624 { 00:19:35.624 "name": "pt1", 00:19:35.624 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:35.624 "is_configured": true, 00:19:35.624 "data_offset": 2048, 00:19:35.624 "data_size": 63488 00:19:35.624 }, 00:19:35.624 { 00:19:35.624 "name": null, 00:19:35.624 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:35.624 "is_configured": false, 00:19:35.624 "data_offset": 2048, 00:19:35.624 "data_size": 63488 00:19:35.624 }, 00:19:35.624 { 00:19:35.624 "name": null, 00:19:35.624 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:35.624 "is_configured": false, 00:19:35.624 "data_offset": 2048, 00:19:35.624 "data_size": 63488 00:19:35.624 } 00:19:35.624 ] 00:19:35.624 }' 00:19:35.624 04:14:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:35.624 04:14:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.192 04:14:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:19:36.192 04:14:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p 
pt2 -u 00000000-0000-0000-0000-000000000002 00:19:36.192 [2024-07-23 04:14:44.969778] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:36.192 [2024-07-23 04:14:44.969856] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:36.192 [2024-07-23 04:14:44.969885] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042980 00:19:36.192 [2024-07-23 04:14:44.969901] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:36.192 [2024-07-23 04:14:44.970497] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:36.192 [2024-07-23 04:14:44.970522] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:36.192 [2024-07-23 04:14:44.970619] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:36.192 [2024-07-23 04:14:44.970647] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:36.192 pt2 00:19:36.451 04:14:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:36.451 [2024-07-23 04:14:45.198471] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:36.451 04:14:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:19:36.451 04:14:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:36.451 04:14:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:36.451 04:14:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:36.451 04:14:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:36.451 04:14:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:19:36.451 04:14:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:36.451 04:14:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:36.451 04:14:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:36.451 04:14:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:36.451 04:14:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:36.451 04:14:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.711 04:14:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:36.711 "name": "raid_bdev1", 00:19:36.711 "uuid": "8362f2d8-f837-4d56-99a6-c000473c7a21", 00:19:36.711 "strip_size_kb": 64, 00:19:36.711 "state": "configuring", 00:19:36.711 "raid_level": "concat", 00:19:36.711 "superblock": true, 00:19:36.711 "num_base_bdevs": 3, 00:19:36.711 "num_base_bdevs_discovered": 1, 00:19:36.711 "num_base_bdevs_operational": 3, 00:19:36.711 "base_bdevs_list": [ 00:19:36.711 { 00:19:36.711 "name": "pt1", 00:19:36.711 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:36.711 "is_configured": true, 00:19:36.711 "data_offset": 2048, 00:19:36.711 "data_size": 63488 00:19:36.711 }, 00:19:36.711 { 00:19:36.711 "name": null, 00:19:36.711 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:36.711 "is_configured": false, 00:19:36.711 "data_offset": 2048, 00:19:36.711 "data_size": 63488 00:19:36.711 }, 00:19:36.711 { 00:19:36.711 "name": null, 00:19:36.711 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:36.711 "is_configured": false, 00:19:36.711 "data_offset": 2048, 00:19:36.711 "data_size": 63488 00:19:36.711 } 00:19:36.711 ] 00:19:36.711 }' 00:19:36.711 04:14:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:36.711 04:14:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.279 04:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:19:37.279 04:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:19:37.279 04:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:37.538 [2024-07-23 04:14:46.237258] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:37.538 [2024-07-23 04:14:46.237329] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:37.538 [2024-07-23 04:14:46.237354] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042c80 00:19:37.538 [2024-07-23 04:14:46.237372] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:37.538 [2024-07-23 04:14:46.237942] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:37.538 [2024-07-23 04:14:46.237970] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:37.538 [2024-07-23 04:14:46.238058] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:37.538 [2024-07-23 04:14:46.238091] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:37.538 pt2 00:19:37.538 04:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:19:37.538 04:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:19:37.538 04:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:37.798 [2024-07-23 04:14:46.465838] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:37.798 [2024-07-23 04:14:46.465898] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:37.798 [2024-07-23 04:14:46.465921] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042f80 00:19:37.798 [2024-07-23 04:14:46.465938] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:37.798 [2024-07-23 04:14:46.466524] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:37.798 [2024-07-23 04:14:46.466553] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:37.798 [2024-07-23 04:14:46.466635] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:37.798 [2024-07-23 04:14:46.466668] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:37.798 [2024-07-23 04:14:46.466839] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042680 00:19:37.798 [2024-07-23 04:14:46.466856] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:37.798 [2024-07-23 04:14:46.467161] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:19:37.798 [2024-07-23 04:14:46.467407] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042680 00:19:37.798 [2024-07-23 04:14:46.467421] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000042680 00:19:37.798 [2024-07-23 04:14:46.467597] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:37.798 pt3 00:19:37.798 04:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:19:37.798 04:14:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:19:37.798 04:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:19:37.798 04:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:37.798 04:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:37.798 04:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:37.798 04:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:37.798 04:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:37.798 04:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:37.798 04:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:37.798 04:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:37.798 04:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:37.798 04:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.798 04:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:38.057 04:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:38.057 "name": "raid_bdev1", 00:19:38.057 "uuid": "8362f2d8-f837-4d56-99a6-c000473c7a21", 00:19:38.057 "strip_size_kb": 64, 00:19:38.057 "state": "online", 00:19:38.057 "raid_level": "concat", 00:19:38.057 "superblock": true, 00:19:38.057 "num_base_bdevs": 3, 00:19:38.057 "num_base_bdevs_discovered": 3, 00:19:38.057 "num_base_bdevs_operational": 3, 00:19:38.057 "base_bdevs_list": [ 00:19:38.057 { 
00:19:38.057 "name": "pt1", 00:19:38.057 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:38.057 "is_configured": true, 00:19:38.057 "data_offset": 2048, 00:19:38.057 "data_size": 63488 00:19:38.057 }, 00:19:38.057 { 00:19:38.057 "name": "pt2", 00:19:38.057 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:38.057 "is_configured": true, 00:19:38.057 "data_offset": 2048, 00:19:38.057 "data_size": 63488 00:19:38.057 }, 00:19:38.057 { 00:19:38.057 "name": "pt3", 00:19:38.057 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:38.057 "is_configured": true, 00:19:38.057 "data_offset": 2048, 00:19:38.057 "data_size": 63488 00:19:38.057 } 00:19:38.057 ] 00:19:38.057 }' 00:19:38.057 04:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:38.057 04:14:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.625 04:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:19:38.625 04:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:19:38.625 04:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:38.625 04:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:38.625 04:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:38.625 04:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:38.625 04:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:38.625 04:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:38.884 [2024-07-23 04:14:47.492984] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:38.884 04:14:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:38.884 "name": "raid_bdev1", 00:19:38.884 "aliases": [ 00:19:38.884 "8362f2d8-f837-4d56-99a6-c000473c7a21" 00:19:38.884 ], 00:19:38.884 "product_name": "Raid Volume", 00:19:38.884 "block_size": 512, 00:19:38.884 "num_blocks": 190464, 00:19:38.884 "uuid": "8362f2d8-f837-4d56-99a6-c000473c7a21", 00:19:38.884 "assigned_rate_limits": { 00:19:38.884 "rw_ios_per_sec": 0, 00:19:38.884 "rw_mbytes_per_sec": 0, 00:19:38.884 "r_mbytes_per_sec": 0, 00:19:38.884 "w_mbytes_per_sec": 0 00:19:38.884 }, 00:19:38.884 "claimed": false, 00:19:38.884 "zoned": false, 00:19:38.884 "supported_io_types": { 00:19:38.884 "read": true, 00:19:38.884 "write": true, 00:19:38.884 "unmap": true, 00:19:38.884 "flush": true, 00:19:38.884 "reset": true, 00:19:38.884 "nvme_admin": false, 00:19:38.884 "nvme_io": false, 00:19:38.884 "nvme_io_md": false, 00:19:38.884 "write_zeroes": true, 00:19:38.884 "zcopy": false, 00:19:38.884 "get_zone_info": false, 00:19:38.884 "zone_management": false, 00:19:38.884 "zone_append": false, 00:19:38.884 "compare": false, 00:19:38.884 "compare_and_write": false, 00:19:38.884 "abort": false, 00:19:38.884 "seek_hole": false, 00:19:38.884 "seek_data": false, 00:19:38.884 "copy": false, 00:19:38.884 "nvme_iov_md": false 00:19:38.884 }, 00:19:38.884 "memory_domains": [ 00:19:38.884 { 00:19:38.884 "dma_device_id": "system", 00:19:38.884 "dma_device_type": 1 00:19:38.884 }, 00:19:38.884 { 00:19:38.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:38.884 "dma_device_type": 2 00:19:38.884 }, 00:19:38.884 { 00:19:38.885 "dma_device_id": "system", 00:19:38.885 "dma_device_type": 1 00:19:38.885 }, 00:19:38.885 { 00:19:38.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:38.885 "dma_device_type": 2 00:19:38.885 }, 00:19:38.885 { 00:19:38.885 "dma_device_id": "system", 00:19:38.885 "dma_device_type": 1 00:19:38.885 }, 00:19:38.885 { 00:19:38.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:38.885 "dma_device_type": 2 
00:19:38.885 } 00:19:38.885 ], 00:19:38.885 "driver_specific": { 00:19:38.885 "raid": { 00:19:38.885 "uuid": "8362f2d8-f837-4d56-99a6-c000473c7a21", 00:19:38.885 "strip_size_kb": 64, 00:19:38.885 "state": "online", 00:19:38.885 "raid_level": "concat", 00:19:38.885 "superblock": true, 00:19:38.885 "num_base_bdevs": 3, 00:19:38.885 "num_base_bdevs_discovered": 3, 00:19:38.885 "num_base_bdevs_operational": 3, 00:19:38.885 "base_bdevs_list": [ 00:19:38.885 { 00:19:38.885 "name": "pt1", 00:19:38.885 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:38.885 "is_configured": true, 00:19:38.885 "data_offset": 2048, 00:19:38.885 "data_size": 63488 00:19:38.885 }, 00:19:38.885 { 00:19:38.885 "name": "pt2", 00:19:38.885 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:38.885 "is_configured": true, 00:19:38.885 "data_offset": 2048, 00:19:38.885 "data_size": 63488 00:19:38.885 }, 00:19:38.885 { 00:19:38.885 "name": "pt3", 00:19:38.885 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:38.885 "is_configured": true, 00:19:38.885 "data_offset": 2048, 00:19:38.885 "data_size": 63488 00:19:38.885 } 00:19:38.885 ] 00:19:38.885 } 00:19:38.885 } 00:19:38.885 }' 00:19:38.885 04:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:38.885 04:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:19:38.885 pt2 00:19:38.885 pt3' 00:19:38.885 04:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:38.885 04:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:19:38.885 04:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:39.144 04:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:39.144 "name": 
"pt1", 00:19:39.144 "aliases": [ 00:19:39.144 "00000000-0000-0000-0000-000000000001" 00:19:39.144 ], 00:19:39.144 "product_name": "passthru", 00:19:39.144 "block_size": 512, 00:19:39.144 "num_blocks": 65536, 00:19:39.144 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:39.144 "assigned_rate_limits": { 00:19:39.144 "rw_ios_per_sec": 0, 00:19:39.144 "rw_mbytes_per_sec": 0, 00:19:39.144 "r_mbytes_per_sec": 0, 00:19:39.144 "w_mbytes_per_sec": 0 00:19:39.144 }, 00:19:39.144 "claimed": true, 00:19:39.144 "claim_type": "exclusive_write", 00:19:39.144 "zoned": false, 00:19:39.144 "supported_io_types": { 00:19:39.144 "read": true, 00:19:39.144 "write": true, 00:19:39.144 "unmap": true, 00:19:39.144 "flush": true, 00:19:39.144 "reset": true, 00:19:39.144 "nvme_admin": false, 00:19:39.144 "nvme_io": false, 00:19:39.144 "nvme_io_md": false, 00:19:39.144 "write_zeroes": true, 00:19:39.144 "zcopy": true, 00:19:39.144 "get_zone_info": false, 00:19:39.144 "zone_management": false, 00:19:39.144 "zone_append": false, 00:19:39.144 "compare": false, 00:19:39.144 "compare_and_write": false, 00:19:39.144 "abort": true, 00:19:39.144 "seek_hole": false, 00:19:39.144 "seek_data": false, 00:19:39.144 "copy": true, 00:19:39.144 "nvme_iov_md": false 00:19:39.144 }, 00:19:39.144 "memory_domains": [ 00:19:39.144 { 00:19:39.144 "dma_device_id": "system", 00:19:39.144 "dma_device_type": 1 00:19:39.144 }, 00:19:39.144 { 00:19:39.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:39.144 "dma_device_type": 2 00:19:39.144 } 00:19:39.144 ], 00:19:39.144 "driver_specific": { 00:19:39.144 "passthru": { 00:19:39.144 "name": "pt1", 00:19:39.144 "base_bdev_name": "malloc1" 00:19:39.144 } 00:19:39.144 } 00:19:39.144 }' 00:19:39.144 04:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:39.144 04:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:39.144 04:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 
== 512 ]] 00:19:39.144 04:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:39.144 04:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:39.403 04:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:39.403 04:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:39.403 04:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:39.403 04:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:39.403 04:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:39.403 04:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:39.403 04:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:39.403 04:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:39.403 04:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:19:39.403 04:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:39.662 04:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:39.662 "name": "pt2", 00:19:39.662 "aliases": [ 00:19:39.662 "00000000-0000-0000-0000-000000000002" 00:19:39.662 ], 00:19:39.662 "product_name": "passthru", 00:19:39.662 "block_size": 512, 00:19:39.662 "num_blocks": 65536, 00:19:39.662 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:39.662 "assigned_rate_limits": { 00:19:39.662 "rw_ios_per_sec": 0, 00:19:39.662 "rw_mbytes_per_sec": 0, 00:19:39.662 "r_mbytes_per_sec": 0, 00:19:39.662 "w_mbytes_per_sec": 0 00:19:39.662 }, 00:19:39.662 "claimed": true, 00:19:39.662 "claim_type": "exclusive_write", 00:19:39.662 "zoned": false, 
00:19:39.662 "supported_io_types": { 00:19:39.662 "read": true, 00:19:39.662 "write": true, 00:19:39.662 "unmap": true, 00:19:39.662 "flush": true, 00:19:39.662 "reset": true, 00:19:39.662 "nvme_admin": false, 00:19:39.662 "nvme_io": false, 00:19:39.662 "nvme_io_md": false, 00:19:39.662 "write_zeroes": true, 00:19:39.662 "zcopy": true, 00:19:39.662 "get_zone_info": false, 00:19:39.662 "zone_management": false, 00:19:39.662 "zone_append": false, 00:19:39.662 "compare": false, 00:19:39.662 "compare_and_write": false, 00:19:39.662 "abort": true, 00:19:39.662 "seek_hole": false, 00:19:39.662 "seek_data": false, 00:19:39.662 "copy": true, 00:19:39.662 "nvme_iov_md": false 00:19:39.662 }, 00:19:39.662 "memory_domains": [ 00:19:39.662 { 00:19:39.662 "dma_device_id": "system", 00:19:39.662 "dma_device_type": 1 00:19:39.662 }, 00:19:39.662 { 00:19:39.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:39.662 "dma_device_type": 2 00:19:39.662 } 00:19:39.662 ], 00:19:39.662 "driver_specific": { 00:19:39.662 "passthru": { 00:19:39.662 "name": "pt2", 00:19:39.662 "base_bdev_name": "malloc2" 00:19:39.662 } 00:19:39.662 } 00:19:39.662 }' 00:19:39.662 04:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:39.662 04:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:39.920 04:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:39.920 04:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:39.920 04:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:39.920 04:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:39.920 04:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:39.920 04:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:39.920 04:14:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:39.920 04:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:39.920 04:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:40.179 04:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:40.179 04:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:40.179 04:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:40.179 04:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:19:40.179 04:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:40.179 "name": "pt3", 00:19:40.179 "aliases": [ 00:19:40.179 "00000000-0000-0000-0000-000000000003" 00:19:40.179 ], 00:19:40.179 "product_name": "passthru", 00:19:40.179 "block_size": 512, 00:19:40.179 "num_blocks": 65536, 00:19:40.179 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:40.179 "assigned_rate_limits": { 00:19:40.179 "rw_ios_per_sec": 0, 00:19:40.179 "rw_mbytes_per_sec": 0, 00:19:40.179 "r_mbytes_per_sec": 0, 00:19:40.179 "w_mbytes_per_sec": 0 00:19:40.179 }, 00:19:40.179 "claimed": true, 00:19:40.179 "claim_type": "exclusive_write", 00:19:40.179 "zoned": false, 00:19:40.179 "supported_io_types": { 00:19:40.179 "read": true, 00:19:40.179 "write": true, 00:19:40.179 "unmap": true, 00:19:40.179 "flush": true, 00:19:40.179 "reset": true, 00:19:40.179 "nvme_admin": false, 00:19:40.179 "nvme_io": false, 00:19:40.179 "nvme_io_md": false, 00:19:40.179 "write_zeroes": true, 00:19:40.179 "zcopy": true, 00:19:40.179 "get_zone_info": false, 00:19:40.179 "zone_management": false, 00:19:40.179 "zone_append": false, 00:19:40.179 "compare": false, 00:19:40.179 "compare_and_write": false, 00:19:40.179 "abort": true, 00:19:40.179 
"seek_hole": false, 00:19:40.179 "seek_data": false, 00:19:40.179 "copy": true, 00:19:40.179 "nvme_iov_md": false 00:19:40.179 }, 00:19:40.179 "memory_domains": [ 00:19:40.179 { 00:19:40.179 "dma_device_id": "system", 00:19:40.179 "dma_device_type": 1 00:19:40.179 }, 00:19:40.179 { 00:19:40.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:40.179 "dma_device_type": 2 00:19:40.179 } 00:19:40.179 ], 00:19:40.179 "driver_specific": { 00:19:40.179 "passthru": { 00:19:40.179 "name": "pt3", 00:19:40.179 "base_bdev_name": "malloc3" 00:19:40.179 } 00:19:40.179 } 00:19:40.179 }' 00:19:40.179 04:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:40.437 04:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:40.437 04:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:40.437 04:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:40.437 04:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:40.437 04:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:40.438 04:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:40.438 04:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:40.438 04:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:40.438 04:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:40.696 04:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:40.696 04:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:40.696 04:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:40.696 04:14:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:19:40.954 [2024-07-23 04:14:49.486439] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:40.954 04:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 8362f2d8-f837-4d56-99a6-c000473c7a21 '!=' 8362f2d8-f837-4d56-99a6-c000473c7a21 ']' 00:19:40.954 04:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:19:40.954 04:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:40.954 04:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:40.954 04:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2673463 00:19:40.954 04:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 2673463 ']' 00:19:40.954 04:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 2673463 00:19:40.954 04:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:19:40.954 04:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:40.954 04:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2673463 00:19:40.954 04:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:40.954 04:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:40.954 04:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2673463' 00:19:40.954 killing process with pid 2673463 00:19:40.954 04:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 2673463 00:19:40.954 [2024-07-23 04:14:49.568418] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:40.954 [2024-07-23 04:14:49.568529] bdev_raid.c: 
486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:40.955 04:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 2673463 00:19:40.955 [2024-07-23 04:14:49.568604] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:40.955 [2024-07-23 04:14:49.568625] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042680 name raid_bdev1, state offline 00:19:41.212 [2024-07-23 04:14:49.884705] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:43.115 04:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:19:43.115 00:19:43.115 real 0m15.342s 00:19:43.115 user 0m25.651s 00:19:43.115 sys 0m2.611s 00:19:43.115 04:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:43.115 04:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.115 ************************************ 00:19:43.115 END TEST raid_superblock_test 00:19:43.115 ************************************ 00:19:43.115 04:14:51 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:43.115 04:14:51 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:19:43.115 04:14:51 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:19:43.115 04:14:51 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:43.115 04:14:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:43.115 ************************************ 00:19:43.115 START TEST raid_read_error_test 00:19:43.115 ************************************ 00:19:43.115 04:14:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 3 read 00:19:43.115 04:14:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:19:43.115 04:14:51 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:19:43.115 04:14:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:19:43.115 04:14:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:19:43.115 04:14:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:43.115 04:14:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:19:43.115 04:14:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:43.115 04:14:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:43.115 04:14:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:19:43.115 04:14:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:43.115 04:14:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:43.115 04:14:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:19:43.115 04:14:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:43.115 04:14:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:43.115 04:14:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:43.115 04:14:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:19:43.115 04:14:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:19:43.115 04:14:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:19:43.115 04:14:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:19:43.115 04:14:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:19:43.115 04:14:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local 
fail_per_s 00:19:43.115 04:14:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:19:43.115 04:14:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:19:43.115 04:14:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:19:43.115 04:14:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:19:43.115 04:14:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.dRXuuR6mYs 00:19:43.115 04:14:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2676392 00:19:43.115 04:14:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2676392 /var/tmp/spdk-raid.sock 00:19:43.115 04:14:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:43.115 04:14:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 2676392 ']' 00:19:43.115 04:14:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:43.115 04:14:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:43.115 04:14:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:43.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:43.115 04:14:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:43.115 04:14:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.115 [2024-07-23 04:14:51.853954] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:19:43.115 [2024-07-23 04:14:51.854077] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2676392 ] 00:19:43.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.374 EAL: Requested device 0000:3d:01.0 cannot be used 00:19:43.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.374 EAL: Requested device 0000:3d:01.1 cannot be used 00:19:43.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.375 EAL: Requested device 0000:3d:01.2 cannot be used 00:19:43.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.375 EAL: Requested device 0000:3d:01.3 cannot be used 00:19:43.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.375 EAL: Requested device 0000:3d:01.4 cannot be used 00:19:43.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.375 EAL: Requested device 0000:3d:01.5 cannot be used 00:19:43.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.375 EAL: Requested device 0000:3d:01.6 cannot be used 00:19:43.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.375 EAL: Requested device 0000:3d:01.7 cannot be used 00:19:43.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.375 EAL: Requested device 0000:3d:02.0 cannot be used 00:19:43.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.375 EAL: Requested device 0000:3d:02.1 cannot be used 00:19:43.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.375 EAL: Requested device 0000:3d:02.2 cannot be used 00:19:43.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.375 EAL: Requested device 0000:3d:02.3 cannot be used 
00:19:43.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.375 EAL: Requested device 0000:3d:02.4 cannot be used 00:19:43.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.375 EAL: Requested device 0000:3d:02.5 cannot be used 00:19:43.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.375 EAL: Requested device 0000:3d:02.6 cannot be used 00:19:43.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.375 EAL: Requested device 0000:3d:02.7 cannot be used 00:19:43.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.375 EAL: Requested device 0000:3f:01.0 cannot be used 00:19:43.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.375 EAL: Requested device 0000:3f:01.1 cannot be used 00:19:43.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.375 EAL: Requested device 0000:3f:01.2 cannot be used 00:19:43.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.375 EAL: Requested device 0000:3f:01.3 cannot be used 00:19:43.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.375 EAL: Requested device 0000:3f:01.4 cannot be used 00:19:43.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.375 EAL: Requested device 0000:3f:01.5 cannot be used 00:19:43.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.375 EAL: Requested device 0000:3f:01.6 cannot be used 00:19:43.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.375 EAL: Requested device 0000:3f:01.7 cannot be used 00:19:43.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.375 EAL: Requested device 0000:3f:02.0 cannot be used 00:19:43.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.375 EAL: Requested device 0000:3f:02.1 cannot be used 00:19:43.375 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.375 EAL: Requested device 0000:3f:02.2 cannot be used 00:19:43.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.375 EAL: Requested device 0000:3f:02.3 cannot be used 00:19:43.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.375 EAL: Requested device 0000:3f:02.4 cannot be used 00:19:43.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.375 EAL: Requested device 0000:3f:02.5 cannot be used 00:19:43.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.375 EAL: Requested device 0000:3f:02.6 cannot be used 00:19:43.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.375 EAL: Requested device 0000:3f:02.7 cannot be used 00:19:43.375 [2024-07-23 04:14:52.077999] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.634 [2024-07-23 04:14:52.361691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.202 [2024-07-23 04:14:52.698916] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:44.202 [2024-07-23 04:14:52.698971] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:44.202 04:14:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:44.202 04:14:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:19:44.202 04:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:44.202 04:14:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:44.460 BaseBdev1_malloc 00:19:44.460 04:14:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:19:44.719 true 00:19:44.719 04:14:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:44.977 [2024-07-23 04:14:53.582977] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:44.977 [2024-07-23 04:14:53.583038] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:44.977 [2024-07-23 04:14:53.583065] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:19:44.977 [2024-07-23 04:14:53.583088] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:44.977 [2024-07-23 04:14:53.585826] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:44.977 [2024-07-23 04:14:53.585862] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:44.977 BaseBdev1 00:19:44.977 04:14:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:44.977 04:14:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:45.236 BaseBdev2_malloc 00:19:45.236 04:14:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:19:45.494 true 00:19:45.495 04:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:45.754 [2024-07-23 04:14:54.311174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:19:45.754 [2024-07-23 04:14:54.311232] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:45.754 [2024-07-23 04:14:54.311258] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:19:45.754 [2024-07-23 04:14:54.311280] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:45.754 [2024-07-23 04:14:54.314015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:45.754 [2024-07-23 04:14:54.314053] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:45.754 BaseBdev2 00:19:45.754 04:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:45.754 04:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:46.013 BaseBdev3_malloc 00:19:46.013 04:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:19:46.273 true 00:19:46.273 04:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:19:46.273 [2024-07-23 04:14:55.035836] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:19:46.273 [2024-07-23 04:14:55.035900] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:46.273 [2024-07-23 04:14:55.035927] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041780 00:19:46.273 [2024-07-23 04:14:55.035945] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:46.273 [2024-07-23 
04:14:55.038731] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:46.273 [2024-07-23 04:14:55.038768] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:46.273 BaseBdev3 00:19:46.273 04:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:19:46.533 [2024-07-23 04:14:55.264505] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:46.533 [2024-07-23 04:14:55.266865] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:46.533 [2024-07-23 04:14:55.266956] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:46.533 [2024-07-23 04:14:55.267256] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041d80 00:19:46.533 [2024-07-23 04:14:55.267276] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:46.533 [2024-07-23 04:14:55.267605] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:19:46.533 [2024-07-23 04:14:55.267865] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041d80 00:19:46.533 [2024-07-23 04:14:55.267886] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041d80 00:19:46.533 [2024-07-23 04:14:55.268086] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:46.533 04:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:19:46.533 04:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:46.533 04:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=online 00:19:46.533 04:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:46.533 04:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:46.533 04:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:46.533 04:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:46.533 04:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:46.533 04:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:46.533 04:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:46.533 04:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:46.533 04:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:46.822 04:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:46.822 "name": "raid_bdev1", 00:19:46.822 "uuid": "fc29e7e8-0d70-4708-a170-cab524853845", 00:19:46.822 "strip_size_kb": 64, 00:19:46.822 "state": "online", 00:19:46.822 "raid_level": "concat", 00:19:46.822 "superblock": true, 00:19:46.822 "num_base_bdevs": 3, 00:19:46.822 "num_base_bdevs_discovered": 3, 00:19:46.822 "num_base_bdevs_operational": 3, 00:19:46.822 "base_bdevs_list": [ 00:19:46.822 { 00:19:46.822 "name": "BaseBdev1", 00:19:46.822 "uuid": "745d3a6a-3dfe-5be9-be97-ce3f9e48aa4d", 00:19:46.822 "is_configured": true, 00:19:46.822 "data_offset": 2048, 00:19:46.822 "data_size": 63488 00:19:46.822 }, 00:19:46.822 { 00:19:46.822 "name": "BaseBdev2", 00:19:46.822 "uuid": "7694f935-9104-5b90-a7c4-65f5e9a4eb55", 00:19:46.822 "is_configured": true, 00:19:46.822 "data_offset": 2048, 
00:19:46.822 "data_size": 63488 00:19:46.822 }, 00:19:46.822 { 00:19:46.822 "name": "BaseBdev3", 00:19:46.822 "uuid": "2e0859b4-148e-58cd-b878-79a2675588cd", 00:19:46.822 "is_configured": true, 00:19:46.822 "data_offset": 2048, 00:19:46.822 "data_size": 63488 00:19:46.822 } 00:19:46.822 ] 00:19:46.822 }' 00:19:46.822 04:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:46.822 04:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.391 04:14:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:19:47.391 04:14:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:47.650 [2024-07-23 04:14:56.192846] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:19:48.588 04:14:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:19:48.588 04:14:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:19:48.588 04:14:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:19:48.588 04:14:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:19:48.588 04:14:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:19:48.588 04:14:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:48.588 04:14:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:48.588 04:14:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:48.588 04:14:57 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:48.588 04:14:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:48.588 04:14:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:48.588 04:14:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:48.588 04:14:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:48.588 04:14:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:48.588 04:14:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:48.588 04:14:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:48.848 04:14:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:48.848 "name": "raid_bdev1", 00:19:48.848 "uuid": "fc29e7e8-0d70-4708-a170-cab524853845", 00:19:48.848 "strip_size_kb": 64, 00:19:48.848 "state": "online", 00:19:48.848 "raid_level": "concat", 00:19:48.848 "superblock": true, 00:19:48.848 "num_base_bdevs": 3, 00:19:48.848 "num_base_bdevs_discovered": 3, 00:19:48.848 "num_base_bdevs_operational": 3, 00:19:48.848 "base_bdevs_list": [ 00:19:48.848 { 00:19:48.848 "name": "BaseBdev1", 00:19:48.848 "uuid": "745d3a6a-3dfe-5be9-be97-ce3f9e48aa4d", 00:19:48.848 "is_configured": true, 00:19:48.848 "data_offset": 2048, 00:19:48.848 "data_size": 63488 00:19:48.848 }, 00:19:48.848 { 00:19:48.848 "name": "BaseBdev2", 00:19:48.848 "uuid": "7694f935-9104-5b90-a7c4-65f5e9a4eb55", 00:19:48.848 "is_configured": true, 00:19:48.848 "data_offset": 2048, 00:19:48.848 "data_size": 63488 00:19:48.848 }, 00:19:48.848 { 00:19:48.848 "name": "BaseBdev3", 00:19:48.848 "uuid": "2e0859b4-148e-58cd-b878-79a2675588cd", 
00:19:48.848 "is_configured": true, 00:19:48.848 "data_offset": 2048, 00:19:48.848 "data_size": 63488 00:19:48.848 } 00:19:48.848 ] 00:19:48.848 }' 00:19:48.848 04:14:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:48.848 04:14:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.416 04:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:49.676 [2024-07-23 04:14:58.337488] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:49.676 [2024-07-23 04:14:58.337533] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:49.676 [2024-07-23 04:14:58.340836] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:49.676 [2024-07-23 04:14:58.340891] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:49.676 [2024-07-23 04:14:58.340942] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:49.676 [2024-07-23 04:14:58.340960] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041d80 name raid_bdev1, state offline 00:19:49.676 0 00:19:49.676 04:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2676392 00:19:49.676 04:14:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 2676392 ']' 00:19:49.676 04:14:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 2676392 00:19:49.676 04:14:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:19:49.676 04:14:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:49.676 04:14:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2676392 
00:19:49.676 04:14:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:49.676 04:14:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:49.676 04:14:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2676392' 00:19:49.676 killing process with pid 2676392 00:19:49.676 04:14:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 2676392 00:19:49.676 [2024-07-23 04:14:58.414160] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:49.676 04:14:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 2676392 00:19:49.935 [2024-07-23 04:14:58.658369] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:51.841 04:15:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.dRXuuR6mYs 00:19:51.841 04:15:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:19:51.841 04:15:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:19:51.841 04:15:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.47 00:19:51.841 04:15:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:19:51.841 04:15:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:51.841 04:15:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:51.841 04:15:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.47 != \0\.\0\0 ]] 00:19:51.841 00:19:51.841 real 0m8.769s 00:19:51.841 user 0m12.335s 00:19:51.841 sys 0m1.366s 00:19:51.841 04:15:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:51.841 04:15:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:51.841 ************************************ 
00:19:51.841 END TEST raid_read_error_test 00:19:51.841 ************************************ 00:19:51.841 04:15:00 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:51.841 04:15:00 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:19:51.841 04:15:00 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:19:51.841 04:15:00 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:51.841 04:15:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:51.841 ************************************ 00:19:51.841 START TEST raid_write_error_test 00:19:51.841 ************************************ 00:19:51.841 04:15:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 3 write 00:19:51.841 04:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:19:51.841 04:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:19:51.841 04:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:19:51.841 04:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:19:51.841 04:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:51.841 04:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:19:51.841 04:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:51.841 04:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:51.841 04:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:19:51.841 04:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:51.841 04:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:51.841 04:15:00 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:19:51.841 04:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:51.841 04:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:51.841 04:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:51.841 04:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:19:51.841 04:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:19:51.841 04:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:19:51.841 04:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:19:51.841 04:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:19:51.841 04:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:19:51.841 04:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:19:51.841 04:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:19:51.841 04:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:19:51.841 04:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:19:51.841 04:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.CCiSqxtA5B 00:19:51.841 04:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2677865 00:19:51.841 04:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2677865 /var/tmp/spdk-raid.sock 00:19:51.841 04:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T 
raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:51.841 04:15:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 2677865 ']' 00:19:51.841 04:15:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:51.841 04:15:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:51.841 04:15:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:51.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:51.841 04:15:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:51.841 04:15:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.100 [2024-07-23 04:15:00.702977] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:19:52.100 [2024-07-23 04:15:00.703103] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2677865 ] 00:19:52.100 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:52.100 EAL: Requested device 0000:3d:01.0 cannot be used 00:19:52.362 [2024-07-23 04:15:00.930261] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.621 [2024-07-23 04:15:01.216606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.880 [2024-07-23 04:15:01.543755] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:52.880 [2024-07-23 04:15:01.543791] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:53.138 04:15:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:53.138 04:15:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:19:53.138 04:15:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:53.138 04:15:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:53.397 BaseBdev1_malloc 00:19:53.397 04:15:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:19:53.656 true 00:19:53.656 04:15:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:53.656 [2024-07-23 04:15:02.424323] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:53.656 [2024-07-23 04:15:02.424385] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:53.656 [2024-07-23 04:15:02.424412] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:19:53.656 [2024-07-23 04:15:02.424434] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:53.656 [2024-07-23 04:15:02.427245] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:53.656 [2024-07-23 04:15:02.427285] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:53.656 BaseBdev1 00:19:53.915 04:15:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:53.915 04:15:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:54.174 BaseBdev2_malloc 00:19:54.174 04:15:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:19:54.174 true 00:19:54.174 04:15:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:54.433 [2024-07-23 04:15:03.156622] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on EE_BaseBdev2_malloc 00:19:54.433 [2024-07-23 04:15:03.156679] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:54.433 [2024-07-23 04:15:03.156704] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:19:54.433 [2024-07-23 04:15:03.156726] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:54.433 [2024-07-23 04:15:03.159490] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:54.433 [2024-07-23 04:15:03.159527] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:54.433 BaseBdev2 00:19:54.433 04:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:54.433 04:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:54.692 BaseBdev3_malloc 00:19:54.692 04:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:19:54.951 true 00:19:54.951 04:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:19:55.210 [2024-07-23 04:15:03.893719] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:19:55.210 [2024-07-23 04:15:03.893778] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:55.210 [2024-07-23 04:15:03.893806] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041780 00:19:55.210 [2024-07-23 04:15:03.893824] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:55.210 
[2024-07-23 04:15:03.896638] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:55.210 [2024-07-23 04:15:03.896676] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:55.210 BaseBdev3 00:19:55.210 04:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:19:55.468 [2024-07-23 04:15:04.122380] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:55.468 [2024-07-23 04:15:04.124779] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:55.468 [2024-07-23 04:15:04.124870] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:55.468 [2024-07-23 04:15:04.125174] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041d80 00:19:55.468 [2024-07-23 04:15:04.125193] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:55.468 [2024-07-23 04:15:04.125531] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:19:55.468 [2024-07-23 04:15:04.125787] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041d80 00:19:55.468 [2024-07-23 04:15:04.125809] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041d80 00:19:55.468 [2024-07-23 04:15:04.126014] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:55.468 04:15:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:19:55.468 04:15:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:55.468 04:15:04 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:55.468 04:15:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:55.468 04:15:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:55.468 04:15:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:55.468 04:15:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:55.468 04:15:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:55.468 04:15:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:55.468 04:15:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:55.468 04:15:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:55.468 04:15:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.727 04:15:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:55.727 "name": "raid_bdev1", 00:19:55.727 "uuid": "6174f40b-da7a-49ca-a323-4cb62feb11da", 00:19:55.727 "strip_size_kb": 64, 00:19:55.727 "state": "online", 00:19:55.727 "raid_level": "concat", 00:19:55.727 "superblock": true, 00:19:55.727 "num_base_bdevs": 3, 00:19:55.727 "num_base_bdevs_discovered": 3, 00:19:55.727 "num_base_bdevs_operational": 3, 00:19:55.727 "base_bdevs_list": [ 00:19:55.727 { 00:19:55.727 "name": "BaseBdev1", 00:19:55.727 "uuid": "b2abcc86-49c0-5684-9e7c-ff55e286c23d", 00:19:55.727 "is_configured": true, 00:19:55.727 "data_offset": 2048, 00:19:55.727 "data_size": 63488 00:19:55.727 }, 00:19:55.727 { 00:19:55.727 "name": "BaseBdev2", 00:19:55.727 "uuid": "32831ab2-41f9-5dbb-a5af-2e1f9895ba4e", 00:19:55.727 "is_configured": true, 
00:19:55.727 "data_offset": 2048, 00:19:55.727 "data_size": 63488 00:19:55.727 }, 00:19:55.727 { 00:19:55.727 "name": "BaseBdev3", 00:19:55.727 "uuid": "78c23ec1-0724-5018-848f-23fb1901c81e", 00:19:55.727 "is_configured": true, 00:19:55.727 "data_offset": 2048, 00:19:55.727 "data_size": 63488 00:19:55.727 } 00:19:55.727 ] 00:19:55.727 }' 00:19:55.727 04:15:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:55.727 04:15:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.294 04:15:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:19:56.294 04:15:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:56.294 [2024-07-23 04:15:05.039149] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:19:57.230 04:15:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:19:57.489 04:15:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:19:57.489 04:15:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:19:57.489 04:15:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:19:57.489 04:15:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:19:57.489 04:15:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:57.489 04:15:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:57.489 04:15:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local 
raid_level=concat 00:19:57.489 04:15:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:57.489 04:15:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:57.489 04:15:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:57.489 04:15:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:57.489 04:15:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:57.489 04:15:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:57.489 04:15:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:57.489 04:15:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.748 04:15:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:57.748 "name": "raid_bdev1", 00:19:57.748 "uuid": "6174f40b-da7a-49ca-a323-4cb62feb11da", 00:19:57.748 "strip_size_kb": 64, 00:19:57.748 "state": "online", 00:19:57.748 "raid_level": "concat", 00:19:57.748 "superblock": true, 00:19:57.748 "num_base_bdevs": 3, 00:19:57.748 "num_base_bdevs_discovered": 3, 00:19:57.748 "num_base_bdevs_operational": 3, 00:19:57.748 "base_bdevs_list": [ 00:19:57.748 { 00:19:57.748 "name": "BaseBdev1", 00:19:57.748 "uuid": "b2abcc86-49c0-5684-9e7c-ff55e286c23d", 00:19:57.748 "is_configured": true, 00:19:57.748 "data_offset": 2048, 00:19:57.748 "data_size": 63488 00:19:57.748 }, 00:19:57.748 { 00:19:57.748 "name": "BaseBdev2", 00:19:57.748 "uuid": "32831ab2-41f9-5dbb-a5af-2e1f9895ba4e", 00:19:57.748 "is_configured": true, 00:19:57.748 "data_offset": 2048, 00:19:57.748 "data_size": 63488 00:19:57.748 }, 00:19:57.748 { 00:19:57.748 "name": "BaseBdev3", 00:19:57.748 
"uuid": "78c23ec1-0724-5018-848f-23fb1901c81e", 00:19:57.748 "is_configured": true, 00:19:57.748 "data_offset": 2048, 00:19:57.748 "data_size": 63488 00:19:57.748 } 00:19:57.748 ] 00:19:57.748 }' 00:19:57.748 04:15:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:57.748 04:15:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.316 04:15:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:58.575 [2024-07-23 04:15:07.195163] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:58.575 [2024-07-23 04:15:07.195202] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:58.575 [2024-07-23 04:15:07.198459] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:58.575 [2024-07-23 04:15:07.198511] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:58.575 [2024-07-23 04:15:07.198559] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:58.575 [2024-07-23 04:15:07.198575] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041d80 name raid_bdev1, state offline 00:19:58.575 0 00:19:58.575 04:15:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2677865 00:19:58.575 04:15:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 2677865 ']' 00:19:58.575 04:15:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 2677865 00:19:58.575 04:15:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:19:58.575 04:15:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:58.575 04:15:07 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2677865 00:19:58.575 04:15:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:58.575 04:15:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:58.575 04:15:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2677865' 00:19:58.575 killing process with pid 2677865 00:19:58.575 04:15:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 2677865 00:19:58.575 [2024-07-23 04:15:07.269480] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:58.575 04:15:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 2677865 00:19:58.834 [2024-07-23 04:15:07.508691] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:00.738 04:15:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.CCiSqxtA5B 00:20:00.738 04:15:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:20:00.738 04:15:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:20:00.738 04:15:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.47 00:20:00.738 04:15:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:20:00.738 04:15:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:00.738 04:15:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:20:00.738 04:15:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.47 != \0\.\0\0 ]] 00:20:00.738 00:20:00.738 real 0m8.758s 00:20:00.738 user 0m12.380s 00:20:00.738 sys 0m1.353s 00:20:00.738 04:15:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:00.738 04:15:09 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:20:00.738 ************************************ 00:20:00.738 END TEST raid_write_error_test 00:20:00.738 ************************************ 00:20:00.738 04:15:09 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:20:00.738 04:15:09 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:20:00.738 04:15:09 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:20:00.738 04:15:09 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:20:00.738 04:15:09 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:00.738 04:15:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:00.738 ************************************ 00:20:00.738 START TEST raid_state_function_test 00:20:00.738 ************************************ 00:20:00.738 04:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 3 false 00:20:00.738 04:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:20:00.738 04:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:20:00.738 04:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:20:00.738 04:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:20:00.738 04:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:20:00.738 04:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:00.738 04:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:20:00.738 04:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:00.738 04:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:00.738 
04:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:20:00.738 04:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:00.738 04:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:00.738 04:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:20:00.738 04:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:00.738 04:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:00.738 04:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:00.738 04:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:20:00.738 04:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:20:00.738 04:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:20:00.738 04:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:20:00.738 04:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:20:00.738 04:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:20:00.738 04:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:20:00.738 04:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:20:00.738 04:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:20:00.738 04:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2680035 00:20:00.738 04:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2680035' 00:20:00.738 Process raid pid: 2680035 
00:20:00.738 04:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:00.738 04:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2680035 /var/tmp/spdk-raid.sock 00:20:00.738 04:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 2680035 ']' 00:20:00.738 04:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:00.738 04:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:00.738 04:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:00.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:00.738 04:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:00.738 04:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.024 [2024-07-23 04:15:09.538652] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:20:01.024 [2024-07-23 04:15:09.538765] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:01.024 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:01.024 EAL: Requested device 0000:3d:01.0 cannot be used 00:20:01.025 [2024-07-23 04:15:09.767202] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.592 [2024-07-23 04:15:10.073292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.850 [2024-07-23 04:15:10.403534] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:01.850 [2024-07-23 04:15:10.403571] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:01.850 04:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:01.850 04:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:20:01.850 04:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:02.109 [2024-07-23 04:15:10.780969] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:02.109 [2024-07-23 04:15:10.781029] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist 
now 00:20:02.109 [2024-07-23 04:15:10.781044] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:02.109 [2024-07-23 04:15:10.781060] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:02.109 [2024-07-23 04:15:10.781071] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:02.109 [2024-07-23 04:15:10.781087] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:02.109 04:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:02.109 04:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:02.109 04:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:02.109 04:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:02.109 04:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:02.109 04:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:02.109 04:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:02.109 04:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:02.109 04:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:02.109 04:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:02.109 04:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:02.109 04:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:20:02.371 04:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:02.371 "name": "Existed_Raid", 00:20:02.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.371 "strip_size_kb": 0, 00:20:02.371 "state": "configuring", 00:20:02.371 "raid_level": "raid1", 00:20:02.371 "superblock": false, 00:20:02.371 "num_base_bdevs": 3, 00:20:02.371 "num_base_bdevs_discovered": 0, 00:20:02.371 "num_base_bdevs_operational": 3, 00:20:02.371 "base_bdevs_list": [ 00:20:02.371 { 00:20:02.371 "name": "BaseBdev1", 00:20:02.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.371 "is_configured": false, 00:20:02.371 "data_offset": 0, 00:20:02.371 "data_size": 0 00:20:02.371 }, 00:20:02.371 { 00:20:02.371 "name": "BaseBdev2", 00:20:02.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.371 "is_configured": false, 00:20:02.371 "data_offset": 0, 00:20:02.371 "data_size": 0 00:20:02.371 }, 00:20:02.371 { 00:20:02.371 "name": "BaseBdev3", 00:20:02.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.371 "is_configured": false, 00:20:02.371 "data_offset": 0, 00:20:02.371 "data_size": 0 00:20:02.371 } 00:20:02.371 ] 00:20:02.371 }' 00:20:02.371 04:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:02.371 04:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.937 04:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:03.196 [2024-07-23 04:15:11.771521] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:03.196 [2024-07-23 04:15:11.771569] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:20:03.196 04:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:03.454 [2024-07-23 04:15:11.992137] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:03.454 [2024-07-23 04:15:11.992201] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:03.454 [2024-07-23 04:15:11.992216] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:03.454 [2024-07-23 04:15:11.992236] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:03.454 [2024-07-23 04:15:11.992247] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:03.454 [2024-07-23 04:15:11.992263] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:03.455 04:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:03.713 [2024-07-23 04:15:12.276814] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:03.713 BaseBdev1 00:20:03.713 04:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:20:03.713 04:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:20:03.713 04:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:03.713 04:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:03.713 04:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:03.713 04:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:03.713 
04:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:03.972 04:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:03.972 [ 00:20:03.972 { 00:20:03.972 "name": "BaseBdev1", 00:20:03.972 "aliases": [ 00:20:03.972 "a4cf9935-f049-4267-bee3-057352d0afdd" 00:20:03.972 ], 00:20:03.972 "product_name": "Malloc disk", 00:20:03.972 "block_size": 512, 00:20:03.972 "num_blocks": 65536, 00:20:03.972 "uuid": "a4cf9935-f049-4267-bee3-057352d0afdd", 00:20:03.972 "assigned_rate_limits": { 00:20:03.972 "rw_ios_per_sec": 0, 00:20:03.972 "rw_mbytes_per_sec": 0, 00:20:03.972 "r_mbytes_per_sec": 0, 00:20:03.972 "w_mbytes_per_sec": 0 00:20:03.972 }, 00:20:03.972 "claimed": true, 00:20:03.972 "claim_type": "exclusive_write", 00:20:03.972 "zoned": false, 00:20:03.972 "supported_io_types": { 00:20:03.972 "read": true, 00:20:03.972 "write": true, 00:20:03.972 "unmap": true, 00:20:03.972 "flush": true, 00:20:03.972 "reset": true, 00:20:03.972 "nvme_admin": false, 00:20:03.972 "nvme_io": false, 00:20:03.972 "nvme_io_md": false, 00:20:03.972 "write_zeroes": true, 00:20:03.972 "zcopy": true, 00:20:03.972 "get_zone_info": false, 00:20:03.972 "zone_management": false, 00:20:03.972 "zone_append": false, 00:20:03.972 "compare": false, 00:20:03.972 "compare_and_write": false, 00:20:03.972 "abort": true, 00:20:03.972 "seek_hole": false, 00:20:03.972 "seek_data": false, 00:20:03.972 "copy": true, 00:20:03.972 "nvme_iov_md": false 00:20:03.972 }, 00:20:03.972 "memory_domains": [ 00:20:03.972 { 00:20:03.972 "dma_device_id": "system", 00:20:03.972 "dma_device_type": 1 00:20:03.972 }, 00:20:03.972 { 00:20:03.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:03.972 "dma_device_type": 2 00:20:03.972 } 
00:20:03.972 ], 00:20:03.972 "driver_specific": {} 00:20:03.972 } 00:20:03.972 ] 00:20:03.972 04:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:03.972 04:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:03.972 04:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:03.972 04:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:03.972 04:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:03.972 04:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:04.231 04:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:04.231 04:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:04.231 04:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:04.231 04:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:04.231 04:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:04.231 04:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:04.231 04:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:04.231 04:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:04.231 "name": "Existed_Raid", 00:20:04.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.231 "strip_size_kb": 0, 00:20:04.231 "state": "configuring", 00:20:04.231 "raid_level": "raid1", 00:20:04.231 
"superblock": false, 00:20:04.231 "num_base_bdevs": 3, 00:20:04.231 "num_base_bdevs_discovered": 1, 00:20:04.231 "num_base_bdevs_operational": 3, 00:20:04.231 "base_bdevs_list": [ 00:20:04.231 { 00:20:04.231 "name": "BaseBdev1", 00:20:04.231 "uuid": "a4cf9935-f049-4267-bee3-057352d0afdd", 00:20:04.231 "is_configured": true, 00:20:04.231 "data_offset": 0, 00:20:04.231 "data_size": 65536 00:20:04.231 }, 00:20:04.231 { 00:20:04.231 "name": "BaseBdev2", 00:20:04.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.231 "is_configured": false, 00:20:04.231 "data_offset": 0, 00:20:04.231 "data_size": 0 00:20:04.231 }, 00:20:04.231 { 00:20:04.231 "name": "BaseBdev3", 00:20:04.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.231 "is_configured": false, 00:20:04.231 "data_offset": 0, 00:20:04.231 "data_size": 0 00:20:04.231 } 00:20:04.231 ] 00:20:04.231 }' 00:20:04.231 04:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:04.231 04:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.798 04:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:05.057 [2024-07-23 04:15:13.752864] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:05.057 [2024-07-23 04:15:13.752925] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:20:05.057 04:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:05.316 [2024-07-23 04:15:13.981577] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:05.316 [2024-07-23 04:15:13.983905] 
bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:05.316 [2024-07-23 04:15:13.983951] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:05.316 [2024-07-23 04:15:13.983966] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:05.316 [2024-07-23 04:15:13.983982] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:05.316 04:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:20:05.316 04:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:05.316 04:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:05.316 04:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:05.316 04:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:05.316 04:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:05.316 04:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:05.316 04:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:05.316 04:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:05.316 04:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:05.316 04:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:05.316 04:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:05.316 04:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:05.316 04:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:05.575 04:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:05.575 "name": "Existed_Raid", 00:20:05.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.575 "strip_size_kb": 0, 00:20:05.575 "state": "configuring", 00:20:05.575 "raid_level": "raid1", 00:20:05.575 "superblock": false, 00:20:05.575 "num_base_bdevs": 3, 00:20:05.575 "num_base_bdevs_discovered": 1, 00:20:05.575 "num_base_bdevs_operational": 3, 00:20:05.575 "base_bdevs_list": [ 00:20:05.575 { 00:20:05.575 "name": "BaseBdev1", 00:20:05.575 "uuid": "a4cf9935-f049-4267-bee3-057352d0afdd", 00:20:05.575 "is_configured": true, 00:20:05.575 "data_offset": 0, 00:20:05.575 "data_size": 65536 00:20:05.575 }, 00:20:05.575 { 00:20:05.575 "name": "BaseBdev2", 00:20:05.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.575 "is_configured": false, 00:20:05.575 "data_offset": 0, 00:20:05.575 "data_size": 0 00:20:05.575 }, 00:20:05.575 { 00:20:05.575 "name": "BaseBdev3", 00:20:05.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.575 "is_configured": false, 00:20:05.575 "data_offset": 0, 00:20:05.575 "data_size": 0 00:20:05.575 } 00:20:05.575 ] 00:20:05.575 }' 00:20:05.575 04:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:05.575 04:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.143 04:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:06.402 [2024-07-23 04:15:15.053905] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:06.402 BaseBdev2 00:20:06.402 04:15:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:20:06.402 04:15:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:20:06.402 04:15:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:06.402 04:15:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:06.402 04:15:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:06.402 04:15:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:06.402 04:15:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:06.661 04:15:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:06.921 [ 00:20:06.921 { 00:20:06.921 "name": "BaseBdev2", 00:20:06.921 "aliases": [ 00:20:06.921 "69e82653-ded7-4d32-9603-0ce05ae6cac6" 00:20:06.921 ], 00:20:06.921 "product_name": "Malloc disk", 00:20:06.921 "block_size": 512, 00:20:06.921 "num_blocks": 65536, 00:20:06.921 "uuid": "69e82653-ded7-4d32-9603-0ce05ae6cac6", 00:20:06.921 "assigned_rate_limits": { 00:20:06.921 "rw_ios_per_sec": 0, 00:20:06.921 "rw_mbytes_per_sec": 0, 00:20:06.921 "r_mbytes_per_sec": 0, 00:20:06.921 "w_mbytes_per_sec": 0 00:20:06.921 }, 00:20:06.921 "claimed": true, 00:20:06.921 "claim_type": "exclusive_write", 00:20:06.921 "zoned": false, 00:20:06.921 "supported_io_types": { 00:20:06.921 "read": true, 00:20:06.921 "write": true, 00:20:06.921 "unmap": true, 00:20:06.921 "flush": true, 00:20:06.921 "reset": true, 00:20:06.921 "nvme_admin": false, 00:20:06.921 "nvme_io": false, 00:20:06.921 "nvme_io_md": false, 00:20:06.921 
"write_zeroes": true, 00:20:06.921 "zcopy": true, 00:20:06.921 "get_zone_info": false, 00:20:06.921 "zone_management": false, 00:20:06.921 "zone_append": false, 00:20:06.921 "compare": false, 00:20:06.921 "compare_and_write": false, 00:20:06.921 "abort": true, 00:20:06.921 "seek_hole": false, 00:20:06.921 "seek_data": false, 00:20:06.921 "copy": true, 00:20:06.921 "nvme_iov_md": false 00:20:06.921 }, 00:20:06.921 "memory_domains": [ 00:20:06.921 { 00:20:06.921 "dma_device_id": "system", 00:20:06.921 "dma_device_type": 1 00:20:06.921 }, 00:20:06.921 { 00:20:06.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:06.921 "dma_device_type": 2 00:20:06.921 } 00:20:06.921 ], 00:20:06.921 "driver_specific": {} 00:20:06.921 } 00:20:06.921 ] 00:20:06.921 04:15:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:06.921 04:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:06.921 04:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:06.921 04:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:06.921 04:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:06.921 04:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:06.921 04:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:06.921 04:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:06.921 04:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:06.921 04:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:06.921 04:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:20:06.921 04:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:06.921 04:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:06.921 04:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:06.921 04:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:07.181 04:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:07.181 "name": "Existed_Raid", 00:20:07.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:07.181 "strip_size_kb": 0, 00:20:07.181 "state": "configuring", 00:20:07.181 "raid_level": "raid1", 00:20:07.181 "superblock": false, 00:20:07.181 "num_base_bdevs": 3, 00:20:07.181 "num_base_bdevs_discovered": 2, 00:20:07.181 "num_base_bdevs_operational": 3, 00:20:07.181 "base_bdevs_list": [ 00:20:07.181 { 00:20:07.181 "name": "BaseBdev1", 00:20:07.181 "uuid": "a4cf9935-f049-4267-bee3-057352d0afdd", 00:20:07.181 "is_configured": true, 00:20:07.181 "data_offset": 0, 00:20:07.181 "data_size": 65536 00:20:07.181 }, 00:20:07.181 { 00:20:07.181 "name": "BaseBdev2", 00:20:07.181 "uuid": "69e82653-ded7-4d32-9603-0ce05ae6cac6", 00:20:07.181 "is_configured": true, 00:20:07.181 "data_offset": 0, 00:20:07.181 "data_size": 65536 00:20:07.181 }, 00:20:07.181 { 00:20:07.181 "name": "BaseBdev3", 00:20:07.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:07.181 "is_configured": false, 00:20:07.181 "data_offset": 0, 00:20:07.181 "data_size": 0 00:20:07.181 } 00:20:07.181 ] 00:20:07.181 }' 00:20:07.181 04:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:07.181 04:15:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.749 04:15:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:08.009 [2024-07-23 04:15:16.583678] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:08.009 [2024-07-23 04:15:16.583738] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:20:08.009 [2024-07-23 04:15:16.583758] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:08.009 [2024-07-23 04:15:16.584088] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:20:08.009 [2024-07-23 04:15:16.584374] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:20:08.009 [2024-07-23 04:15:16.584391] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:20:08.009 [2024-07-23 04:15:16.584694] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:08.009 BaseBdev3 00:20:08.009 04:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:20:08.009 04:15:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:20:08.009 04:15:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:08.009 04:15:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:08.009 04:15:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:08.009 04:15:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:08.009 04:15:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 
00:20:08.269 04:15:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:08.269 [ 00:20:08.269 { 00:20:08.269 "name": "BaseBdev3", 00:20:08.269 "aliases": [ 00:20:08.269 "3f3e6541-179c-431a-8c64-4659c6a925c8" 00:20:08.269 ], 00:20:08.269 "product_name": "Malloc disk", 00:20:08.269 "block_size": 512, 00:20:08.269 "num_blocks": 65536, 00:20:08.269 "uuid": "3f3e6541-179c-431a-8c64-4659c6a925c8", 00:20:08.269 "assigned_rate_limits": { 00:20:08.269 "rw_ios_per_sec": 0, 00:20:08.269 "rw_mbytes_per_sec": 0, 00:20:08.269 "r_mbytes_per_sec": 0, 00:20:08.269 "w_mbytes_per_sec": 0 00:20:08.269 }, 00:20:08.269 "claimed": true, 00:20:08.269 "claim_type": "exclusive_write", 00:20:08.269 "zoned": false, 00:20:08.269 "supported_io_types": { 00:20:08.269 "read": true, 00:20:08.269 "write": true, 00:20:08.269 "unmap": true, 00:20:08.269 "flush": true, 00:20:08.269 "reset": true, 00:20:08.269 "nvme_admin": false, 00:20:08.269 "nvme_io": false, 00:20:08.269 "nvme_io_md": false, 00:20:08.269 "write_zeroes": true, 00:20:08.269 "zcopy": true, 00:20:08.269 "get_zone_info": false, 00:20:08.269 "zone_management": false, 00:20:08.269 "zone_append": false, 00:20:08.269 "compare": false, 00:20:08.269 "compare_and_write": false, 00:20:08.269 "abort": true, 00:20:08.269 "seek_hole": false, 00:20:08.269 "seek_data": false, 00:20:08.269 "copy": true, 00:20:08.269 "nvme_iov_md": false 00:20:08.269 }, 00:20:08.269 "memory_domains": [ 00:20:08.269 { 00:20:08.269 "dma_device_id": "system", 00:20:08.269 "dma_device_type": 1 00:20:08.269 }, 00:20:08.269 { 00:20:08.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:08.269 "dma_device_type": 2 00:20:08.269 } 00:20:08.269 ], 00:20:08.269 "driver_specific": {} 00:20:08.269 } 00:20:08.269 ] 00:20:08.269 04:15:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:08.269 
04:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:08.269 04:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:08.269 04:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:20:08.269 04:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:08.528 04:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:08.528 04:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:08.528 04:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:08.528 04:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:08.528 04:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:08.528 04:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:08.528 04:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:08.528 04:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:08.528 04:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:08.528 04:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:08.528 04:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:08.528 "name": "Existed_Raid", 00:20:08.528 "uuid": "68c48a3c-0ccd-4f29-8a7d-5ede1e15dbf4", 00:20:08.528 "strip_size_kb": 0, 00:20:08.528 "state": "online", 00:20:08.528 "raid_level": "raid1", 00:20:08.528 
"superblock": false, 00:20:08.528 "num_base_bdevs": 3, 00:20:08.528 "num_base_bdevs_discovered": 3, 00:20:08.528 "num_base_bdevs_operational": 3, 00:20:08.529 "base_bdevs_list": [ 00:20:08.529 { 00:20:08.529 "name": "BaseBdev1", 00:20:08.529 "uuid": "a4cf9935-f049-4267-bee3-057352d0afdd", 00:20:08.529 "is_configured": true, 00:20:08.529 "data_offset": 0, 00:20:08.529 "data_size": 65536 00:20:08.529 }, 00:20:08.529 { 00:20:08.529 "name": "BaseBdev2", 00:20:08.529 "uuid": "69e82653-ded7-4d32-9603-0ce05ae6cac6", 00:20:08.529 "is_configured": true, 00:20:08.529 "data_offset": 0, 00:20:08.529 "data_size": 65536 00:20:08.529 }, 00:20:08.529 { 00:20:08.529 "name": "BaseBdev3", 00:20:08.529 "uuid": "3f3e6541-179c-431a-8c64-4659c6a925c8", 00:20:08.529 "is_configured": true, 00:20:08.529 "data_offset": 0, 00:20:08.529 "data_size": 65536 00:20:08.529 } 00:20:08.529 ] 00:20:08.529 }' 00:20:08.529 04:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:08.529 04:15:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.098 04:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:20:09.098 04:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:09.098 04:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:09.098 04:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:09.098 04:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:09.098 04:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:09.098 04:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:09.098 04:15:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:09.357 [2024-07-23 04:15:18.076127] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:09.357 04:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:09.357 "name": "Existed_Raid", 00:20:09.357 "aliases": [ 00:20:09.357 "68c48a3c-0ccd-4f29-8a7d-5ede1e15dbf4" 00:20:09.357 ], 00:20:09.357 "product_name": "Raid Volume", 00:20:09.357 "block_size": 512, 00:20:09.357 "num_blocks": 65536, 00:20:09.357 "uuid": "68c48a3c-0ccd-4f29-8a7d-5ede1e15dbf4", 00:20:09.357 "assigned_rate_limits": { 00:20:09.357 "rw_ios_per_sec": 0, 00:20:09.357 "rw_mbytes_per_sec": 0, 00:20:09.357 "r_mbytes_per_sec": 0, 00:20:09.357 "w_mbytes_per_sec": 0 00:20:09.357 }, 00:20:09.357 "claimed": false, 00:20:09.357 "zoned": false, 00:20:09.357 "supported_io_types": { 00:20:09.357 "read": true, 00:20:09.357 "write": true, 00:20:09.357 "unmap": false, 00:20:09.357 "flush": false, 00:20:09.357 "reset": true, 00:20:09.357 "nvme_admin": false, 00:20:09.357 "nvme_io": false, 00:20:09.357 "nvme_io_md": false, 00:20:09.357 "write_zeroes": true, 00:20:09.357 "zcopy": false, 00:20:09.357 "get_zone_info": false, 00:20:09.357 "zone_management": false, 00:20:09.357 "zone_append": false, 00:20:09.357 "compare": false, 00:20:09.357 "compare_and_write": false, 00:20:09.357 "abort": false, 00:20:09.357 "seek_hole": false, 00:20:09.357 "seek_data": false, 00:20:09.357 "copy": false, 00:20:09.357 "nvme_iov_md": false 00:20:09.357 }, 00:20:09.357 "memory_domains": [ 00:20:09.357 { 00:20:09.357 "dma_device_id": "system", 00:20:09.357 "dma_device_type": 1 00:20:09.357 }, 00:20:09.357 { 00:20:09.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:09.357 "dma_device_type": 2 00:20:09.357 }, 00:20:09.357 { 00:20:09.357 "dma_device_id": "system", 00:20:09.357 "dma_device_type": 1 00:20:09.357 }, 00:20:09.357 { 00:20:09.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:20:09.357 "dma_device_type": 2 00:20:09.357 }, 00:20:09.357 { 00:20:09.357 "dma_device_id": "system", 00:20:09.357 "dma_device_type": 1 00:20:09.357 }, 00:20:09.357 { 00:20:09.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:09.357 "dma_device_type": 2 00:20:09.357 } 00:20:09.357 ], 00:20:09.357 "driver_specific": { 00:20:09.357 "raid": { 00:20:09.357 "uuid": "68c48a3c-0ccd-4f29-8a7d-5ede1e15dbf4", 00:20:09.357 "strip_size_kb": 0, 00:20:09.357 "state": "online", 00:20:09.357 "raid_level": "raid1", 00:20:09.357 "superblock": false, 00:20:09.357 "num_base_bdevs": 3, 00:20:09.357 "num_base_bdevs_discovered": 3, 00:20:09.357 "num_base_bdevs_operational": 3, 00:20:09.357 "base_bdevs_list": [ 00:20:09.357 { 00:20:09.357 "name": "BaseBdev1", 00:20:09.357 "uuid": "a4cf9935-f049-4267-bee3-057352d0afdd", 00:20:09.357 "is_configured": true, 00:20:09.357 "data_offset": 0, 00:20:09.357 "data_size": 65536 00:20:09.357 }, 00:20:09.357 { 00:20:09.357 "name": "BaseBdev2", 00:20:09.357 "uuid": "69e82653-ded7-4d32-9603-0ce05ae6cac6", 00:20:09.357 "is_configured": true, 00:20:09.357 "data_offset": 0, 00:20:09.357 "data_size": 65536 00:20:09.357 }, 00:20:09.357 { 00:20:09.357 "name": "BaseBdev3", 00:20:09.357 "uuid": "3f3e6541-179c-431a-8c64-4659c6a925c8", 00:20:09.357 "is_configured": true, 00:20:09.357 "data_offset": 0, 00:20:09.357 "data_size": 65536 00:20:09.357 } 00:20:09.357 ] 00:20:09.357 } 00:20:09.357 } 00:20:09.357 }' 00:20:09.357 04:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:09.616 04:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:20:09.616 BaseBdev2 00:20:09.616 BaseBdev3' 00:20:09.616 04:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:09.616 04:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:20:09.616 04:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:09.616 04:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:09.616 "name": "BaseBdev1", 00:20:09.616 "aliases": [ 00:20:09.616 "a4cf9935-f049-4267-bee3-057352d0afdd" 00:20:09.616 ], 00:20:09.616 "product_name": "Malloc disk", 00:20:09.616 "block_size": 512, 00:20:09.616 "num_blocks": 65536, 00:20:09.616 "uuid": "a4cf9935-f049-4267-bee3-057352d0afdd", 00:20:09.616 "assigned_rate_limits": { 00:20:09.616 "rw_ios_per_sec": 0, 00:20:09.617 "rw_mbytes_per_sec": 0, 00:20:09.617 "r_mbytes_per_sec": 0, 00:20:09.617 "w_mbytes_per_sec": 0 00:20:09.617 }, 00:20:09.617 "claimed": true, 00:20:09.617 "claim_type": "exclusive_write", 00:20:09.617 "zoned": false, 00:20:09.617 "supported_io_types": { 00:20:09.617 "read": true, 00:20:09.617 "write": true, 00:20:09.617 "unmap": true, 00:20:09.617 "flush": true, 00:20:09.617 "reset": true, 00:20:09.617 "nvme_admin": false, 00:20:09.617 "nvme_io": false, 00:20:09.617 "nvme_io_md": false, 00:20:09.617 "write_zeroes": true, 00:20:09.617 "zcopy": true, 00:20:09.617 "get_zone_info": false, 00:20:09.617 "zone_management": false, 00:20:09.617 "zone_append": false, 00:20:09.617 "compare": false, 00:20:09.617 "compare_and_write": false, 00:20:09.617 "abort": true, 00:20:09.617 "seek_hole": false, 00:20:09.617 "seek_data": false, 00:20:09.617 "copy": true, 00:20:09.617 "nvme_iov_md": false 00:20:09.617 }, 00:20:09.617 "memory_domains": [ 00:20:09.617 { 00:20:09.617 "dma_device_id": "system", 00:20:09.617 "dma_device_type": 1 00:20:09.617 }, 00:20:09.617 { 00:20:09.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:09.617 "dma_device_type": 2 00:20:09.617 } 00:20:09.617 ], 00:20:09.617 "driver_specific": {} 00:20:09.617 }' 00:20:09.617 04:15:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:09.876 04:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:09.876 04:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:09.876 04:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:09.876 04:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:09.876 04:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:09.876 04:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:09.876 04:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:09.876 04:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:09.876 04:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:09.876 04:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:10.135 04:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:10.135 04:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:10.135 04:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:10.135 04:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:10.394 04:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:10.394 "name": "BaseBdev2", 00:20:10.394 "aliases": [ 00:20:10.394 "69e82653-ded7-4d32-9603-0ce05ae6cac6" 00:20:10.394 ], 00:20:10.394 "product_name": "Malloc disk", 00:20:10.394 "block_size": 512, 00:20:10.394 "num_blocks": 65536, 00:20:10.394 "uuid": "69e82653-ded7-4d32-9603-0ce05ae6cac6", 
00:20:10.394 "assigned_rate_limits": { 00:20:10.394 "rw_ios_per_sec": 0, 00:20:10.394 "rw_mbytes_per_sec": 0, 00:20:10.394 "r_mbytes_per_sec": 0, 00:20:10.394 "w_mbytes_per_sec": 0 00:20:10.394 }, 00:20:10.394 "claimed": true, 00:20:10.394 "claim_type": "exclusive_write", 00:20:10.394 "zoned": false, 00:20:10.394 "supported_io_types": { 00:20:10.394 "read": true, 00:20:10.394 "write": true, 00:20:10.394 "unmap": true, 00:20:10.394 "flush": true, 00:20:10.394 "reset": true, 00:20:10.394 "nvme_admin": false, 00:20:10.394 "nvme_io": false, 00:20:10.394 "nvme_io_md": false, 00:20:10.394 "write_zeroes": true, 00:20:10.394 "zcopy": true, 00:20:10.394 "get_zone_info": false, 00:20:10.394 "zone_management": false, 00:20:10.394 "zone_append": false, 00:20:10.394 "compare": false, 00:20:10.394 "compare_and_write": false, 00:20:10.394 "abort": true, 00:20:10.394 "seek_hole": false, 00:20:10.394 "seek_data": false, 00:20:10.394 "copy": true, 00:20:10.394 "nvme_iov_md": false 00:20:10.394 }, 00:20:10.394 "memory_domains": [ 00:20:10.394 { 00:20:10.394 "dma_device_id": "system", 00:20:10.394 "dma_device_type": 1 00:20:10.394 }, 00:20:10.394 { 00:20:10.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:10.394 "dma_device_type": 2 00:20:10.394 } 00:20:10.394 ], 00:20:10.394 "driver_specific": {} 00:20:10.394 }' 00:20:10.394 04:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:10.394 04:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:10.394 04:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:10.394 04:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:10.394 04:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:10.394 04:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:10.394 04:15:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:10.394 04:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:10.654 04:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:10.654 04:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:10.654 04:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:10.654 04:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:10.654 04:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:10.654 04:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:10.654 04:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:10.913 04:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:10.913 "name": "BaseBdev3", 00:20:10.913 "aliases": [ 00:20:10.913 "3f3e6541-179c-431a-8c64-4659c6a925c8" 00:20:10.913 ], 00:20:10.913 "product_name": "Malloc disk", 00:20:10.913 "block_size": 512, 00:20:10.913 "num_blocks": 65536, 00:20:10.913 "uuid": "3f3e6541-179c-431a-8c64-4659c6a925c8", 00:20:10.913 "assigned_rate_limits": { 00:20:10.913 "rw_ios_per_sec": 0, 00:20:10.913 "rw_mbytes_per_sec": 0, 00:20:10.913 "r_mbytes_per_sec": 0, 00:20:10.913 "w_mbytes_per_sec": 0 00:20:10.913 }, 00:20:10.913 "claimed": true, 00:20:10.913 "claim_type": "exclusive_write", 00:20:10.913 "zoned": false, 00:20:10.913 "supported_io_types": { 00:20:10.913 "read": true, 00:20:10.913 "write": true, 00:20:10.913 "unmap": true, 00:20:10.913 "flush": true, 00:20:10.913 "reset": true, 00:20:10.913 "nvme_admin": false, 00:20:10.913 "nvme_io": false, 00:20:10.913 "nvme_io_md": false, 00:20:10.913 "write_zeroes": true, 
00:20:10.913 "zcopy": true, 00:20:10.913 "get_zone_info": false, 00:20:10.913 "zone_management": false, 00:20:10.913 "zone_append": false, 00:20:10.913 "compare": false, 00:20:10.913 "compare_and_write": false, 00:20:10.913 "abort": true, 00:20:10.913 "seek_hole": false, 00:20:10.913 "seek_data": false, 00:20:10.913 "copy": true, 00:20:10.913 "nvme_iov_md": false 00:20:10.913 }, 00:20:10.913 "memory_domains": [ 00:20:10.913 { 00:20:10.913 "dma_device_id": "system", 00:20:10.913 "dma_device_type": 1 00:20:10.913 }, 00:20:10.913 { 00:20:10.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:10.913 "dma_device_type": 2 00:20:10.913 } 00:20:10.913 ], 00:20:10.913 "driver_specific": {} 00:20:10.913 }' 00:20:10.913 04:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:10.913 04:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:10.913 04:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:10.913 04:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:10.913 04:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:10.913 04:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:10.913 04:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:11.172 04:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:11.172 04:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:11.172 04:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:11.172 04:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:11.172 04:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:11.172 04:15:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:11.432 [2024-07-23 04:15:20.057232] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:11.432 04:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:20:11.432 04:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:20:11.432 04:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:11.432 04:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:20:11.432 04:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:20:11.432 04:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:11.432 04:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:11.432 04:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:11.432 04:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:11.432 04:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:11.432 04:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:11.432 04:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:11.432 04:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:11.432 04:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:11.432 04:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:11.432 04:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 
-- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:11.432 04:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:11.691 04:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:11.691 "name": "Existed_Raid", 00:20:11.691 "uuid": "68c48a3c-0ccd-4f29-8a7d-5ede1e15dbf4", 00:20:11.691 "strip_size_kb": 0, 00:20:11.691 "state": "online", 00:20:11.691 "raid_level": "raid1", 00:20:11.691 "superblock": false, 00:20:11.691 "num_base_bdevs": 3, 00:20:11.691 "num_base_bdevs_discovered": 2, 00:20:11.691 "num_base_bdevs_operational": 2, 00:20:11.691 "base_bdevs_list": [ 00:20:11.691 { 00:20:11.691 "name": null, 00:20:11.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.691 "is_configured": false, 00:20:11.691 "data_offset": 0, 00:20:11.691 "data_size": 65536 00:20:11.691 }, 00:20:11.691 { 00:20:11.691 "name": "BaseBdev2", 00:20:11.691 "uuid": "69e82653-ded7-4d32-9603-0ce05ae6cac6", 00:20:11.691 "is_configured": true, 00:20:11.691 "data_offset": 0, 00:20:11.691 "data_size": 65536 00:20:11.691 }, 00:20:11.691 { 00:20:11.691 "name": "BaseBdev3", 00:20:11.691 "uuid": "3f3e6541-179c-431a-8c64-4659c6a925c8", 00:20:11.691 "is_configured": true, 00:20:11.691 "data_offset": 0, 00:20:11.691 "data_size": 65536 00:20:11.691 } 00:20:11.691 ] 00:20:11.691 }' 00:20:11.691 04:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:11.691 04:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.259 04:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:20:12.259 04:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:12.259 04:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:12.259 04:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:12.518 04:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:12.518 04:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:12.518 04:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:12.777 [2024-07-23 04:15:21.356633] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:12.777 04:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:12.777 04:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:12.777 04:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:12.777 04:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:13.036 04:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:13.036 04:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:13.036 04:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:13.295 [2024-07-23 04:15:21.952754] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:13.295 [2024-07-23 04:15:21.952866] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:13.555 
[2024-07-23 04:15:22.088553] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:13.555 [2024-07-23 04:15:22.088607] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:13.555 [2024-07-23 04:15:22.088626] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:20:13.555 04:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:13.555 04:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:13.555 04:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:13.555 04:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:20:13.555 04:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:20:13.555 04:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:20:13.555 04:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:20:13.555 04:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:20:13.555 04:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:13.555 04:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:13.814 BaseBdev2 00:20:14.102 04:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:20:14.102 04:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:20:14.103 04:15:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:14.103 04:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:14.103 04:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:14.103 04:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:14.103 04:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:14.103 04:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:14.361 [ 00:20:14.361 { 00:20:14.361 "name": "BaseBdev2", 00:20:14.361 "aliases": [ 00:20:14.361 "53968e50-f447-4e96-a540-9114363d6db3" 00:20:14.361 ], 00:20:14.361 "product_name": "Malloc disk", 00:20:14.361 "block_size": 512, 00:20:14.361 "num_blocks": 65536, 00:20:14.361 "uuid": "53968e50-f447-4e96-a540-9114363d6db3", 00:20:14.361 "assigned_rate_limits": { 00:20:14.361 "rw_ios_per_sec": 0, 00:20:14.361 "rw_mbytes_per_sec": 0, 00:20:14.361 "r_mbytes_per_sec": 0, 00:20:14.361 "w_mbytes_per_sec": 0 00:20:14.361 }, 00:20:14.361 "claimed": false, 00:20:14.361 "zoned": false, 00:20:14.361 "supported_io_types": { 00:20:14.361 "read": true, 00:20:14.361 "write": true, 00:20:14.361 "unmap": true, 00:20:14.361 "flush": true, 00:20:14.361 "reset": true, 00:20:14.361 "nvme_admin": false, 00:20:14.361 "nvme_io": false, 00:20:14.361 "nvme_io_md": false, 00:20:14.361 "write_zeroes": true, 00:20:14.361 "zcopy": true, 00:20:14.361 "get_zone_info": false, 00:20:14.361 "zone_management": false, 00:20:14.361 "zone_append": false, 00:20:14.361 "compare": false, 00:20:14.361 "compare_and_write": false, 00:20:14.361 "abort": true, 00:20:14.361 "seek_hole": false, 00:20:14.361 "seek_data": 
false, 00:20:14.361 "copy": true, 00:20:14.361 "nvme_iov_md": false 00:20:14.361 }, 00:20:14.361 "memory_domains": [ 00:20:14.361 { 00:20:14.361 "dma_device_id": "system", 00:20:14.361 "dma_device_type": 1 00:20:14.361 }, 00:20:14.361 { 00:20:14.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:14.361 "dma_device_type": 2 00:20:14.361 } 00:20:14.361 ], 00:20:14.361 "driver_specific": {} 00:20:14.361 } 00:20:14.361 ] 00:20:14.361 04:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:14.361 04:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:14.361 04:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:14.361 04:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:14.625 BaseBdev3 00:20:14.625 04:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:20:14.625 04:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:20:14.625 04:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:14.625 04:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:14.625 04:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:14.625 04:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:14.625 04:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:14.885 04:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:15.143 [ 00:20:15.143 { 00:20:15.143 "name": "BaseBdev3", 00:20:15.143 "aliases": [ 00:20:15.143 "34155815-9701-4f46-8f45-be1ace1d7b0a" 00:20:15.143 ], 00:20:15.143 "product_name": "Malloc disk", 00:20:15.143 "block_size": 512, 00:20:15.143 "num_blocks": 65536, 00:20:15.143 "uuid": "34155815-9701-4f46-8f45-be1ace1d7b0a", 00:20:15.143 "assigned_rate_limits": { 00:20:15.143 "rw_ios_per_sec": 0, 00:20:15.143 "rw_mbytes_per_sec": 0, 00:20:15.143 "r_mbytes_per_sec": 0, 00:20:15.143 "w_mbytes_per_sec": 0 00:20:15.143 }, 00:20:15.143 "claimed": false, 00:20:15.143 "zoned": false, 00:20:15.143 "supported_io_types": { 00:20:15.143 "read": true, 00:20:15.143 "write": true, 00:20:15.143 "unmap": true, 00:20:15.143 "flush": true, 00:20:15.143 "reset": true, 00:20:15.143 "nvme_admin": false, 00:20:15.143 "nvme_io": false, 00:20:15.143 "nvme_io_md": false, 00:20:15.143 "write_zeroes": true, 00:20:15.143 "zcopy": true, 00:20:15.143 "get_zone_info": false, 00:20:15.143 "zone_management": false, 00:20:15.143 "zone_append": false, 00:20:15.143 "compare": false, 00:20:15.143 "compare_and_write": false, 00:20:15.143 "abort": true, 00:20:15.143 "seek_hole": false, 00:20:15.143 "seek_data": false, 00:20:15.143 "copy": true, 00:20:15.143 "nvme_iov_md": false 00:20:15.143 }, 00:20:15.143 "memory_domains": [ 00:20:15.143 { 00:20:15.143 "dma_device_id": "system", 00:20:15.143 "dma_device_type": 1 00:20:15.143 }, 00:20:15.143 { 00:20:15.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:15.143 "dma_device_type": 2 00:20:15.143 } 00:20:15.143 ], 00:20:15.143 "driver_specific": {} 00:20:15.143 } 00:20:15.143 ] 00:20:15.143 04:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:15.143 04:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:15.143 04:15:23 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:15.143 04:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:15.403 [2024-07-23 04:15:24.013185] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:15.403 [2024-07-23 04:15:24.013239] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:15.403 [2024-07-23 04:15:24.013270] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:15.403 [2024-07-23 04:15:24.015577] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:15.403 04:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:15.403 04:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:15.403 04:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:15.403 04:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:15.403 04:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:15.403 04:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:15.403 04:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:15.403 04:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:15.403 04:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:15.403 04:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:15.403 04:15:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:15.403 04:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:15.662 04:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:15.662 "name": "Existed_Raid", 00:20:15.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.662 "strip_size_kb": 0, 00:20:15.662 "state": "configuring", 00:20:15.662 "raid_level": "raid1", 00:20:15.662 "superblock": false, 00:20:15.662 "num_base_bdevs": 3, 00:20:15.662 "num_base_bdevs_discovered": 2, 00:20:15.662 "num_base_bdevs_operational": 3, 00:20:15.662 "base_bdevs_list": [ 00:20:15.662 { 00:20:15.662 "name": "BaseBdev1", 00:20:15.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.662 "is_configured": false, 00:20:15.662 "data_offset": 0, 00:20:15.662 "data_size": 0 00:20:15.662 }, 00:20:15.662 { 00:20:15.662 "name": "BaseBdev2", 00:20:15.662 "uuid": "53968e50-f447-4e96-a540-9114363d6db3", 00:20:15.662 "is_configured": true, 00:20:15.662 "data_offset": 0, 00:20:15.662 "data_size": 65536 00:20:15.662 }, 00:20:15.662 { 00:20:15.662 "name": "BaseBdev3", 00:20:15.662 "uuid": "34155815-9701-4f46-8f45-be1ace1d7b0a", 00:20:15.662 "is_configured": true, 00:20:15.662 "data_offset": 0, 00:20:15.662 "data_size": 65536 00:20:15.662 } 00:20:15.662 ] 00:20:15.662 }' 00:20:15.662 04:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:15.662 04:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.231 04:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:16.231 [2024-07-23 04:15:24.975761] 
bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:16.231 04:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:16.231 04:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:16.231 04:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:16.231 04:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:16.231 04:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:16.231 04:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:16.231 04:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:16.231 04:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:16.231 04:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:16.231 04:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:16.231 04:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:16.231 04:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.490 04:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:16.490 "name": "Existed_Raid", 00:20:16.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:16.490 "strip_size_kb": 0, 00:20:16.490 "state": "configuring", 00:20:16.490 "raid_level": "raid1", 00:20:16.490 "superblock": false, 00:20:16.490 "num_base_bdevs": 3, 00:20:16.490 "num_base_bdevs_discovered": 1, 00:20:16.490 
"num_base_bdevs_operational": 3, 00:20:16.490 "base_bdevs_list": [ 00:20:16.490 { 00:20:16.490 "name": "BaseBdev1", 00:20:16.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:16.490 "is_configured": false, 00:20:16.490 "data_offset": 0, 00:20:16.490 "data_size": 0 00:20:16.490 }, 00:20:16.490 { 00:20:16.490 "name": null, 00:20:16.490 "uuid": "53968e50-f447-4e96-a540-9114363d6db3", 00:20:16.490 "is_configured": false, 00:20:16.490 "data_offset": 0, 00:20:16.490 "data_size": 65536 00:20:16.490 }, 00:20:16.490 { 00:20:16.490 "name": "BaseBdev3", 00:20:16.490 "uuid": "34155815-9701-4f46-8f45-be1ace1d7b0a", 00:20:16.490 "is_configured": true, 00:20:16.490 "data_offset": 0, 00:20:16.490 "data_size": 65536 00:20:16.490 } 00:20:16.490 ] 00:20:16.490 }' 00:20:16.490 04:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:16.490 04:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.058 04:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:17.058 04:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:17.317 04:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:20:17.317 04:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:17.575 [2024-07-23 04:15:26.279948] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:17.575 BaseBdev1 00:20:17.575 04:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:20:17.575 04:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local 
bdev_name=BaseBdev1 00:20:17.575 04:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:17.575 04:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:17.575 04:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:17.575 04:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:17.575 04:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:17.833 04:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:18.091 [ 00:20:18.091 { 00:20:18.091 "name": "BaseBdev1", 00:20:18.091 "aliases": [ 00:20:18.091 "75dacf5f-9f47-4154-a933-d00d37698166" 00:20:18.091 ], 00:20:18.091 "product_name": "Malloc disk", 00:20:18.091 "block_size": 512, 00:20:18.091 "num_blocks": 65536, 00:20:18.091 "uuid": "75dacf5f-9f47-4154-a933-d00d37698166", 00:20:18.091 "assigned_rate_limits": { 00:20:18.091 "rw_ios_per_sec": 0, 00:20:18.091 "rw_mbytes_per_sec": 0, 00:20:18.091 "r_mbytes_per_sec": 0, 00:20:18.091 "w_mbytes_per_sec": 0 00:20:18.091 }, 00:20:18.091 "claimed": true, 00:20:18.091 "claim_type": "exclusive_write", 00:20:18.091 "zoned": false, 00:20:18.091 "supported_io_types": { 00:20:18.091 "read": true, 00:20:18.091 "write": true, 00:20:18.091 "unmap": true, 00:20:18.091 "flush": true, 00:20:18.091 "reset": true, 00:20:18.091 "nvme_admin": false, 00:20:18.091 "nvme_io": false, 00:20:18.091 "nvme_io_md": false, 00:20:18.091 "write_zeroes": true, 00:20:18.091 "zcopy": true, 00:20:18.091 "get_zone_info": false, 00:20:18.091 "zone_management": false, 00:20:18.091 "zone_append": false, 00:20:18.091 "compare": false, 
00:20:18.091 "compare_and_write": false, 00:20:18.091 "abort": true, 00:20:18.091 "seek_hole": false, 00:20:18.091 "seek_data": false, 00:20:18.091 "copy": true, 00:20:18.091 "nvme_iov_md": false 00:20:18.091 }, 00:20:18.091 "memory_domains": [ 00:20:18.091 { 00:20:18.091 "dma_device_id": "system", 00:20:18.091 "dma_device_type": 1 00:20:18.091 }, 00:20:18.091 { 00:20:18.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:18.091 "dma_device_type": 2 00:20:18.091 } 00:20:18.091 ], 00:20:18.091 "driver_specific": {} 00:20:18.091 } 00:20:18.091 ] 00:20:18.091 04:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:18.091 04:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:18.091 04:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:18.091 04:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:18.091 04:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:18.091 04:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:18.091 04:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:18.091 04:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:18.092 04:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:18.092 04:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:18.092 04:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:18.092 04:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:20:18.092 04:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:18.350 04:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:18.350 "name": "Existed_Raid", 00:20:18.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:18.350 "strip_size_kb": 0, 00:20:18.350 "state": "configuring", 00:20:18.350 "raid_level": "raid1", 00:20:18.350 "superblock": false, 00:20:18.350 "num_base_bdevs": 3, 00:20:18.350 "num_base_bdevs_discovered": 2, 00:20:18.350 "num_base_bdevs_operational": 3, 00:20:18.350 "base_bdevs_list": [ 00:20:18.350 { 00:20:18.350 "name": "BaseBdev1", 00:20:18.350 "uuid": "75dacf5f-9f47-4154-a933-d00d37698166", 00:20:18.350 "is_configured": true, 00:20:18.350 "data_offset": 0, 00:20:18.350 "data_size": 65536 00:20:18.350 }, 00:20:18.350 { 00:20:18.350 "name": null, 00:20:18.350 "uuid": "53968e50-f447-4e96-a540-9114363d6db3", 00:20:18.350 "is_configured": false, 00:20:18.350 "data_offset": 0, 00:20:18.350 "data_size": 65536 00:20:18.350 }, 00:20:18.350 { 00:20:18.350 "name": "BaseBdev3", 00:20:18.350 "uuid": "34155815-9701-4f46-8f45-be1ace1d7b0a", 00:20:18.350 "is_configured": true, 00:20:18.350 "data_offset": 0, 00:20:18.350 "data_size": 65536 00:20:18.350 } 00:20:18.350 ] 00:20:18.350 }' 00:20:18.350 04:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:18.350 04:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.917 04:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:18.917 04:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:19.175 04:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e 
]] 00:20:19.175 04:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:20:19.175 [2024-07-23 04:15:27.948544] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:19.432 04:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:19.432 04:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:19.432 04:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:19.432 04:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:19.432 04:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:19.432 04:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:19.432 04:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:19.432 04:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:19.432 04:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:19.432 04:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:19.432 04:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:19.433 04:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:19.433 04:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:19.433 "name": "Existed_Raid", 00:20:19.433 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:20:19.433 "strip_size_kb": 0, 00:20:19.433 "state": "configuring", 00:20:19.433 "raid_level": "raid1", 00:20:19.433 "superblock": false, 00:20:19.433 "num_base_bdevs": 3, 00:20:19.433 "num_base_bdevs_discovered": 1, 00:20:19.433 "num_base_bdevs_operational": 3, 00:20:19.433 "base_bdevs_list": [ 00:20:19.433 { 00:20:19.433 "name": "BaseBdev1", 00:20:19.433 "uuid": "75dacf5f-9f47-4154-a933-d00d37698166", 00:20:19.433 "is_configured": true, 00:20:19.433 "data_offset": 0, 00:20:19.433 "data_size": 65536 00:20:19.433 }, 00:20:19.433 { 00:20:19.433 "name": null, 00:20:19.433 "uuid": "53968e50-f447-4e96-a540-9114363d6db3", 00:20:19.433 "is_configured": false, 00:20:19.433 "data_offset": 0, 00:20:19.433 "data_size": 65536 00:20:19.433 }, 00:20:19.433 { 00:20:19.433 "name": null, 00:20:19.433 "uuid": "34155815-9701-4f46-8f45-be1ace1d7b0a", 00:20:19.433 "is_configured": false, 00:20:19.433 "data_offset": 0, 00:20:19.433 "data_size": 65536 00:20:19.433 } 00:20:19.433 ] 00:20:19.433 }' 00:20:19.433 04:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:19.433 04:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:19.998 04:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:19.998 04:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:20.256 04:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:20:20.256 04:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:20.513 [2024-07-23 04:15:29.159845] bdev_raid.c:3288:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:20:20.513 04:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:20.513 04:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:20.513 04:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:20.513 04:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:20.513 04:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:20.513 04:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:20.513 04:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:20.513 04:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:20.513 04:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:20.513 04:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:20.513 04:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:20.513 04:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:20.771 04:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:20.771 "name": "Existed_Raid", 00:20:20.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:20.771 "strip_size_kb": 0, 00:20:20.771 "state": "configuring", 00:20:20.771 "raid_level": "raid1", 00:20:20.771 "superblock": false, 00:20:20.771 "num_base_bdevs": 3, 00:20:20.771 "num_base_bdevs_discovered": 2, 00:20:20.771 "num_base_bdevs_operational": 3, 
00:20:20.771 "base_bdevs_list": [ 00:20:20.771 { 00:20:20.771 "name": "BaseBdev1", 00:20:20.771 "uuid": "75dacf5f-9f47-4154-a933-d00d37698166", 00:20:20.771 "is_configured": true, 00:20:20.771 "data_offset": 0, 00:20:20.771 "data_size": 65536 00:20:20.771 }, 00:20:20.771 { 00:20:20.771 "name": null, 00:20:20.771 "uuid": "53968e50-f447-4e96-a540-9114363d6db3", 00:20:20.771 "is_configured": false, 00:20:20.771 "data_offset": 0, 00:20:20.771 "data_size": 65536 00:20:20.771 }, 00:20:20.771 { 00:20:20.771 "name": "BaseBdev3", 00:20:20.771 "uuid": "34155815-9701-4f46-8f45-be1ace1d7b0a", 00:20:20.771 "is_configured": true, 00:20:20.771 "data_offset": 0, 00:20:20.771 "data_size": 65536 00:20:20.771 } 00:20:20.771 ] 00:20:20.771 }' 00:20:20.771 04:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:20.771 04:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.336 04:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:21.336 04:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:21.593 04:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:20:21.593 04:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:21.851 [2024-07-23 04:15:30.427282] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:21.851 04:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:21.851 04:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:21.851 04:15:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:21.851 04:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:21.851 04:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:21.851 04:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:21.851 04:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:21.851 04:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:21.851 04:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:21.851 04:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:21.851 04:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:21.851 04:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:22.109 04:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:22.109 "name": "Existed_Raid", 00:20:22.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.109 "strip_size_kb": 0, 00:20:22.109 "state": "configuring", 00:20:22.109 "raid_level": "raid1", 00:20:22.109 "superblock": false, 00:20:22.109 "num_base_bdevs": 3, 00:20:22.109 "num_base_bdevs_discovered": 1, 00:20:22.109 "num_base_bdevs_operational": 3, 00:20:22.109 "base_bdevs_list": [ 00:20:22.109 { 00:20:22.109 "name": null, 00:20:22.109 "uuid": "75dacf5f-9f47-4154-a933-d00d37698166", 00:20:22.109 "is_configured": false, 00:20:22.109 "data_offset": 0, 00:20:22.109 "data_size": 65536 00:20:22.109 }, 00:20:22.109 { 00:20:22.109 "name": null, 00:20:22.109 "uuid": 
"53968e50-f447-4e96-a540-9114363d6db3", 00:20:22.109 "is_configured": false, 00:20:22.109 "data_offset": 0, 00:20:22.109 "data_size": 65536 00:20:22.109 }, 00:20:22.109 { 00:20:22.109 "name": "BaseBdev3", 00:20:22.109 "uuid": "34155815-9701-4f46-8f45-be1ace1d7b0a", 00:20:22.109 "is_configured": true, 00:20:22.109 "data_offset": 0, 00:20:22.109 "data_size": 65536 00:20:22.109 } 00:20:22.109 ] 00:20:22.109 }' 00:20:22.109 04:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:22.109 04:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.674 04:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:22.674 04:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:22.932 04:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:20:22.932 04:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:23.191 [2024-07-23 04:15:31.756337] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:23.191 04:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:23.191 04:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:23.191 04:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:23.191 04:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:23.191 04:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local 
strip_size=0 00:20:23.191 04:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:23.191 04:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:23.191 04:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:23.191 04:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:23.191 04:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:23.191 04:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:23.191 04:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:23.449 04:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:23.449 "name": "Existed_Raid", 00:20:23.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.449 "strip_size_kb": 0, 00:20:23.449 "state": "configuring", 00:20:23.449 "raid_level": "raid1", 00:20:23.449 "superblock": false, 00:20:23.449 "num_base_bdevs": 3, 00:20:23.449 "num_base_bdevs_discovered": 2, 00:20:23.449 "num_base_bdevs_operational": 3, 00:20:23.449 "base_bdevs_list": [ 00:20:23.449 { 00:20:23.449 "name": null, 00:20:23.449 "uuid": "75dacf5f-9f47-4154-a933-d00d37698166", 00:20:23.449 "is_configured": false, 00:20:23.449 "data_offset": 0, 00:20:23.449 "data_size": 65536 00:20:23.449 }, 00:20:23.449 { 00:20:23.449 "name": "BaseBdev2", 00:20:23.449 "uuid": "53968e50-f447-4e96-a540-9114363d6db3", 00:20:23.449 "is_configured": true, 00:20:23.449 "data_offset": 0, 00:20:23.449 "data_size": 65536 00:20:23.449 }, 00:20:23.449 { 00:20:23.449 "name": "BaseBdev3", 00:20:23.449 "uuid": "34155815-9701-4f46-8f45-be1ace1d7b0a", 00:20:23.449 "is_configured": true, 
00:20:23.449 "data_offset": 0, 00:20:23.449 "data_size": 65536 00:20:23.449 } 00:20:23.449 ] 00:20:23.449 }' 00:20:23.449 04:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:23.449 04:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.016 04:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:24.016 04:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:24.016 04:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:20:24.016 04:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:24.016 04:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:24.274 04:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 75dacf5f-9f47-4154-a933-d00d37698166 00:20:24.532 [2024-07-23 04:15:33.245104] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:24.532 [2024-07-23 04:15:33.245173] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041780 00:20:24.532 [2024-07-23 04:15:33.245187] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:24.532 [2024-07-23 04:15:33.245534] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010980 00:20:24.532 [2024-07-23 04:15:33.245743] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041780 00:20:24.532 [2024-07-23 
04:15:33.245761] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000041780 00:20:24.532 [2024-07-23 04:15:33.246068] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:24.532 NewBaseBdev 00:20:24.532 04:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:20:24.532 04:15:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:20:24.532 04:15:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:24.532 04:15:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:24.532 04:15:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:24.532 04:15:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:24.532 04:15:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:24.789 04:15:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:25.047 [ 00:20:25.047 { 00:20:25.047 "name": "NewBaseBdev", 00:20:25.047 "aliases": [ 00:20:25.047 "75dacf5f-9f47-4154-a933-d00d37698166" 00:20:25.047 ], 00:20:25.047 "product_name": "Malloc disk", 00:20:25.047 "block_size": 512, 00:20:25.047 "num_blocks": 65536, 00:20:25.047 "uuid": "75dacf5f-9f47-4154-a933-d00d37698166", 00:20:25.047 "assigned_rate_limits": { 00:20:25.047 "rw_ios_per_sec": 0, 00:20:25.047 "rw_mbytes_per_sec": 0, 00:20:25.047 "r_mbytes_per_sec": 0, 00:20:25.047 "w_mbytes_per_sec": 0 00:20:25.047 }, 00:20:25.047 "claimed": true, 00:20:25.047 "claim_type": "exclusive_write", 00:20:25.047 
"zoned": false, 00:20:25.047 "supported_io_types": { 00:20:25.047 "read": true, 00:20:25.047 "write": true, 00:20:25.047 "unmap": true, 00:20:25.047 "flush": true, 00:20:25.047 "reset": true, 00:20:25.047 "nvme_admin": false, 00:20:25.047 "nvme_io": false, 00:20:25.047 "nvme_io_md": false, 00:20:25.047 "write_zeroes": true, 00:20:25.047 "zcopy": true, 00:20:25.047 "get_zone_info": false, 00:20:25.047 "zone_management": false, 00:20:25.047 "zone_append": false, 00:20:25.047 "compare": false, 00:20:25.047 "compare_and_write": false, 00:20:25.047 "abort": true, 00:20:25.047 "seek_hole": false, 00:20:25.047 "seek_data": false, 00:20:25.047 "copy": true, 00:20:25.047 "nvme_iov_md": false 00:20:25.047 }, 00:20:25.047 "memory_domains": [ 00:20:25.047 { 00:20:25.047 "dma_device_id": "system", 00:20:25.048 "dma_device_type": 1 00:20:25.048 }, 00:20:25.048 { 00:20:25.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:25.048 "dma_device_type": 2 00:20:25.048 } 00:20:25.048 ], 00:20:25.048 "driver_specific": {} 00:20:25.048 } 00:20:25.048 ] 00:20:25.048 04:15:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:25.048 04:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:20:25.048 04:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:25.048 04:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:25.048 04:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:25.048 04:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:25.048 04:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:25.048 04:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:25.048 04:15:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:25.048 04:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:25.048 04:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:25.048 04:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:25.048 04:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:25.306 04:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:25.306 "name": "Existed_Raid", 00:20:25.306 "uuid": "8c20882e-ff15-4cc4-8b49-bc185cea90fd", 00:20:25.306 "strip_size_kb": 0, 00:20:25.306 "state": "online", 00:20:25.306 "raid_level": "raid1", 00:20:25.306 "superblock": false, 00:20:25.306 "num_base_bdevs": 3, 00:20:25.306 "num_base_bdevs_discovered": 3, 00:20:25.306 "num_base_bdevs_operational": 3, 00:20:25.306 "base_bdevs_list": [ 00:20:25.306 { 00:20:25.306 "name": "NewBaseBdev", 00:20:25.306 "uuid": "75dacf5f-9f47-4154-a933-d00d37698166", 00:20:25.306 "is_configured": true, 00:20:25.306 "data_offset": 0, 00:20:25.306 "data_size": 65536 00:20:25.306 }, 00:20:25.306 { 00:20:25.306 "name": "BaseBdev2", 00:20:25.306 "uuid": "53968e50-f447-4e96-a540-9114363d6db3", 00:20:25.306 "is_configured": true, 00:20:25.306 "data_offset": 0, 00:20:25.306 "data_size": 65536 00:20:25.306 }, 00:20:25.306 { 00:20:25.306 "name": "BaseBdev3", 00:20:25.306 "uuid": "34155815-9701-4f46-8f45-be1ace1d7b0a", 00:20:25.306 "is_configured": true, 00:20:25.306 "data_offset": 0, 00:20:25.306 "data_size": 65536 00:20:25.306 } 00:20:25.306 ] 00:20:25.306 }' 00:20:25.306 04:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:25.306 04:15:33 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:20:25.872 04:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:20:25.872 04:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:25.872 04:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:25.872 04:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:25.872 04:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:25.872 04:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:25.872 04:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:25.872 04:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:26.131 [2024-07-23 04:15:34.697516] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:26.131 04:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:26.131 "name": "Existed_Raid", 00:20:26.131 "aliases": [ 00:20:26.131 "8c20882e-ff15-4cc4-8b49-bc185cea90fd" 00:20:26.131 ], 00:20:26.131 "product_name": "Raid Volume", 00:20:26.131 "block_size": 512, 00:20:26.131 "num_blocks": 65536, 00:20:26.131 "uuid": "8c20882e-ff15-4cc4-8b49-bc185cea90fd", 00:20:26.131 "assigned_rate_limits": { 00:20:26.131 "rw_ios_per_sec": 0, 00:20:26.131 "rw_mbytes_per_sec": 0, 00:20:26.131 "r_mbytes_per_sec": 0, 00:20:26.131 "w_mbytes_per_sec": 0 00:20:26.131 }, 00:20:26.131 "claimed": false, 00:20:26.131 "zoned": false, 00:20:26.131 "supported_io_types": { 00:20:26.131 "read": true, 00:20:26.131 "write": true, 00:20:26.131 "unmap": false, 00:20:26.131 "flush": false, 00:20:26.131 "reset": true, 00:20:26.131 
"nvme_admin": false, 00:20:26.131 "nvme_io": false, 00:20:26.131 "nvme_io_md": false, 00:20:26.131 "write_zeroes": true, 00:20:26.131 "zcopy": false, 00:20:26.131 "get_zone_info": false, 00:20:26.131 "zone_management": false, 00:20:26.131 "zone_append": false, 00:20:26.131 "compare": false, 00:20:26.131 "compare_and_write": false, 00:20:26.131 "abort": false, 00:20:26.131 "seek_hole": false, 00:20:26.131 "seek_data": false, 00:20:26.131 "copy": false, 00:20:26.131 "nvme_iov_md": false 00:20:26.131 }, 00:20:26.131 "memory_domains": [ 00:20:26.131 { 00:20:26.131 "dma_device_id": "system", 00:20:26.131 "dma_device_type": 1 00:20:26.131 }, 00:20:26.131 { 00:20:26.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:26.131 "dma_device_type": 2 00:20:26.131 }, 00:20:26.131 { 00:20:26.131 "dma_device_id": "system", 00:20:26.131 "dma_device_type": 1 00:20:26.131 }, 00:20:26.131 { 00:20:26.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:26.131 "dma_device_type": 2 00:20:26.131 }, 00:20:26.131 { 00:20:26.131 "dma_device_id": "system", 00:20:26.131 "dma_device_type": 1 00:20:26.131 }, 00:20:26.131 { 00:20:26.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:26.131 "dma_device_type": 2 00:20:26.131 } 00:20:26.131 ], 00:20:26.131 "driver_specific": { 00:20:26.131 "raid": { 00:20:26.131 "uuid": "8c20882e-ff15-4cc4-8b49-bc185cea90fd", 00:20:26.131 "strip_size_kb": 0, 00:20:26.131 "state": "online", 00:20:26.131 "raid_level": "raid1", 00:20:26.131 "superblock": false, 00:20:26.131 "num_base_bdevs": 3, 00:20:26.131 "num_base_bdevs_discovered": 3, 00:20:26.131 "num_base_bdevs_operational": 3, 00:20:26.131 "base_bdevs_list": [ 00:20:26.131 { 00:20:26.131 "name": "NewBaseBdev", 00:20:26.131 "uuid": "75dacf5f-9f47-4154-a933-d00d37698166", 00:20:26.131 "is_configured": true, 00:20:26.131 "data_offset": 0, 00:20:26.131 "data_size": 65536 00:20:26.131 }, 00:20:26.131 { 00:20:26.131 "name": "BaseBdev2", 00:20:26.131 "uuid": "53968e50-f447-4e96-a540-9114363d6db3", 00:20:26.131 
"is_configured": true, 00:20:26.131 "data_offset": 0, 00:20:26.131 "data_size": 65536 00:20:26.131 }, 00:20:26.131 { 00:20:26.131 "name": "BaseBdev3", 00:20:26.131 "uuid": "34155815-9701-4f46-8f45-be1ace1d7b0a", 00:20:26.131 "is_configured": true, 00:20:26.131 "data_offset": 0, 00:20:26.131 "data_size": 65536 00:20:26.131 } 00:20:26.131 ] 00:20:26.131 } 00:20:26.131 } 00:20:26.131 }' 00:20:26.131 04:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:26.131 04:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:20:26.131 BaseBdev2 00:20:26.131 BaseBdev3' 00:20:26.131 04:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:26.131 04:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:20:26.131 04:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:26.389 04:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:26.389 "name": "NewBaseBdev", 00:20:26.389 "aliases": [ 00:20:26.389 "75dacf5f-9f47-4154-a933-d00d37698166" 00:20:26.389 ], 00:20:26.389 "product_name": "Malloc disk", 00:20:26.389 "block_size": 512, 00:20:26.389 "num_blocks": 65536, 00:20:26.389 "uuid": "75dacf5f-9f47-4154-a933-d00d37698166", 00:20:26.389 "assigned_rate_limits": { 00:20:26.389 "rw_ios_per_sec": 0, 00:20:26.389 "rw_mbytes_per_sec": 0, 00:20:26.389 "r_mbytes_per_sec": 0, 00:20:26.389 "w_mbytes_per_sec": 0 00:20:26.389 }, 00:20:26.389 "claimed": true, 00:20:26.389 "claim_type": "exclusive_write", 00:20:26.389 "zoned": false, 00:20:26.389 "supported_io_types": { 00:20:26.389 "read": true, 00:20:26.389 "write": true, 00:20:26.389 "unmap": true, 00:20:26.389 "flush": true, 
00:20:26.389 "reset": true, 00:20:26.389 "nvme_admin": false, 00:20:26.389 "nvme_io": false, 00:20:26.389 "nvme_io_md": false, 00:20:26.389 "write_zeroes": true, 00:20:26.389 "zcopy": true, 00:20:26.389 "get_zone_info": false, 00:20:26.389 "zone_management": false, 00:20:26.389 "zone_append": false, 00:20:26.389 "compare": false, 00:20:26.390 "compare_and_write": false, 00:20:26.390 "abort": true, 00:20:26.390 "seek_hole": false, 00:20:26.390 "seek_data": false, 00:20:26.390 "copy": true, 00:20:26.390 "nvme_iov_md": false 00:20:26.390 }, 00:20:26.390 "memory_domains": [ 00:20:26.390 { 00:20:26.390 "dma_device_id": "system", 00:20:26.390 "dma_device_type": 1 00:20:26.390 }, 00:20:26.390 { 00:20:26.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:26.390 "dma_device_type": 2 00:20:26.390 } 00:20:26.390 ], 00:20:26.390 "driver_specific": {} 00:20:26.390 }' 00:20:26.390 04:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:26.390 04:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:26.390 04:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:26.390 04:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:26.390 04:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:26.390 04:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:26.390 04:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:26.390 04:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:26.648 04:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:26.648 04:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:26.648 04:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:20:26.648 04:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:26.648 04:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:26.648 04:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:26.648 04:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:26.907 04:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:26.907 "name": "BaseBdev2", 00:20:26.907 "aliases": [ 00:20:26.907 "53968e50-f447-4e96-a540-9114363d6db3" 00:20:26.907 ], 00:20:26.907 "product_name": "Malloc disk", 00:20:26.907 "block_size": 512, 00:20:26.907 "num_blocks": 65536, 00:20:26.907 "uuid": "53968e50-f447-4e96-a540-9114363d6db3", 00:20:26.907 "assigned_rate_limits": { 00:20:26.907 "rw_ios_per_sec": 0, 00:20:26.907 "rw_mbytes_per_sec": 0, 00:20:26.907 "r_mbytes_per_sec": 0, 00:20:26.907 "w_mbytes_per_sec": 0 00:20:26.907 }, 00:20:26.907 "claimed": true, 00:20:26.907 "claim_type": "exclusive_write", 00:20:26.907 "zoned": false, 00:20:26.907 "supported_io_types": { 00:20:26.907 "read": true, 00:20:26.907 "write": true, 00:20:26.907 "unmap": true, 00:20:26.907 "flush": true, 00:20:26.907 "reset": true, 00:20:26.907 "nvme_admin": false, 00:20:26.907 "nvme_io": false, 00:20:26.907 "nvme_io_md": false, 00:20:26.907 "write_zeroes": true, 00:20:26.907 "zcopy": true, 00:20:26.907 "get_zone_info": false, 00:20:26.907 "zone_management": false, 00:20:26.907 "zone_append": false, 00:20:26.907 "compare": false, 00:20:26.907 "compare_and_write": false, 00:20:26.907 "abort": true, 00:20:26.907 "seek_hole": false, 00:20:26.907 "seek_data": false, 00:20:26.907 "copy": true, 00:20:26.907 "nvme_iov_md": false 00:20:26.907 }, 00:20:26.907 "memory_domains": [ 00:20:26.907 { 00:20:26.907 "dma_device_id": 
"system", 00:20:26.907 "dma_device_type": 1 00:20:26.907 }, 00:20:26.907 { 00:20:26.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:26.907 "dma_device_type": 2 00:20:26.907 } 00:20:26.907 ], 00:20:26.907 "driver_specific": {} 00:20:26.907 }' 00:20:26.907 04:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:26.907 04:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:26.907 04:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:26.907 04:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:26.907 04:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:26.907 04:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:26.907 04:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:27.202 04:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:27.202 04:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:27.202 04:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:27.202 04:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:27.202 04:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:27.202 04:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:27.202 04:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:27.202 04:15:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:27.461 04:15:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # 
base_bdev_info='{ 00:20:27.461 "name": "BaseBdev3", 00:20:27.461 "aliases": [ 00:20:27.461 "34155815-9701-4f46-8f45-be1ace1d7b0a" 00:20:27.461 ], 00:20:27.461 "product_name": "Malloc disk", 00:20:27.461 "block_size": 512, 00:20:27.461 "num_blocks": 65536, 00:20:27.461 "uuid": "34155815-9701-4f46-8f45-be1ace1d7b0a", 00:20:27.461 "assigned_rate_limits": { 00:20:27.461 "rw_ios_per_sec": 0, 00:20:27.461 "rw_mbytes_per_sec": 0, 00:20:27.461 "r_mbytes_per_sec": 0, 00:20:27.461 "w_mbytes_per_sec": 0 00:20:27.461 }, 00:20:27.461 "claimed": true, 00:20:27.461 "claim_type": "exclusive_write", 00:20:27.461 "zoned": false, 00:20:27.461 "supported_io_types": { 00:20:27.461 "read": true, 00:20:27.461 "write": true, 00:20:27.461 "unmap": true, 00:20:27.461 "flush": true, 00:20:27.461 "reset": true, 00:20:27.461 "nvme_admin": false, 00:20:27.461 "nvme_io": false, 00:20:27.461 "nvme_io_md": false, 00:20:27.461 "write_zeroes": true, 00:20:27.461 "zcopy": true, 00:20:27.461 "get_zone_info": false, 00:20:27.461 "zone_management": false, 00:20:27.461 "zone_append": false, 00:20:27.461 "compare": false, 00:20:27.461 "compare_and_write": false, 00:20:27.461 "abort": true, 00:20:27.461 "seek_hole": false, 00:20:27.461 "seek_data": false, 00:20:27.461 "copy": true, 00:20:27.461 "nvme_iov_md": false 00:20:27.461 }, 00:20:27.461 "memory_domains": [ 00:20:27.461 { 00:20:27.461 "dma_device_id": "system", 00:20:27.461 "dma_device_type": 1 00:20:27.461 }, 00:20:27.461 { 00:20:27.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:27.461 "dma_device_type": 2 00:20:27.461 } 00:20:27.461 ], 00:20:27.461 "driver_specific": {} 00:20:27.461 }' 00:20:27.461 04:15:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:27.461 04:15:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:27.461 04:15:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:27.461 04:15:36 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:27.461 04:15:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:27.461 04:15:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:27.461 04:15:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:27.720 04:15:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:27.720 04:15:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:27.720 04:15:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:27.720 04:15:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:27.720 04:15:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:27.720 04:15:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:27.979 [2024-07-23 04:15:36.514025] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:27.979 [2024-07-23 04:15:36.514063] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:27.979 [2024-07-23 04:15:36.514161] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:27.979 [2024-07-23 04:15:36.514502] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:27.979 [2024-07-23 04:15:36.514519] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041780 name Existed_Raid, state offline 00:20:27.979 04:15:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 2680035 00:20:27.979 04:15:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 2680035 ']' 00:20:27.979 04:15:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 2680035 00:20:27.979 04:15:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:20:27.979 04:15:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:27.979 04:15:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2680035 00:20:27.979 04:15:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:27.979 04:15:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:27.979 04:15:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2680035' 00:20:27.979 killing process with pid 2680035 00:20:27.979 04:15:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 2680035 00:20:27.979 [2024-07-23 04:15:36.587761] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:27.979 04:15:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 2680035 00:20:28.238 [2024-07-23 04:15:36.921638] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:30.143 04:15:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:20:30.143 00:20:30.143 real 0m29.251s 00:20:30.143 user 0m51.110s 00:20:30.143 sys 0m4.996s 00:20:30.143 04:15:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:30.143 04:15:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.143 ************************************ 00:20:30.143 END TEST raid_state_function_test 00:20:30.143 ************************************ 00:20:30.143 04:15:38 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:20:30.143 04:15:38 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 3 true 00:20:30.144 04:15:38 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:20:30.144 04:15:38 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:30.144 04:15:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:30.144 ************************************ 00:20:30.144 START TEST raid_state_function_test_sb 00:20:30.144 ************************************ 00:20:30.144 04:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 3 true 00:20:30.144 04:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:20:30.144 04:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:20:30.144 04:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:20:30.144 04:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:20:30.144 04:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:20:30.144 04:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:30.144 04:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:20:30.144 04:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:30.144 04:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:30.144 04:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:20:30.144 04:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:30.144 04:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:30.144 04:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # 
echo BaseBdev3 00:20:30.144 04:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:30.144 04:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:30.144 04:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:30.144 04:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:20:30.144 04:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:20:30.144 04:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:20:30.144 04:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:20:30.144 04:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:20:30.144 04:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:20:30.144 04:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:20:30.144 04:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:20:30.144 04:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:20:30.144 04:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2685522 00:20:30.144 04:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2685522' 00:20:30.144 Process raid pid: 2685522 00:20:30.144 04:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:30.144 04:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2685522 
/var/tmp/spdk-raid.sock 00:20:30.144 04:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 2685522 ']' 00:20:30.144 04:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:30.144 04:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:30.144 04:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:30.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:30.144 04:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:30.144 04:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.144 [2024-07-23 04:15:38.877853] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:20:30.144 [2024-07-23 04:15:38.877970] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:30.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:30.403 EAL: Requested device 0000:3d:01.0 cannot be used 00:20:30.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:30.403 EAL: Requested device 0000:3d:01.1 cannot be used 00:20:30.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:30.403 EAL: Requested device 0000:3d:01.2 cannot be used 00:20:30.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:30.403 EAL: Requested device 0000:3d:01.3 cannot be used 00:20:30.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:30.403 EAL: Requested device 0000:3d:01.4 cannot be used 00:20:30.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:30.403 EAL: Requested device 0000:3d:01.5 cannot be used 00:20:30.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:30.403 EAL: Requested device 0000:3d:01.6 cannot be used 00:20:30.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:30.403 EAL: Requested device 0000:3d:01.7 cannot be used 00:20:30.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:30.403 EAL: Requested device 0000:3d:02.0 cannot be used 00:20:30.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:30.403 EAL: Requested device 0000:3d:02.1 cannot be used 00:20:30.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:30.403 EAL: Requested device 0000:3d:02.2 cannot be used 00:20:30.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:30.403 EAL: Requested device 0000:3d:02.3 cannot be used 00:20:30.403 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:30.403 EAL: Requested device 0000:3d:02.4 cannot be used 00:20:30.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:30.403 EAL: Requested device 0000:3d:02.5 cannot be used 00:20:30.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:30.403 EAL: Requested device 0000:3d:02.6 cannot be used 00:20:30.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:30.403 EAL: Requested device 0000:3d:02.7 cannot be used 00:20:30.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:30.403 EAL: Requested device 0000:3f:01.0 cannot be used 00:20:30.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:30.403 EAL: Requested device 0000:3f:01.1 cannot be used 00:20:30.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:30.403 EAL: Requested device 0000:3f:01.2 cannot be used 00:20:30.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:30.403 EAL: Requested device 0000:3f:01.3 cannot be used 00:20:30.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:30.403 EAL: Requested device 0000:3f:01.4 cannot be used 00:20:30.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:30.403 EAL: Requested device 0000:3f:01.5 cannot be used 00:20:30.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:30.403 EAL: Requested device 0000:3f:01.6 cannot be used 00:20:30.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:30.403 EAL: Requested device 0000:3f:01.7 cannot be used 00:20:30.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:30.403 EAL: Requested device 0000:3f:02.0 cannot be used 00:20:30.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:30.403 EAL: Requested device 0000:3f:02.1 cannot be used 00:20:30.403 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:30.403 EAL: Requested device 0000:3f:02.2 cannot be used 00:20:30.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:30.403 EAL: Requested device 0000:3f:02.3 cannot be used 00:20:30.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:30.403 EAL: Requested device 0000:3f:02.4 cannot be used 00:20:30.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:30.403 EAL: Requested device 0000:3f:02.5 cannot be used 00:20:30.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:30.403 EAL: Requested device 0000:3f:02.6 cannot be used 00:20:30.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:30.403 EAL: Requested device 0000:3f:02.7 cannot be used 00:20:30.403 [2024-07-23 04:15:39.105899] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.662 [2024-07-23 04:15:39.390397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.230 [2024-07-23 04:15:39.714017] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:31.230 [2024-07-23 04:15:39.714056] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:31.230 04:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:31.230 04:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:20:31.230 04:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:31.489 [2024-07-23 04:15:40.096995] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:31.489 [2024-07-23 04:15:40.097054] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 
doesn't exist now 00:20:31.489 [2024-07-23 04:15:40.097069] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:31.489 [2024-07-23 04:15:40.097086] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:31.489 [2024-07-23 04:15:40.097098] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:31.489 [2024-07-23 04:15:40.097114] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:31.489 04:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:31.489 04:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:31.489 04:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:31.489 04:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:31.489 04:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:31.489 04:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:31.489 04:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:31.489 04:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:31.489 04:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:31.489 04:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:31.489 04:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:31.489 04:15:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:31.747 04:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:31.747 "name": "Existed_Raid", 00:20:31.747 "uuid": "e007322d-14c1-4a67-9e80-2731bdd56676", 00:20:31.747 "strip_size_kb": 0, 00:20:31.747 "state": "configuring", 00:20:31.747 "raid_level": "raid1", 00:20:31.747 "superblock": true, 00:20:31.747 "num_base_bdevs": 3, 00:20:31.747 "num_base_bdevs_discovered": 0, 00:20:31.747 "num_base_bdevs_operational": 3, 00:20:31.747 "base_bdevs_list": [ 00:20:31.747 { 00:20:31.747 "name": "BaseBdev1", 00:20:31.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:31.747 "is_configured": false, 00:20:31.747 "data_offset": 0, 00:20:31.747 "data_size": 0 00:20:31.747 }, 00:20:31.747 { 00:20:31.747 "name": "BaseBdev2", 00:20:31.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:31.747 "is_configured": false, 00:20:31.747 "data_offset": 0, 00:20:31.747 "data_size": 0 00:20:31.747 }, 00:20:31.747 { 00:20:31.747 "name": "BaseBdev3", 00:20:31.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:31.747 "is_configured": false, 00:20:31.747 "data_offset": 0, 00:20:31.747 "data_size": 0 00:20:31.747 } 00:20:31.747 ] 00:20:31.747 }' 00:20:31.747 04:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:31.747 04:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.313 04:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:32.571 [2024-07-23 04:15:41.115559] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:32.571 [2024-07-23 04:15:41.115605] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:20:32.571 04:15:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:32.571 [2024-07-23 04:15:41.340223] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:32.571 [2024-07-23 04:15:41.340274] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:32.571 [2024-07-23 04:15:41.340288] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:32.571 [2024-07-23 04:15:41.340308] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:32.571 [2024-07-23 04:15:41.340319] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:32.571 [2024-07-23 04:15:41.340334] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:32.829 04:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:33.086 [2024-07-23 04:15:41.622809] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:33.086 BaseBdev1 00:20:33.086 04:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:20:33.086 04:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:20:33.086 04:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:33.086 04:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:20:33.086 04:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:33.086 04:15:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:33.086 04:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:33.086 04:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:33.344 [ 00:20:33.344 { 00:20:33.344 "name": "BaseBdev1", 00:20:33.344 "aliases": [ 00:20:33.344 "d6889535-1537-4ac4-b6bb-6e69671f6c98" 00:20:33.344 ], 00:20:33.344 "product_name": "Malloc disk", 00:20:33.344 "block_size": 512, 00:20:33.344 "num_blocks": 65536, 00:20:33.344 "uuid": "d6889535-1537-4ac4-b6bb-6e69671f6c98", 00:20:33.344 "assigned_rate_limits": { 00:20:33.344 "rw_ios_per_sec": 0, 00:20:33.344 "rw_mbytes_per_sec": 0, 00:20:33.344 "r_mbytes_per_sec": 0, 00:20:33.344 "w_mbytes_per_sec": 0 00:20:33.344 }, 00:20:33.344 "claimed": true, 00:20:33.344 "claim_type": "exclusive_write", 00:20:33.344 "zoned": false, 00:20:33.344 "supported_io_types": { 00:20:33.344 "read": true, 00:20:33.344 "write": true, 00:20:33.344 "unmap": true, 00:20:33.344 "flush": true, 00:20:33.344 "reset": true, 00:20:33.344 "nvme_admin": false, 00:20:33.344 "nvme_io": false, 00:20:33.344 "nvme_io_md": false, 00:20:33.344 "write_zeroes": true, 00:20:33.344 "zcopy": true, 00:20:33.344 "get_zone_info": false, 00:20:33.344 "zone_management": false, 00:20:33.344 "zone_append": false, 00:20:33.344 "compare": false, 00:20:33.344 "compare_and_write": false, 00:20:33.344 "abort": true, 00:20:33.344 "seek_hole": false, 00:20:33.344 "seek_data": false, 00:20:33.344 "copy": true, 00:20:33.344 "nvme_iov_md": false 00:20:33.344 }, 00:20:33.344 "memory_domains": [ 00:20:33.344 { 00:20:33.344 "dma_device_id": "system", 00:20:33.345 "dma_device_type": 1 00:20:33.345 }, 
00:20:33.345 { 00:20:33.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:33.345 "dma_device_type": 2 00:20:33.345 } 00:20:33.345 ], 00:20:33.345 "driver_specific": {} 00:20:33.345 } 00:20:33.345 ] 00:20:33.345 04:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:33.345 04:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:33.345 04:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:33.345 04:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:33.345 04:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:33.345 04:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:33.345 04:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:33.345 04:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:33.345 04:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:33.345 04:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:33.345 04:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:33.345 04:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:33.345 04:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:33.602 04:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:33.602 "name": "Existed_Raid", 00:20:33.602 "uuid": 
"d205c80e-c8fc-45ad-a9e3-bdd3e7d577c7", 00:20:33.602 "strip_size_kb": 0, 00:20:33.602 "state": "configuring", 00:20:33.602 "raid_level": "raid1", 00:20:33.602 "superblock": true, 00:20:33.602 "num_base_bdevs": 3, 00:20:33.602 "num_base_bdevs_discovered": 1, 00:20:33.602 "num_base_bdevs_operational": 3, 00:20:33.602 "base_bdevs_list": [ 00:20:33.602 { 00:20:33.602 "name": "BaseBdev1", 00:20:33.602 "uuid": "d6889535-1537-4ac4-b6bb-6e69671f6c98", 00:20:33.602 "is_configured": true, 00:20:33.602 "data_offset": 2048, 00:20:33.602 "data_size": 63488 00:20:33.602 }, 00:20:33.602 { 00:20:33.602 "name": "BaseBdev2", 00:20:33.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.602 "is_configured": false, 00:20:33.602 "data_offset": 0, 00:20:33.602 "data_size": 0 00:20:33.602 }, 00:20:33.602 { 00:20:33.602 "name": "BaseBdev3", 00:20:33.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.602 "is_configured": false, 00:20:33.602 "data_offset": 0, 00:20:33.602 "data_size": 0 00:20:33.602 } 00:20:33.602 ] 00:20:33.602 }' 00:20:33.602 04:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:33.602 04:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:34.168 04:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:34.426 [2024-07-23 04:15:43.082764] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:34.426 [2024-07-23 04:15:43.082828] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:20:34.426 04:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 
00:20:34.683 [2024-07-23 04:15:43.311495] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:34.683 [2024-07-23 04:15:43.313871] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:34.683 [2024-07-23 04:15:43.313919] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:34.683 [2024-07-23 04:15:43.313934] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:34.683 [2024-07-23 04:15:43.313950] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:34.683 04:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:20:34.683 04:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:34.683 04:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:34.683 04:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:34.683 04:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:34.683 04:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:34.683 04:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:34.683 04:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:34.683 04:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:34.683 04:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:34.683 04:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:34.683 04:15:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:34.683 04:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:34.683 04:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:34.940 04:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:34.940 "name": "Existed_Raid", 00:20:34.940 "uuid": "852685e9-b796-4428-8d51-9ab1bec89837", 00:20:34.940 "strip_size_kb": 0, 00:20:34.940 "state": "configuring", 00:20:34.940 "raid_level": "raid1", 00:20:34.940 "superblock": true, 00:20:34.940 "num_base_bdevs": 3, 00:20:34.940 "num_base_bdevs_discovered": 1, 00:20:34.940 "num_base_bdevs_operational": 3, 00:20:34.940 "base_bdevs_list": [ 00:20:34.940 { 00:20:34.940 "name": "BaseBdev1", 00:20:34.940 "uuid": "d6889535-1537-4ac4-b6bb-6e69671f6c98", 00:20:34.940 "is_configured": true, 00:20:34.940 "data_offset": 2048, 00:20:34.940 "data_size": 63488 00:20:34.940 }, 00:20:34.940 { 00:20:34.940 "name": "BaseBdev2", 00:20:34.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:34.940 "is_configured": false, 00:20:34.940 "data_offset": 0, 00:20:34.940 "data_size": 0 00:20:34.940 }, 00:20:34.940 { 00:20:34.940 "name": "BaseBdev3", 00:20:34.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:34.940 "is_configured": false, 00:20:34.940 "data_offset": 0, 00:20:34.940 "data_size": 0 00:20:34.940 } 00:20:34.940 ] 00:20:34.940 }' 00:20:34.940 04:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:34.940 04:15:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:35.504 04:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:35.761 [2024-07-23 04:15:44.385972] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:35.761 BaseBdev2 00:20:35.761 04:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:20:35.761 04:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:20:35.761 04:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:35.761 04:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:20:35.761 04:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:35.761 04:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:35.761 04:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:36.018 04:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:36.276 [ 00:20:36.276 { 00:20:36.276 "name": "BaseBdev2", 00:20:36.276 "aliases": [ 00:20:36.276 "fe2a7314-f038-4ec7-9231-68bb9b4a2e9c" 00:20:36.276 ], 00:20:36.276 "product_name": "Malloc disk", 00:20:36.276 "block_size": 512, 00:20:36.276 "num_blocks": 65536, 00:20:36.276 "uuid": "fe2a7314-f038-4ec7-9231-68bb9b4a2e9c", 00:20:36.276 "assigned_rate_limits": { 00:20:36.276 "rw_ios_per_sec": 0, 00:20:36.276 "rw_mbytes_per_sec": 0, 00:20:36.276 "r_mbytes_per_sec": 0, 00:20:36.276 "w_mbytes_per_sec": 0 00:20:36.276 }, 00:20:36.276 "claimed": true, 00:20:36.276 "claim_type": "exclusive_write", 00:20:36.276 "zoned": false, 00:20:36.276 "supported_io_types": { 
00:20:36.276 "read": true, 00:20:36.276 "write": true, 00:20:36.276 "unmap": true, 00:20:36.276 "flush": true, 00:20:36.276 "reset": true, 00:20:36.276 "nvme_admin": false, 00:20:36.276 "nvme_io": false, 00:20:36.276 "nvme_io_md": false, 00:20:36.276 "write_zeroes": true, 00:20:36.276 "zcopy": true, 00:20:36.276 "get_zone_info": false, 00:20:36.276 "zone_management": false, 00:20:36.276 "zone_append": false, 00:20:36.276 "compare": false, 00:20:36.276 "compare_and_write": false, 00:20:36.276 "abort": true, 00:20:36.276 "seek_hole": false, 00:20:36.276 "seek_data": false, 00:20:36.276 "copy": true, 00:20:36.276 "nvme_iov_md": false 00:20:36.276 }, 00:20:36.276 "memory_domains": [ 00:20:36.276 { 00:20:36.276 "dma_device_id": "system", 00:20:36.276 "dma_device_type": 1 00:20:36.276 }, 00:20:36.276 { 00:20:36.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:36.276 "dma_device_type": 2 00:20:36.276 } 00:20:36.276 ], 00:20:36.276 "driver_specific": {} 00:20:36.276 } 00:20:36.276 ] 00:20:36.276 04:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:36.276 04:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:36.276 04:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:36.276 04:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:36.276 04:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:36.276 04:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:36.276 04:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:36.276 04:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:36.276 04:15:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:36.276 04:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:36.276 04:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:36.276 04:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:36.276 04:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:36.276 04:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:36.276 04:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:36.533 04:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:36.533 "name": "Existed_Raid", 00:20:36.533 "uuid": "852685e9-b796-4428-8d51-9ab1bec89837", 00:20:36.533 "strip_size_kb": 0, 00:20:36.533 "state": "configuring", 00:20:36.533 "raid_level": "raid1", 00:20:36.533 "superblock": true, 00:20:36.533 "num_base_bdevs": 3, 00:20:36.533 "num_base_bdevs_discovered": 2, 00:20:36.533 "num_base_bdevs_operational": 3, 00:20:36.533 "base_bdevs_list": [ 00:20:36.533 { 00:20:36.533 "name": "BaseBdev1", 00:20:36.533 "uuid": "d6889535-1537-4ac4-b6bb-6e69671f6c98", 00:20:36.533 "is_configured": true, 00:20:36.533 "data_offset": 2048, 00:20:36.533 "data_size": 63488 00:20:36.533 }, 00:20:36.533 { 00:20:36.533 "name": "BaseBdev2", 00:20:36.533 "uuid": "fe2a7314-f038-4ec7-9231-68bb9b4a2e9c", 00:20:36.533 "is_configured": true, 00:20:36.533 "data_offset": 2048, 00:20:36.533 "data_size": 63488 00:20:36.533 }, 00:20:36.533 { 00:20:36.533 "name": "BaseBdev3", 00:20:36.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.533 "is_configured": false, 00:20:36.533 "data_offset": 0, 00:20:36.533 
"data_size": 0 00:20:36.533 } 00:20:36.533 ] 00:20:36.533 }' 00:20:36.533 04:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:36.533 04:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:37.098 04:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:37.355 [2024-07-23 04:15:45.902279] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:37.355 [2024-07-23 04:15:45.902567] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:20:37.355 [2024-07-23 04:15:45.902596] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:37.355 [2024-07-23 04:15:45.902920] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:20:37.355 [2024-07-23 04:15:45.903190] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:20:37.355 [2024-07-23 04:15:45.903210] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:20:37.355 [2024-07-23 04:15:45.903405] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:37.355 BaseBdev3 00:20:37.355 04:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:20:37.355 04:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:20:37.355 04:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:37.355 04:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:20:37.355 04:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:37.355 
04:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:37.355 04:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:37.612 04:15:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:37.612 [ 00:20:37.612 { 00:20:37.612 "name": "BaseBdev3", 00:20:37.612 "aliases": [ 00:20:37.612 "8a821708-2706-492b-aa0d-0d09d4ff0487" 00:20:37.612 ], 00:20:37.612 "product_name": "Malloc disk", 00:20:37.613 "block_size": 512, 00:20:37.613 "num_blocks": 65536, 00:20:37.613 "uuid": "8a821708-2706-492b-aa0d-0d09d4ff0487", 00:20:37.613 "assigned_rate_limits": { 00:20:37.613 "rw_ios_per_sec": 0, 00:20:37.613 "rw_mbytes_per_sec": 0, 00:20:37.613 "r_mbytes_per_sec": 0, 00:20:37.613 "w_mbytes_per_sec": 0 00:20:37.613 }, 00:20:37.613 "claimed": true, 00:20:37.613 "claim_type": "exclusive_write", 00:20:37.613 "zoned": false, 00:20:37.613 "supported_io_types": { 00:20:37.613 "read": true, 00:20:37.613 "write": true, 00:20:37.613 "unmap": true, 00:20:37.613 "flush": true, 00:20:37.613 "reset": true, 00:20:37.613 "nvme_admin": false, 00:20:37.613 "nvme_io": false, 00:20:37.613 "nvme_io_md": false, 00:20:37.613 "write_zeroes": true, 00:20:37.613 "zcopy": true, 00:20:37.613 "get_zone_info": false, 00:20:37.613 "zone_management": false, 00:20:37.613 "zone_append": false, 00:20:37.613 "compare": false, 00:20:37.613 "compare_and_write": false, 00:20:37.613 "abort": true, 00:20:37.613 "seek_hole": false, 00:20:37.613 "seek_data": false, 00:20:37.613 "copy": true, 00:20:37.613 "nvme_iov_md": false 00:20:37.613 }, 00:20:37.613 "memory_domains": [ 00:20:37.613 { 00:20:37.613 "dma_device_id": "system", 00:20:37.613 "dma_device_type": 1 00:20:37.613 }, 
00:20:37.613 { 00:20:37.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:37.613 "dma_device_type": 2 00:20:37.613 } 00:20:37.613 ], 00:20:37.613 "driver_specific": {} 00:20:37.613 } 00:20:37.613 ] 00:20:37.613 04:15:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:37.613 04:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:37.613 04:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:37.613 04:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:20:37.613 04:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:37.613 04:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:37.613 04:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:37.613 04:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:37.613 04:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:37.613 04:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:37.613 04:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:37.613 04:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:37.613 04:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:37.613 04:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:37.613 04:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:20:37.870 04:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:37.870 "name": "Existed_Raid", 00:20:37.870 "uuid": "852685e9-b796-4428-8d51-9ab1bec89837", 00:20:37.870 "strip_size_kb": 0, 00:20:37.870 "state": "online", 00:20:37.870 "raid_level": "raid1", 00:20:37.870 "superblock": true, 00:20:37.870 "num_base_bdevs": 3, 00:20:37.870 "num_base_bdevs_discovered": 3, 00:20:37.870 "num_base_bdevs_operational": 3, 00:20:37.870 "base_bdevs_list": [ 00:20:37.870 { 00:20:37.870 "name": "BaseBdev1", 00:20:37.870 "uuid": "d6889535-1537-4ac4-b6bb-6e69671f6c98", 00:20:37.870 "is_configured": true, 00:20:37.870 "data_offset": 2048, 00:20:37.870 "data_size": 63488 00:20:37.870 }, 00:20:37.870 { 00:20:37.870 "name": "BaseBdev2", 00:20:37.870 "uuid": "fe2a7314-f038-4ec7-9231-68bb9b4a2e9c", 00:20:37.870 "is_configured": true, 00:20:37.870 "data_offset": 2048, 00:20:37.870 "data_size": 63488 00:20:37.870 }, 00:20:37.870 { 00:20:37.870 "name": "BaseBdev3", 00:20:37.870 "uuid": "8a821708-2706-492b-aa0d-0d09d4ff0487", 00:20:37.870 "is_configured": true, 00:20:37.870 "data_offset": 2048, 00:20:37.870 "data_size": 63488 00:20:37.870 } 00:20:37.870 ] 00:20:37.870 }' 00:20:37.870 04:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:37.870 04:15:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:38.435 04:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:20:38.435 04:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:38.435 04:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:38.435 04:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:38.435 04:15:47 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:38.435 04:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:20:38.435 04:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:38.435 04:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:38.693 [2024-07-23 04:15:47.362788] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:38.693 04:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:38.693 "name": "Existed_Raid", 00:20:38.693 "aliases": [ 00:20:38.693 "852685e9-b796-4428-8d51-9ab1bec89837" 00:20:38.693 ], 00:20:38.693 "product_name": "Raid Volume", 00:20:38.693 "block_size": 512, 00:20:38.693 "num_blocks": 63488, 00:20:38.693 "uuid": "852685e9-b796-4428-8d51-9ab1bec89837", 00:20:38.694 "assigned_rate_limits": { 00:20:38.694 "rw_ios_per_sec": 0, 00:20:38.694 "rw_mbytes_per_sec": 0, 00:20:38.694 "r_mbytes_per_sec": 0, 00:20:38.694 "w_mbytes_per_sec": 0 00:20:38.694 }, 00:20:38.694 "claimed": false, 00:20:38.694 "zoned": false, 00:20:38.694 "supported_io_types": { 00:20:38.694 "read": true, 00:20:38.694 "write": true, 00:20:38.694 "unmap": false, 00:20:38.694 "flush": false, 00:20:38.694 "reset": true, 00:20:38.694 "nvme_admin": false, 00:20:38.694 "nvme_io": false, 00:20:38.694 "nvme_io_md": false, 00:20:38.694 "write_zeroes": true, 00:20:38.694 "zcopy": false, 00:20:38.694 "get_zone_info": false, 00:20:38.694 "zone_management": false, 00:20:38.694 "zone_append": false, 00:20:38.694 "compare": false, 00:20:38.694 "compare_and_write": false, 00:20:38.694 "abort": false, 00:20:38.694 "seek_hole": false, 00:20:38.694 "seek_data": false, 00:20:38.694 "copy": false, 00:20:38.694 "nvme_iov_md": false 00:20:38.694 }, 00:20:38.694 "memory_domains": [ 00:20:38.694 { 
00:20:38.694 "dma_device_id": "system", 00:20:38.694 "dma_device_type": 1 00:20:38.694 }, 00:20:38.694 { 00:20:38.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:38.694 "dma_device_type": 2 00:20:38.694 }, 00:20:38.694 { 00:20:38.694 "dma_device_id": "system", 00:20:38.694 "dma_device_type": 1 00:20:38.694 }, 00:20:38.694 { 00:20:38.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:38.694 "dma_device_type": 2 00:20:38.694 }, 00:20:38.694 { 00:20:38.694 "dma_device_id": "system", 00:20:38.694 "dma_device_type": 1 00:20:38.694 }, 00:20:38.694 { 00:20:38.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:38.694 "dma_device_type": 2 00:20:38.694 } 00:20:38.694 ], 00:20:38.694 "driver_specific": { 00:20:38.694 "raid": { 00:20:38.694 "uuid": "852685e9-b796-4428-8d51-9ab1bec89837", 00:20:38.694 "strip_size_kb": 0, 00:20:38.694 "state": "online", 00:20:38.694 "raid_level": "raid1", 00:20:38.694 "superblock": true, 00:20:38.694 "num_base_bdevs": 3, 00:20:38.694 "num_base_bdevs_discovered": 3, 00:20:38.694 "num_base_bdevs_operational": 3, 00:20:38.694 "base_bdevs_list": [ 00:20:38.694 { 00:20:38.694 "name": "BaseBdev1", 00:20:38.694 "uuid": "d6889535-1537-4ac4-b6bb-6e69671f6c98", 00:20:38.694 "is_configured": true, 00:20:38.694 "data_offset": 2048, 00:20:38.694 "data_size": 63488 00:20:38.694 }, 00:20:38.694 { 00:20:38.694 "name": "BaseBdev2", 00:20:38.694 "uuid": "fe2a7314-f038-4ec7-9231-68bb9b4a2e9c", 00:20:38.694 "is_configured": true, 00:20:38.694 "data_offset": 2048, 00:20:38.694 "data_size": 63488 00:20:38.694 }, 00:20:38.694 { 00:20:38.694 "name": "BaseBdev3", 00:20:38.694 "uuid": "8a821708-2706-492b-aa0d-0d09d4ff0487", 00:20:38.694 "is_configured": true, 00:20:38.694 "data_offset": 2048, 00:20:38.694 "data_size": 63488 00:20:38.694 } 00:20:38.694 ] 00:20:38.694 } 00:20:38.694 } 00:20:38.694 }' 00:20:38.694 04:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == 
true).name' 00:20:38.694 04:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:20:38.694 BaseBdev2 00:20:38.694 BaseBdev3' 00:20:38.694 04:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:38.694 04:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:20:38.694 04:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:38.952 04:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:38.952 "name": "BaseBdev1", 00:20:38.952 "aliases": [ 00:20:38.952 "d6889535-1537-4ac4-b6bb-6e69671f6c98" 00:20:38.952 ], 00:20:38.952 "product_name": "Malloc disk", 00:20:38.952 "block_size": 512, 00:20:38.952 "num_blocks": 65536, 00:20:38.952 "uuid": "d6889535-1537-4ac4-b6bb-6e69671f6c98", 00:20:38.952 "assigned_rate_limits": { 00:20:38.952 "rw_ios_per_sec": 0, 00:20:38.952 "rw_mbytes_per_sec": 0, 00:20:38.952 "r_mbytes_per_sec": 0, 00:20:38.952 "w_mbytes_per_sec": 0 00:20:38.952 }, 00:20:38.952 "claimed": true, 00:20:38.952 "claim_type": "exclusive_write", 00:20:38.952 "zoned": false, 00:20:38.952 "supported_io_types": { 00:20:38.952 "read": true, 00:20:38.952 "write": true, 00:20:38.952 "unmap": true, 00:20:38.952 "flush": true, 00:20:38.952 "reset": true, 00:20:38.952 "nvme_admin": false, 00:20:38.952 "nvme_io": false, 00:20:38.952 "nvme_io_md": false, 00:20:38.952 "write_zeroes": true, 00:20:38.952 "zcopy": true, 00:20:38.952 "get_zone_info": false, 00:20:38.952 "zone_management": false, 00:20:38.952 "zone_append": false, 00:20:38.952 "compare": false, 00:20:38.952 "compare_and_write": false, 00:20:38.952 "abort": true, 00:20:38.952 "seek_hole": false, 00:20:38.952 "seek_data": false, 00:20:38.952 "copy": true, 00:20:38.952 "nvme_iov_md": false 00:20:38.952 
}, 00:20:38.952 "memory_domains": [ 00:20:38.952 { 00:20:38.952 "dma_device_id": "system", 00:20:38.952 "dma_device_type": 1 00:20:38.952 }, 00:20:38.952 { 00:20:38.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:38.952 "dma_device_type": 2 00:20:38.952 } 00:20:38.952 ], 00:20:38.952 "driver_specific": {} 00:20:38.952 }' 00:20:38.952 04:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:38.952 04:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:38.952 04:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:38.952 04:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:39.209 04:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:39.209 04:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:39.209 04:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:39.209 04:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:39.209 04:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:39.209 04:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:39.209 04:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:39.467 04:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:39.467 04:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:39.467 04:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:39.467 04:15:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:39.467 04:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:39.467 "name": "BaseBdev2", 00:20:39.467 "aliases": [ 00:20:39.467 "fe2a7314-f038-4ec7-9231-68bb9b4a2e9c" 00:20:39.467 ], 00:20:39.467 "product_name": "Malloc disk", 00:20:39.467 "block_size": 512, 00:20:39.467 "num_blocks": 65536, 00:20:39.467 "uuid": "fe2a7314-f038-4ec7-9231-68bb9b4a2e9c", 00:20:39.467 "assigned_rate_limits": { 00:20:39.468 "rw_ios_per_sec": 0, 00:20:39.468 "rw_mbytes_per_sec": 0, 00:20:39.468 "r_mbytes_per_sec": 0, 00:20:39.468 "w_mbytes_per_sec": 0 00:20:39.468 }, 00:20:39.468 "claimed": true, 00:20:39.468 "claim_type": "exclusive_write", 00:20:39.468 "zoned": false, 00:20:39.468 "supported_io_types": { 00:20:39.468 "read": true, 00:20:39.468 "write": true, 00:20:39.468 "unmap": true, 00:20:39.468 "flush": true, 00:20:39.468 "reset": true, 00:20:39.468 "nvme_admin": false, 00:20:39.468 "nvme_io": false, 00:20:39.468 "nvme_io_md": false, 00:20:39.468 "write_zeroes": true, 00:20:39.468 "zcopy": true, 00:20:39.468 "get_zone_info": false, 00:20:39.468 "zone_management": false, 00:20:39.468 "zone_append": false, 00:20:39.468 "compare": false, 00:20:39.468 "compare_and_write": false, 00:20:39.468 "abort": true, 00:20:39.468 "seek_hole": false, 00:20:39.468 "seek_data": false, 00:20:39.468 "copy": true, 00:20:39.468 "nvme_iov_md": false 00:20:39.468 }, 00:20:39.468 "memory_domains": [ 00:20:39.468 { 00:20:39.468 "dma_device_id": "system", 00:20:39.468 "dma_device_type": 1 00:20:39.468 }, 00:20:39.468 { 00:20:39.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:39.468 "dma_device_type": 2 00:20:39.468 } 00:20:39.468 ], 00:20:39.468 "driver_specific": {} 00:20:39.468 }' 00:20:39.468 04:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:39.726 04:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:39.726 04:15:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:39.726 04:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:39.726 04:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:39.726 04:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:39.726 04:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:39.726 04:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:39.726 04:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:39.726 04:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:39.985 04:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:39.985 04:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:39.985 04:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:39.985 04:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:39.985 04:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:40.243 04:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:40.243 "name": "BaseBdev3", 00:20:40.243 "aliases": [ 00:20:40.243 "8a821708-2706-492b-aa0d-0d09d4ff0487" 00:20:40.243 ], 00:20:40.243 "product_name": "Malloc disk", 00:20:40.243 "block_size": 512, 00:20:40.243 "num_blocks": 65536, 00:20:40.243 "uuid": "8a821708-2706-492b-aa0d-0d09d4ff0487", 00:20:40.243 "assigned_rate_limits": { 00:20:40.243 "rw_ios_per_sec": 0, 00:20:40.243 "rw_mbytes_per_sec": 0, 00:20:40.243 
"r_mbytes_per_sec": 0, 00:20:40.243 "w_mbytes_per_sec": 0 00:20:40.243 }, 00:20:40.243 "claimed": true, 00:20:40.243 "claim_type": "exclusive_write", 00:20:40.243 "zoned": false, 00:20:40.243 "supported_io_types": { 00:20:40.243 "read": true, 00:20:40.243 "write": true, 00:20:40.243 "unmap": true, 00:20:40.243 "flush": true, 00:20:40.243 "reset": true, 00:20:40.243 "nvme_admin": false, 00:20:40.243 "nvme_io": false, 00:20:40.243 "nvme_io_md": false, 00:20:40.243 "write_zeroes": true, 00:20:40.243 "zcopy": true, 00:20:40.243 "get_zone_info": false, 00:20:40.243 "zone_management": false, 00:20:40.243 "zone_append": false, 00:20:40.243 "compare": false, 00:20:40.243 "compare_and_write": false, 00:20:40.243 "abort": true, 00:20:40.243 "seek_hole": false, 00:20:40.243 "seek_data": false, 00:20:40.243 "copy": true, 00:20:40.243 "nvme_iov_md": false 00:20:40.243 }, 00:20:40.243 "memory_domains": [ 00:20:40.243 { 00:20:40.243 "dma_device_id": "system", 00:20:40.243 "dma_device_type": 1 00:20:40.243 }, 00:20:40.243 { 00:20:40.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:40.243 "dma_device_type": 2 00:20:40.243 } 00:20:40.243 ], 00:20:40.243 "driver_specific": {} 00:20:40.243 }' 00:20:40.243 04:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:40.243 04:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:40.243 04:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:40.243 04:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:40.243 04:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:40.243 04:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:40.243 04:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:40.243 04:15:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:40.243 04:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:40.243 04:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:40.503 04:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:40.503 04:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:40.503 04:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:40.761 [2024-07-23 04:15:49.311810] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:40.761 04:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:20:40.761 04:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:20:40.761 04:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:40.761 04:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:20:40.761 04:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:20:40.761 04:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:40.761 04:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:40.761 04:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:40.761 04:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:40.761 04:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:40.761 04:15:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:40.761 04:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:40.761 04:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:40.761 04:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:40.761 04:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:40.761 04:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.761 04:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:41.020 04:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:41.020 "name": "Existed_Raid", 00:20:41.020 "uuid": "852685e9-b796-4428-8d51-9ab1bec89837", 00:20:41.020 "strip_size_kb": 0, 00:20:41.020 "state": "online", 00:20:41.020 "raid_level": "raid1", 00:20:41.020 "superblock": true, 00:20:41.020 "num_base_bdevs": 3, 00:20:41.020 "num_base_bdevs_discovered": 2, 00:20:41.020 "num_base_bdevs_operational": 2, 00:20:41.020 "base_bdevs_list": [ 00:20:41.020 { 00:20:41.020 "name": null, 00:20:41.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:41.020 "is_configured": false, 00:20:41.020 "data_offset": 2048, 00:20:41.020 "data_size": 63488 00:20:41.020 }, 00:20:41.020 { 00:20:41.020 "name": "BaseBdev2", 00:20:41.020 "uuid": "fe2a7314-f038-4ec7-9231-68bb9b4a2e9c", 00:20:41.020 "is_configured": true, 00:20:41.020 "data_offset": 2048, 00:20:41.020 "data_size": 63488 00:20:41.020 }, 00:20:41.020 { 00:20:41.020 "name": "BaseBdev3", 00:20:41.020 "uuid": "8a821708-2706-492b-aa0d-0d09d4ff0487", 00:20:41.020 "is_configured": true, 00:20:41.020 "data_offset": 2048, 00:20:41.020 
"data_size": 63488 00:20:41.020 } 00:20:41.020 ] 00:20:41.020 }' 00:20:41.020 04:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:41.020 04:15:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:41.587 04:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:20:41.587 04:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:41.587 04:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:41.587 04:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:41.845 04:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:41.845 04:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:41.845 04:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:41.845 [2024-07-23 04:15:50.609591] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:42.102 04:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:42.102 04:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:42.102 04:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.102 04:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:42.361 04:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # 
raid_bdev=Existed_Raid 00:20:42.361 04:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:42.361 04:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:42.619 [2024-07-23 04:15:51.194870] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:42.619 [2024-07-23 04:15:51.195020] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:42.619 [2024-07-23 04:15:51.324782] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:42.619 [2024-07-23 04:15:51.324846] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:42.619 [2024-07-23 04:15:51.324866] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:20:42.619 04:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:42.619 04:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:42.619 04:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.619 04:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:20:42.878 04:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:20:42.878 04:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:20:42.878 04:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:20:42.878 04:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:20:42.878 
04:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:42.878 04:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:43.136 BaseBdev2 00:20:43.136 04:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:20:43.136 04:15:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:20:43.136 04:15:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:43.136 04:15:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:20:43.136 04:15:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:43.136 04:15:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:43.136 04:15:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:43.395 04:15:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:43.653 [ 00:20:43.653 { 00:20:43.653 "name": "BaseBdev2", 00:20:43.653 "aliases": [ 00:20:43.653 "12ffabc9-833a-4a5f-ab46-6bf5b09ea9e6" 00:20:43.653 ], 00:20:43.653 "product_name": "Malloc disk", 00:20:43.653 "block_size": 512, 00:20:43.653 "num_blocks": 65536, 00:20:43.653 "uuid": "12ffabc9-833a-4a5f-ab46-6bf5b09ea9e6", 00:20:43.653 "assigned_rate_limits": { 00:20:43.653 "rw_ios_per_sec": 0, 00:20:43.653 "rw_mbytes_per_sec": 0, 00:20:43.653 "r_mbytes_per_sec": 0, 00:20:43.653 "w_mbytes_per_sec": 0 00:20:43.653 }, 
00:20:43.653 "claimed": false, 00:20:43.653 "zoned": false, 00:20:43.653 "supported_io_types": { 00:20:43.653 "read": true, 00:20:43.653 "write": true, 00:20:43.653 "unmap": true, 00:20:43.653 "flush": true, 00:20:43.653 "reset": true, 00:20:43.653 "nvme_admin": false, 00:20:43.653 "nvme_io": false, 00:20:43.653 "nvme_io_md": false, 00:20:43.653 "write_zeroes": true, 00:20:43.653 "zcopy": true, 00:20:43.653 "get_zone_info": false, 00:20:43.653 "zone_management": false, 00:20:43.653 "zone_append": false, 00:20:43.653 "compare": false, 00:20:43.653 "compare_and_write": false, 00:20:43.653 "abort": true, 00:20:43.653 "seek_hole": false, 00:20:43.653 "seek_data": false, 00:20:43.653 "copy": true, 00:20:43.653 "nvme_iov_md": false 00:20:43.653 }, 00:20:43.653 "memory_domains": [ 00:20:43.653 { 00:20:43.653 "dma_device_id": "system", 00:20:43.653 "dma_device_type": 1 00:20:43.653 }, 00:20:43.653 { 00:20:43.653 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:43.653 "dma_device_type": 2 00:20:43.653 } 00:20:43.653 ], 00:20:43.653 "driver_specific": {} 00:20:43.653 } 00:20:43.653 ] 00:20:43.653 04:15:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:43.653 04:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:43.653 04:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:43.653 04:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:43.912 BaseBdev3 00:20:43.912 04:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:20:43.912 04:15:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:20:43.912 04:15:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local 
bdev_timeout= 00:20:43.912 04:15:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:20:43.912 04:15:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:43.912 04:15:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:43.912 04:15:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:44.170 04:15:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:44.428 [ 00:20:44.428 { 00:20:44.428 "name": "BaseBdev3", 00:20:44.428 "aliases": [ 00:20:44.428 "57a03b27-16da-453a-b935-7253b0fef68d" 00:20:44.428 ], 00:20:44.428 "product_name": "Malloc disk", 00:20:44.428 "block_size": 512, 00:20:44.428 "num_blocks": 65536, 00:20:44.428 "uuid": "57a03b27-16da-453a-b935-7253b0fef68d", 00:20:44.428 "assigned_rate_limits": { 00:20:44.428 "rw_ios_per_sec": 0, 00:20:44.428 "rw_mbytes_per_sec": 0, 00:20:44.428 "r_mbytes_per_sec": 0, 00:20:44.428 "w_mbytes_per_sec": 0 00:20:44.428 }, 00:20:44.428 "claimed": false, 00:20:44.428 "zoned": false, 00:20:44.428 "supported_io_types": { 00:20:44.428 "read": true, 00:20:44.428 "write": true, 00:20:44.428 "unmap": true, 00:20:44.428 "flush": true, 00:20:44.428 "reset": true, 00:20:44.428 "nvme_admin": false, 00:20:44.428 "nvme_io": false, 00:20:44.428 "nvme_io_md": false, 00:20:44.428 "write_zeroes": true, 00:20:44.428 "zcopy": true, 00:20:44.428 "get_zone_info": false, 00:20:44.428 "zone_management": false, 00:20:44.428 "zone_append": false, 00:20:44.428 "compare": false, 00:20:44.428 "compare_and_write": false, 00:20:44.428 "abort": true, 00:20:44.428 "seek_hole": false, 00:20:44.428 "seek_data": false, 00:20:44.428 
"copy": true, 00:20:44.428 "nvme_iov_md": false 00:20:44.428 }, 00:20:44.428 "memory_domains": [ 00:20:44.428 { 00:20:44.428 "dma_device_id": "system", 00:20:44.428 "dma_device_type": 1 00:20:44.428 }, 00:20:44.428 { 00:20:44.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:44.428 "dma_device_type": 2 00:20:44.428 } 00:20:44.428 ], 00:20:44.428 "driver_specific": {} 00:20:44.428 } 00:20:44.428 ] 00:20:44.428 04:15:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:44.428 04:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:44.428 04:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:44.428 04:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:44.428 [2024-07-23 04:15:53.207320] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:44.428 [2024-07-23 04:15:53.207374] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:44.428 [2024-07-23 04:15:53.207407] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:44.428 [2024-07-23 04:15:53.209956] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:44.693 04:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:44.693 04:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:44.693 04:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:44.693 04:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid1 00:20:44.693 04:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:44.693 04:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:44.693 04:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:44.693 04:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:44.693 04:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:44.693 04:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:44.693 04:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:44.693 04:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:44.693 04:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:44.693 "name": "Existed_Raid", 00:20:44.693 "uuid": "c97ec33e-5f16-4963-9a22-e8c96379cc68", 00:20:44.693 "strip_size_kb": 0, 00:20:44.693 "state": "configuring", 00:20:44.693 "raid_level": "raid1", 00:20:44.693 "superblock": true, 00:20:44.693 "num_base_bdevs": 3, 00:20:44.693 "num_base_bdevs_discovered": 2, 00:20:44.693 "num_base_bdevs_operational": 3, 00:20:44.693 "base_bdevs_list": [ 00:20:44.693 { 00:20:44.693 "name": "BaseBdev1", 00:20:44.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:44.693 "is_configured": false, 00:20:44.693 "data_offset": 0, 00:20:44.693 "data_size": 0 00:20:44.693 }, 00:20:44.693 { 00:20:44.693 "name": "BaseBdev2", 00:20:44.693 "uuid": "12ffabc9-833a-4a5f-ab46-6bf5b09ea9e6", 00:20:44.693 "is_configured": true, 00:20:44.693 "data_offset": 2048, 00:20:44.693 "data_size": 63488 00:20:44.693 }, 
00:20:44.693 { 00:20:44.693 "name": "BaseBdev3", 00:20:44.693 "uuid": "57a03b27-16da-453a-b935-7253b0fef68d", 00:20:44.693 "is_configured": true, 00:20:44.693 "data_offset": 2048, 00:20:44.693 "data_size": 63488 00:20:44.693 } 00:20:44.693 ] 00:20:44.693 }' 00:20:44.693 04:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:44.693 04:15:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:45.262 04:15:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:45.520 [2024-07-23 04:15:54.230016] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:45.520 04:15:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:45.520 04:15:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:45.520 04:15:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:45.520 04:15:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:45.520 04:15:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:45.520 04:15:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:45.520 04:15:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:45.520 04:15:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:45.520 04:15:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:45.520 04:15:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:45.520 04:15:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:45.520 04:15:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:45.778 04:15:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:45.778 "name": "Existed_Raid", 00:20:45.778 "uuid": "c97ec33e-5f16-4963-9a22-e8c96379cc68", 00:20:45.778 "strip_size_kb": 0, 00:20:45.778 "state": "configuring", 00:20:45.778 "raid_level": "raid1", 00:20:45.778 "superblock": true, 00:20:45.778 "num_base_bdevs": 3, 00:20:45.778 "num_base_bdevs_discovered": 1, 00:20:45.778 "num_base_bdevs_operational": 3, 00:20:45.778 "base_bdevs_list": [ 00:20:45.778 { 00:20:45.778 "name": "BaseBdev1", 00:20:45.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.778 "is_configured": false, 00:20:45.778 "data_offset": 0, 00:20:45.778 "data_size": 0 00:20:45.778 }, 00:20:45.778 { 00:20:45.778 "name": null, 00:20:45.778 "uuid": "12ffabc9-833a-4a5f-ab46-6bf5b09ea9e6", 00:20:45.778 "is_configured": false, 00:20:45.778 "data_offset": 2048, 00:20:45.779 "data_size": 63488 00:20:45.779 }, 00:20:45.779 { 00:20:45.779 "name": "BaseBdev3", 00:20:45.779 "uuid": "57a03b27-16da-453a-b935-7253b0fef68d", 00:20:45.779 "is_configured": true, 00:20:45.779 "data_offset": 2048, 00:20:45.779 "data_size": 63488 00:20:45.779 } 00:20:45.779 ] 00:20:45.779 }' 00:20:45.779 04:15:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:45.779 04:15:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.345 04:15:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.345 04:15:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:46.602 04:15:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:20:46.603 04:15:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:46.861 [2024-07-23 04:15:55.547524] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:46.861 BaseBdev1 00:20:46.861 04:15:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:20:46.861 04:15:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:20:46.861 04:15:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:46.861 04:15:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:20:46.861 04:15:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:46.861 04:15:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:46.861 04:15:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:47.120 04:15:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:47.379 [ 00:20:47.379 { 00:20:47.379 "name": "BaseBdev1", 00:20:47.379 "aliases": [ 00:20:47.379 "94cfd4f3-08cc-4775-9550-d278cacac2d0" 00:20:47.379 ], 00:20:47.379 "product_name": "Malloc disk", 00:20:47.379 "block_size": 512, 00:20:47.379 "num_blocks": 65536, 00:20:47.379 "uuid": 
"94cfd4f3-08cc-4775-9550-d278cacac2d0", 00:20:47.379 "assigned_rate_limits": { 00:20:47.379 "rw_ios_per_sec": 0, 00:20:47.379 "rw_mbytes_per_sec": 0, 00:20:47.379 "r_mbytes_per_sec": 0, 00:20:47.379 "w_mbytes_per_sec": 0 00:20:47.379 }, 00:20:47.379 "claimed": true, 00:20:47.379 "claim_type": "exclusive_write", 00:20:47.379 "zoned": false, 00:20:47.379 "supported_io_types": { 00:20:47.379 "read": true, 00:20:47.379 "write": true, 00:20:47.379 "unmap": true, 00:20:47.379 "flush": true, 00:20:47.379 "reset": true, 00:20:47.379 "nvme_admin": false, 00:20:47.379 "nvme_io": false, 00:20:47.379 "nvme_io_md": false, 00:20:47.379 "write_zeroes": true, 00:20:47.379 "zcopy": true, 00:20:47.379 "get_zone_info": false, 00:20:47.379 "zone_management": false, 00:20:47.379 "zone_append": false, 00:20:47.379 "compare": false, 00:20:47.379 "compare_and_write": false, 00:20:47.379 "abort": true, 00:20:47.379 "seek_hole": false, 00:20:47.379 "seek_data": false, 00:20:47.379 "copy": true, 00:20:47.379 "nvme_iov_md": false 00:20:47.379 }, 00:20:47.379 "memory_domains": [ 00:20:47.379 { 00:20:47.379 "dma_device_id": "system", 00:20:47.379 "dma_device_type": 1 00:20:47.379 }, 00:20:47.379 { 00:20:47.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:47.379 "dma_device_type": 2 00:20:47.379 } 00:20:47.379 ], 00:20:47.379 "driver_specific": {} 00:20:47.379 } 00:20:47.379 ] 00:20:47.379 04:15:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:47.379 04:15:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:47.379 04:15:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:47.379 04:15:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:47.379 04:15:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:47.379 
04:15:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:47.379 04:15:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:47.379 04:15:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:47.379 04:15:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:47.379 04:15:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:47.379 04:15:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:47.379 04:15:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:47.379 04:15:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:47.638 04:15:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:47.638 "name": "Existed_Raid", 00:20:47.638 "uuid": "c97ec33e-5f16-4963-9a22-e8c96379cc68", 00:20:47.638 "strip_size_kb": 0, 00:20:47.638 "state": "configuring", 00:20:47.638 "raid_level": "raid1", 00:20:47.638 "superblock": true, 00:20:47.638 "num_base_bdevs": 3, 00:20:47.638 "num_base_bdevs_discovered": 2, 00:20:47.638 "num_base_bdevs_operational": 3, 00:20:47.638 "base_bdevs_list": [ 00:20:47.638 { 00:20:47.639 "name": "BaseBdev1", 00:20:47.639 "uuid": "94cfd4f3-08cc-4775-9550-d278cacac2d0", 00:20:47.639 "is_configured": true, 00:20:47.639 "data_offset": 2048, 00:20:47.639 "data_size": 63488 00:20:47.639 }, 00:20:47.639 { 00:20:47.639 "name": null, 00:20:47.639 "uuid": "12ffabc9-833a-4a5f-ab46-6bf5b09ea9e6", 00:20:47.639 "is_configured": false, 00:20:47.639 "data_offset": 2048, 00:20:47.639 "data_size": 63488 00:20:47.639 }, 00:20:47.639 { 00:20:47.639 "name": 
"BaseBdev3", 00:20:47.639 "uuid": "57a03b27-16da-453a-b935-7253b0fef68d", 00:20:47.639 "is_configured": true, 00:20:47.639 "data_offset": 2048, 00:20:47.639 "data_size": 63488 00:20:47.639 } 00:20:47.639 ] 00:20:47.639 }' 00:20:47.639 04:15:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:47.639 04:15:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.206 04:15:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:48.206 04:15:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:48.465 04:15:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:20:48.465 04:15:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:20:48.465 [2024-07-23 04:15:57.244246] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:48.724 04:15:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:48.724 04:15:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:48.724 04:15:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:48.724 04:15:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:48.724 04:15:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:48.724 04:15:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:48.724 04:15:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:48.724 04:15:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:48.724 04:15:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:48.724 04:15:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:48.724 04:15:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:48.724 04:15:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:48.724 04:15:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:48.724 "name": "Existed_Raid", 00:20:48.724 "uuid": "c97ec33e-5f16-4963-9a22-e8c96379cc68", 00:20:48.724 "strip_size_kb": 0, 00:20:48.724 "state": "configuring", 00:20:48.724 "raid_level": "raid1", 00:20:48.724 "superblock": true, 00:20:48.724 "num_base_bdevs": 3, 00:20:48.724 "num_base_bdevs_discovered": 1, 00:20:48.724 "num_base_bdevs_operational": 3, 00:20:48.724 "base_bdevs_list": [ 00:20:48.724 { 00:20:48.724 "name": "BaseBdev1", 00:20:48.724 "uuid": "94cfd4f3-08cc-4775-9550-d278cacac2d0", 00:20:48.724 "is_configured": true, 00:20:48.724 "data_offset": 2048, 00:20:48.724 "data_size": 63488 00:20:48.724 }, 00:20:48.724 { 00:20:48.724 "name": null, 00:20:48.724 "uuid": "12ffabc9-833a-4a5f-ab46-6bf5b09ea9e6", 00:20:48.724 "is_configured": false, 00:20:48.724 "data_offset": 2048, 00:20:48.724 "data_size": 63488 00:20:48.724 }, 00:20:48.724 { 00:20:48.724 "name": null, 00:20:48.724 "uuid": "57a03b27-16da-453a-b935-7253b0fef68d", 00:20:48.724 "is_configured": false, 00:20:48.724 "data_offset": 2048, 00:20:48.724 "data_size": 63488 00:20:48.724 } 00:20:48.724 ] 00:20:48.724 }' 00:20:48.724 04:15:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:48.724 04:15:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:49.660 04:15:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:49.660 04:15:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:49.660 04:15:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:20:49.660 04:15:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:49.919 [2024-07-23 04:15:58.519700] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:49.919 04:15:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:49.919 04:15:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:49.919 04:15:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:49.919 04:15:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:49.919 04:15:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:49.919 04:15:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:49.919 04:15:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:49.919 04:15:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:49.919 04:15:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:49.919 04:15:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:49.919 04:15:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:49.919 04:15:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:50.178 04:15:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:50.179 "name": "Existed_Raid", 00:20:50.179 "uuid": "c97ec33e-5f16-4963-9a22-e8c96379cc68", 00:20:50.179 "strip_size_kb": 0, 00:20:50.179 "state": "configuring", 00:20:50.179 "raid_level": "raid1", 00:20:50.179 "superblock": true, 00:20:50.179 "num_base_bdevs": 3, 00:20:50.179 "num_base_bdevs_discovered": 2, 00:20:50.179 "num_base_bdevs_operational": 3, 00:20:50.179 "base_bdevs_list": [ 00:20:50.179 { 00:20:50.179 "name": "BaseBdev1", 00:20:50.179 "uuid": "94cfd4f3-08cc-4775-9550-d278cacac2d0", 00:20:50.179 "is_configured": true, 00:20:50.179 "data_offset": 2048, 00:20:50.179 "data_size": 63488 00:20:50.179 }, 00:20:50.179 { 00:20:50.179 "name": null, 00:20:50.179 "uuid": "12ffabc9-833a-4a5f-ab46-6bf5b09ea9e6", 00:20:50.179 "is_configured": false, 00:20:50.179 "data_offset": 2048, 00:20:50.179 "data_size": 63488 00:20:50.179 }, 00:20:50.179 { 00:20:50.179 "name": "BaseBdev3", 00:20:50.179 "uuid": "57a03b27-16da-453a-b935-7253b0fef68d", 00:20:50.179 "is_configured": true, 00:20:50.179 "data_offset": 2048, 00:20:50.179 "data_size": 63488 00:20:50.179 } 00:20:50.179 ] 00:20:50.179 }' 00:20:50.179 04:15:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:50.179 04:15:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:50.746 04:15:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:50.746 04:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:50.746 04:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:20:50.746 04:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:51.005 [2024-07-23 04:15:59.734998] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:51.265 04:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:51.265 04:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:51.265 04:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:51.265 04:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:51.265 04:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:51.265 04:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:51.265 04:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:51.265 04:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:51.265 04:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:51.265 04:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:51.265 04:15:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.265 04:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:51.524 04:16:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:51.524 "name": "Existed_Raid", 00:20:51.524 "uuid": "c97ec33e-5f16-4963-9a22-e8c96379cc68", 00:20:51.524 "strip_size_kb": 0, 00:20:51.524 "state": "configuring", 00:20:51.524 "raid_level": "raid1", 00:20:51.524 "superblock": true, 00:20:51.524 "num_base_bdevs": 3, 00:20:51.524 "num_base_bdevs_discovered": 1, 00:20:51.524 "num_base_bdevs_operational": 3, 00:20:51.524 "base_bdevs_list": [ 00:20:51.524 { 00:20:51.524 "name": null, 00:20:51.524 "uuid": "94cfd4f3-08cc-4775-9550-d278cacac2d0", 00:20:51.524 "is_configured": false, 00:20:51.524 "data_offset": 2048, 00:20:51.524 "data_size": 63488 00:20:51.524 }, 00:20:51.524 { 00:20:51.524 "name": null, 00:20:51.524 "uuid": "12ffabc9-833a-4a5f-ab46-6bf5b09ea9e6", 00:20:51.524 "is_configured": false, 00:20:51.524 "data_offset": 2048, 00:20:51.524 "data_size": 63488 00:20:51.524 }, 00:20:51.524 { 00:20:51.524 "name": "BaseBdev3", 00:20:51.524 "uuid": "57a03b27-16da-453a-b935-7253b0fef68d", 00:20:51.524 "is_configured": true, 00:20:51.524 "data_offset": 2048, 00:20:51.524 "data_size": 63488 00:20:51.524 } 00:20:51.524 ] 00:20:51.524 }' 00:20:51.524 04:16:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:51.524 04:16:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:52.092 04:16:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:52.092 04:16:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq 
'.[0].base_bdevs_list[0].is_configured' 00:20:52.352 04:16:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:20:52.352 04:16:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:52.609 [2024-07-23 04:16:01.138845] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:52.609 04:16:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:52.609 04:16:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:52.609 04:16:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:52.609 04:16:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:52.609 04:16:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:52.609 04:16:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:52.609 04:16:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:52.609 04:16:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:52.609 04:16:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:52.609 04:16:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:52.609 04:16:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:52.609 04:16:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] 
| select(.name == "Existed_Raid")' 00:20:52.609 04:16:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:52.610 "name": "Existed_Raid", 00:20:52.610 "uuid": "c97ec33e-5f16-4963-9a22-e8c96379cc68", 00:20:52.610 "strip_size_kb": 0, 00:20:52.610 "state": "configuring", 00:20:52.610 "raid_level": "raid1", 00:20:52.610 "superblock": true, 00:20:52.610 "num_base_bdevs": 3, 00:20:52.610 "num_base_bdevs_discovered": 2, 00:20:52.610 "num_base_bdevs_operational": 3, 00:20:52.610 "base_bdevs_list": [ 00:20:52.610 { 00:20:52.610 "name": null, 00:20:52.610 "uuid": "94cfd4f3-08cc-4775-9550-d278cacac2d0", 00:20:52.610 "is_configured": false, 00:20:52.610 "data_offset": 2048, 00:20:52.610 "data_size": 63488 00:20:52.610 }, 00:20:52.610 { 00:20:52.610 "name": "BaseBdev2", 00:20:52.610 "uuid": "12ffabc9-833a-4a5f-ab46-6bf5b09ea9e6", 00:20:52.610 "is_configured": true, 00:20:52.610 "data_offset": 2048, 00:20:52.610 "data_size": 63488 00:20:52.610 }, 00:20:52.610 { 00:20:52.610 "name": "BaseBdev3", 00:20:52.610 "uuid": "57a03b27-16da-453a-b935-7253b0fef68d", 00:20:52.610 "is_configured": true, 00:20:52.610 "data_offset": 2048, 00:20:52.610 "data_size": 63488 00:20:52.610 } 00:20:52.610 ] 00:20:52.610 }' 00:20:52.610 04:16:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:52.610 04:16:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:53.175 04:16:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:53.175 04:16:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:53.435 04:16:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:20:53.435 04:16:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:53.435 04:16:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:53.732 04:16:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 94cfd4f3-08cc-4775-9550-d278cacac2d0 00:20:53.991 [2024-07-23 04:16:02.644728] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:53.991 [2024-07-23 04:16:02.644995] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041780 00:20:53.991 [2024-07-23 04:16:02.645015] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:53.991 [2024-07-23 04:16:02.645349] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010980 00:20:53.991 [2024-07-23 04:16:02.645569] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041780 00:20:53.991 [2024-07-23 04:16:02.645587] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000041780 00:20:53.992 NewBaseBdev 00:20:53.992 [2024-07-23 04:16:02.645772] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:53.992 04:16:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:20:53.992 04:16:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:20:53.992 04:16:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:53.992 04:16:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:20:53.992 04:16:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z 
'' ]] 00:20:53.992 04:16:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:53.992 04:16:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:54.251 04:16:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:54.510 [ 00:20:54.510 { 00:20:54.510 "name": "NewBaseBdev", 00:20:54.510 "aliases": [ 00:20:54.510 "94cfd4f3-08cc-4775-9550-d278cacac2d0" 00:20:54.510 ], 00:20:54.510 "product_name": "Malloc disk", 00:20:54.510 "block_size": 512, 00:20:54.510 "num_blocks": 65536, 00:20:54.510 "uuid": "94cfd4f3-08cc-4775-9550-d278cacac2d0", 00:20:54.510 "assigned_rate_limits": { 00:20:54.510 "rw_ios_per_sec": 0, 00:20:54.510 "rw_mbytes_per_sec": 0, 00:20:54.510 "r_mbytes_per_sec": 0, 00:20:54.510 "w_mbytes_per_sec": 0 00:20:54.510 }, 00:20:54.510 "claimed": true, 00:20:54.510 "claim_type": "exclusive_write", 00:20:54.510 "zoned": false, 00:20:54.510 "supported_io_types": { 00:20:54.510 "read": true, 00:20:54.510 "write": true, 00:20:54.510 "unmap": true, 00:20:54.510 "flush": true, 00:20:54.510 "reset": true, 00:20:54.510 "nvme_admin": false, 00:20:54.510 "nvme_io": false, 00:20:54.510 "nvme_io_md": false, 00:20:54.510 "write_zeroes": true, 00:20:54.510 "zcopy": true, 00:20:54.510 "get_zone_info": false, 00:20:54.510 "zone_management": false, 00:20:54.510 "zone_append": false, 00:20:54.510 "compare": false, 00:20:54.510 "compare_and_write": false, 00:20:54.510 "abort": true, 00:20:54.510 "seek_hole": false, 00:20:54.510 "seek_data": false, 00:20:54.510 "copy": true, 00:20:54.510 "nvme_iov_md": false 00:20:54.510 }, 00:20:54.510 "memory_domains": [ 00:20:54.510 { 00:20:54.510 "dma_device_id": "system", 00:20:54.510 
"dma_device_type": 1 00:20:54.510 }, 00:20:54.511 { 00:20:54.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:54.511 "dma_device_type": 2 00:20:54.511 } 00:20:54.511 ], 00:20:54.511 "driver_specific": {} 00:20:54.511 } 00:20:54.511 ] 00:20:54.511 04:16:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:54.511 04:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:20:54.511 04:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:54.511 04:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:54.511 04:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:54.511 04:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:54.511 04:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:54.511 04:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:54.511 04:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:54.511 04:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:54.511 04:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:54.511 04:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:54.511 04:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:54.770 04:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:54.770 "name": 
"Existed_Raid", 00:20:54.770 "uuid": "c97ec33e-5f16-4963-9a22-e8c96379cc68", 00:20:54.770 "strip_size_kb": 0, 00:20:54.770 "state": "online", 00:20:54.770 "raid_level": "raid1", 00:20:54.770 "superblock": true, 00:20:54.770 "num_base_bdevs": 3, 00:20:54.770 "num_base_bdevs_discovered": 3, 00:20:54.770 "num_base_bdevs_operational": 3, 00:20:54.770 "base_bdevs_list": [ 00:20:54.770 { 00:20:54.770 "name": "NewBaseBdev", 00:20:54.770 "uuid": "94cfd4f3-08cc-4775-9550-d278cacac2d0", 00:20:54.770 "is_configured": true, 00:20:54.770 "data_offset": 2048, 00:20:54.770 "data_size": 63488 00:20:54.770 }, 00:20:54.770 { 00:20:54.770 "name": "BaseBdev2", 00:20:54.770 "uuid": "12ffabc9-833a-4a5f-ab46-6bf5b09ea9e6", 00:20:54.770 "is_configured": true, 00:20:54.770 "data_offset": 2048, 00:20:54.770 "data_size": 63488 00:20:54.770 }, 00:20:54.770 { 00:20:54.770 "name": "BaseBdev3", 00:20:54.770 "uuid": "57a03b27-16da-453a-b935-7253b0fef68d", 00:20:54.770 "is_configured": true, 00:20:54.770 "data_offset": 2048, 00:20:54.770 "data_size": 63488 00:20:54.770 } 00:20:54.770 ] 00:20:54.770 }' 00:20:54.770 04:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:54.770 04:16:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:55.338 04:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:20:55.338 04:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:55.338 04:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:55.338 04:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:55.338 04:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:55.338 04:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:20:55.338 
04:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:55.338 04:16:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:55.338 [2024-07-23 04:16:04.093293] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:55.338 04:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:55.338 "name": "Existed_Raid", 00:20:55.338 "aliases": [ 00:20:55.338 "c97ec33e-5f16-4963-9a22-e8c96379cc68" 00:20:55.338 ], 00:20:55.338 "product_name": "Raid Volume", 00:20:55.338 "block_size": 512, 00:20:55.338 "num_blocks": 63488, 00:20:55.338 "uuid": "c97ec33e-5f16-4963-9a22-e8c96379cc68", 00:20:55.338 "assigned_rate_limits": { 00:20:55.338 "rw_ios_per_sec": 0, 00:20:55.338 "rw_mbytes_per_sec": 0, 00:20:55.338 "r_mbytes_per_sec": 0, 00:20:55.338 "w_mbytes_per_sec": 0 00:20:55.338 }, 00:20:55.338 "claimed": false, 00:20:55.338 "zoned": false, 00:20:55.338 "supported_io_types": { 00:20:55.338 "read": true, 00:20:55.338 "write": true, 00:20:55.338 "unmap": false, 00:20:55.338 "flush": false, 00:20:55.338 "reset": true, 00:20:55.338 "nvme_admin": false, 00:20:55.338 "nvme_io": false, 00:20:55.338 "nvme_io_md": false, 00:20:55.338 "write_zeroes": true, 00:20:55.338 "zcopy": false, 00:20:55.338 "get_zone_info": false, 00:20:55.338 "zone_management": false, 00:20:55.338 "zone_append": false, 00:20:55.338 "compare": false, 00:20:55.338 "compare_and_write": false, 00:20:55.338 "abort": false, 00:20:55.338 "seek_hole": false, 00:20:55.338 "seek_data": false, 00:20:55.338 "copy": false, 00:20:55.338 "nvme_iov_md": false 00:20:55.338 }, 00:20:55.338 "memory_domains": [ 00:20:55.338 { 00:20:55.338 "dma_device_id": "system", 00:20:55.338 "dma_device_type": 1 00:20:55.338 }, 00:20:55.338 { 00:20:55.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:20:55.338 "dma_device_type": 2 00:20:55.338 }, 00:20:55.338 { 00:20:55.338 "dma_device_id": "system", 00:20:55.338 "dma_device_type": 1 00:20:55.338 }, 00:20:55.338 { 00:20:55.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:55.338 "dma_device_type": 2 00:20:55.338 }, 00:20:55.338 { 00:20:55.338 "dma_device_id": "system", 00:20:55.338 "dma_device_type": 1 00:20:55.338 }, 00:20:55.338 { 00:20:55.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:55.338 "dma_device_type": 2 00:20:55.338 } 00:20:55.338 ], 00:20:55.338 "driver_specific": { 00:20:55.338 "raid": { 00:20:55.338 "uuid": "c97ec33e-5f16-4963-9a22-e8c96379cc68", 00:20:55.338 "strip_size_kb": 0, 00:20:55.338 "state": "online", 00:20:55.338 "raid_level": "raid1", 00:20:55.338 "superblock": true, 00:20:55.338 "num_base_bdevs": 3, 00:20:55.338 "num_base_bdevs_discovered": 3, 00:20:55.338 "num_base_bdevs_operational": 3, 00:20:55.338 "base_bdevs_list": [ 00:20:55.338 { 00:20:55.338 "name": "NewBaseBdev", 00:20:55.338 "uuid": "94cfd4f3-08cc-4775-9550-d278cacac2d0", 00:20:55.338 "is_configured": true, 00:20:55.338 "data_offset": 2048, 00:20:55.338 "data_size": 63488 00:20:55.338 }, 00:20:55.338 { 00:20:55.338 "name": "BaseBdev2", 00:20:55.338 "uuid": "12ffabc9-833a-4a5f-ab46-6bf5b09ea9e6", 00:20:55.338 "is_configured": true, 00:20:55.338 "data_offset": 2048, 00:20:55.338 "data_size": 63488 00:20:55.338 }, 00:20:55.338 { 00:20:55.338 "name": "BaseBdev3", 00:20:55.338 "uuid": "57a03b27-16da-453a-b935-7253b0fef68d", 00:20:55.338 "is_configured": true, 00:20:55.338 "data_offset": 2048, 00:20:55.338 "data_size": 63488 00:20:55.338 } 00:20:55.338 ] 00:20:55.338 } 00:20:55.338 } 00:20:55.338 }' 00:20:55.338 04:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:55.597 04:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:20:55.597 BaseBdev2 
00:20:55.597 BaseBdev3' 00:20:55.597 04:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:55.597 04:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:20:55.597 04:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:55.855 04:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:55.856 "name": "NewBaseBdev", 00:20:55.856 "aliases": [ 00:20:55.856 "94cfd4f3-08cc-4775-9550-d278cacac2d0" 00:20:55.856 ], 00:20:55.856 "product_name": "Malloc disk", 00:20:55.856 "block_size": 512, 00:20:55.856 "num_blocks": 65536, 00:20:55.856 "uuid": "94cfd4f3-08cc-4775-9550-d278cacac2d0", 00:20:55.856 "assigned_rate_limits": { 00:20:55.856 "rw_ios_per_sec": 0, 00:20:55.856 "rw_mbytes_per_sec": 0, 00:20:55.856 "r_mbytes_per_sec": 0, 00:20:55.856 "w_mbytes_per_sec": 0 00:20:55.856 }, 00:20:55.856 "claimed": true, 00:20:55.856 "claim_type": "exclusive_write", 00:20:55.856 "zoned": false, 00:20:55.856 "supported_io_types": { 00:20:55.856 "read": true, 00:20:55.856 "write": true, 00:20:55.856 "unmap": true, 00:20:55.856 "flush": true, 00:20:55.856 "reset": true, 00:20:55.856 "nvme_admin": false, 00:20:55.856 "nvme_io": false, 00:20:55.856 "nvme_io_md": false, 00:20:55.856 "write_zeroes": true, 00:20:55.856 "zcopy": true, 00:20:55.856 "get_zone_info": false, 00:20:55.856 "zone_management": false, 00:20:55.856 "zone_append": false, 00:20:55.856 "compare": false, 00:20:55.856 "compare_and_write": false, 00:20:55.856 "abort": true, 00:20:55.856 "seek_hole": false, 00:20:55.856 "seek_data": false, 00:20:55.856 "copy": true, 00:20:55.856 "nvme_iov_md": false 00:20:55.856 }, 00:20:55.856 "memory_domains": [ 00:20:55.856 { 00:20:55.856 "dma_device_id": "system", 00:20:55.856 "dma_device_type": 1 00:20:55.856 }, 
00:20:55.856 { 00:20:55.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:55.856 "dma_device_type": 2 00:20:55.856 } 00:20:55.856 ], 00:20:55.856 "driver_specific": {} 00:20:55.856 }' 00:20:55.856 04:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:55.856 04:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:55.856 04:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:55.856 04:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:55.856 04:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:55.856 04:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:55.856 04:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:55.856 04:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:56.115 04:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:56.115 04:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:56.115 04:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:56.115 04:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:56.115 04:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:56.115 04:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:56.115 04:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:56.374 04:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 
00:20:56.374 "name": "BaseBdev2", 00:20:56.374 "aliases": [ 00:20:56.374 "12ffabc9-833a-4a5f-ab46-6bf5b09ea9e6" 00:20:56.374 ], 00:20:56.374 "product_name": "Malloc disk", 00:20:56.374 "block_size": 512, 00:20:56.374 "num_blocks": 65536, 00:20:56.374 "uuid": "12ffabc9-833a-4a5f-ab46-6bf5b09ea9e6", 00:20:56.374 "assigned_rate_limits": { 00:20:56.374 "rw_ios_per_sec": 0, 00:20:56.374 "rw_mbytes_per_sec": 0, 00:20:56.374 "r_mbytes_per_sec": 0, 00:20:56.374 "w_mbytes_per_sec": 0 00:20:56.374 }, 00:20:56.374 "claimed": true, 00:20:56.374 "claim_type": "exclusive_write", 00:20:56.374 "zoned": false, 00:20:56.374 "supported_io_types": { 00:20:56.374 "read": true, 00:20:56.374 "write": true, 00:20:56.374 "unmap": true, 00:20:56.374 "flush": true, 00:20:56.374 "reset": true, 00:20:56.374 "nvme_admin": false, 00:20:56.374 "nvme_io": false, 00:20:56.374 "nvme_io_md": false, 00:20:56.374 "write_zeroes": true, 00:20:56.374 "zcopy": true, 00:20:56.374 "get_zone_info": false, 00:20:56.374 "zone_management": false, 00:20:56.374 "zone_append": false, 00:20:56.374 "compare": false, 00:20:56.374 "compare_and_write": false, 00:20:56.374 "abort": true, 00:20:56.374 "seek_hole": false, 00:20:56.374 "seek_data": false, 00:20:56.374 "copy": true, 00:20:56.374 "nvme_iov_md": false 00:20:56.374 }, 00:20:56.374 "memory_domains": [ 00:20:56.374 { 00:20:56.374 "dma_device_id": "system", 00:20:56.374 "dma_device_type": 1 00:20:56.374 }, 00:20:56.374 { 00:20:56.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:56.374 "dma_device_type": 2 00:20:56.374 } 00:20:56.374 ], 00:20:56.374 "driver_specific": {} 00:20:56.374 }' 00:20:56.374 04:16:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:56.374 04:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:56.374 04:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:56.374 04:16:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:56.374 04:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:56.374 04:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:56.374 04:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:56.633 04:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:56.633 04:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:56.633 04:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:56.633 04:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:56.633 04:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:56.633 04:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:56.633 04:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:56.633 04:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:56.892 04:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:56.892 "name": "BaseBdev3", 00:20:56.892 "aliases": [ 00:20:56.892 "57a03b27-16da-453a-b935-7253b0fef68d" 00:20:56.892 ], 00:20:56.892 "product_name": "Malloc disk", 00:20:56.892 "block_size": 512, 00:20:56.892 "num_blocks": 65536, 00:20:56.892 "uuid": "57a03b27-16da-453a-b935-7253b0fef68d", 00:20:56.892 "assigned_rate_limits": { 00:20:56.892 "rw_ios_per_sec": 0, 00:20:56.892 "rw_mbytes_per_sec": 0, 00:20:56.892 "r_mbytes_per_sec": 0, 00:20:56.892 "w_mbytes_per_sec": 0 00:20:56.892 }, 00:20:56.892 "claimed": true, 00:20:56.892 "claim_type": "exclusive_write", 
00:20:56.892 "zoned": false, 00:20:56.892 "supported_io_types": { 00:20:56.892 "read": true, 00:20:56.892 "write": true, 00:20:56.892 "unmap": true, 00:20:56.892 "flush": true, 00:20:56.892 "reset": true, 00:20:56.892 "nvme_admin": false, 00:20:56.892 "nvme_io": false, 00:20:56.892 "nvme_io_md": false, 00:20:56.892 "write_zeroes": true, 00:20:56.892 "zcopy": true, 00:20:56.892 "get_zone_info": false, 00:20:56.892 "zone_management": false, 00:20:56.892 "zone_append": false, 00:20:56.892 "compare": false, 00:20:56.892 "compare_and_write": false, 00:20:56.892 "abort": true, 00:20:56.892 "seek_hole": false, 00:20:56.892 "seek_data": false, 00:20:56.892 "copy": true, 00:20:56.892 "nvme_iov_md": false 00:20:56.892 }, 00:20:56.892 "memory_domains": [ 00:20:56.892 { 00:20:56.892 "dma_device_id": "system", 00:20:56.892 "dma_device_type": 1 00:20:56.892 }, 00:20:56.892 { 00:20:56.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:56.892 "dma_device_type": 2 00:20:56.892 } 00:20:56.892 ], 00:20:56.892 "driver_specific": {} 00:20:56.892 }' 00:20:56.892 04:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:56.892 04:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:56.893 04:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:56.893 04:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:56.893 04:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:57.152 04:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:57.152 04:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:57.152 04:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:57.152 04:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 
00:20:57.152 04:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:57.152 04:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:57.152 04:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:57.152 04:16:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:57.412 [2024-07-23 04:16:06.038158] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:57.412 [2024-07-23 04:16:06.038198] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:57.412 [2024-07-23 04:16:06.038290] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:57.412 [2024-07-23 04:16:06.038632] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:57.412 [2024-07-23 04:16:06.038650] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041780 name Existed_Raid, state offline 00:20:57.412 04:16:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 2685522 00:20:57.412 04:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 2685522 ']' 00:20:57.412 04:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 2685522 00:20:57.412 04:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:20:57.412 04:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:57.412 04:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2685522 00:20:57.412 04:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:20:57.412 04:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:57.412 04:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2685522' 00:20:57.412 killing process with pid 2685522 00:20:57.412 04:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 2685522 00:20:57.412 [2024-07-23 04:16:06.115873] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:57.412 04:16:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 2685522 00:20:57.671 [2024-07-23 04:16:06.446873] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:59.578 04:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:20:59.578 00:20:59.578 real 0m29.483s 00:20:59.578 user 0m51.430s 00:20:59.578 sys 0m5.144s 00:20:59.578 04:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:59.578 04:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:59.578 ************************************ 00:20:59.578 END TEST raid_state_function_test_sb 00:20:59.578 ************************************ 00:20:59.578 04:16:08 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:20:59.578 04:16:08 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:20:59.578 04:16:08 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:20:59.578 04:16:08 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:59.578 04:16:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:59.578 ************************************ 00:20:59.578 START TEST raid_superblock_test 00:20:59.578 ************************************ 00:20:59.578 04:16:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@1123 -- # raid_superblock_test raid1 3 00:20:59.578 04:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:20:59.578 04:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:20:59.578 04:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:20:59.578 04:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:20:59.578 04:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:20:59.578 04:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:20:59.578 04:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:20:59.578 04:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:20:59.578 04:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:20:59.578 04:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:20:59.578 04:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:20:59.578 04:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:20:59.578 04:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:20:59.578 04:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:20:59.578 04:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:20:59.578 04:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:20:59.578 04:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=2691014 00:20:59.578 04:16:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@412 -- # waitforlisten 2691014 /var/tmp/spdk-raid.sock 00:20:59.578 04:16:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 2691014 ']' 00:20:59.578 04:16:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:59.578 04:16:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:59.578 04:16:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:59.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:59.578 04:16:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:59.578 04:16:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.838 [2024-07-23 04:16:08.405289] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
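The test above launches `bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid` and then blocks in `waitforlisten 2691014 /var/tmp/spdk-raid.sock` until the app is accepting RPC connections. A minimal sketch of what such a wait loop does, in Python rather than the harness's bash (the function name, timeout, and polling interval here are illustrative, not SPDK's actual `waitforlisten` helper):

```python
# Hypothetical stand-in for a waitforlisten-style helper: poll until some
# process accepts connections on a UNIX domain socket (e.g.
# /var/tmp/spdk-raid.sock), or give up after a timeout.
import socket
import time

def wait_for_listen(sock_path: str, timeout: float = 5.0,
                    interval: float = 0.1) -> bool:
    """Return True once sock_path accepts a connection, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(sock_path)
            return True          # something is listening
        except OSError:
            time.sleep(interval)  # not up yet; retry until the deadline
        finally:
            s.close()
    return False
```

The real helper additionally checks that the PID it was given (`2691014` here) is still alive, so a crashed app fails fast instead of burning the whole timeout.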
00:20:59.838 [2024-07-23 04:16:08.405400] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2691014 ] 00:20:59.838 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.838 EAL: Requested device 0000:3d:01.0 cannot be used 00:20:59.838 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.838 EAL: Requested device 0000:3d:01.1 cannot be used 00:20:59.838 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.838 EAL: Requested device 0000:3d:01.2 cannot be used 00:20:59.838 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.838 EAL: Requested device 0000:3d:01.3 cannot be used 00:20:59.838 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.838 EAL: Requested device 0000:3d:01.4 cannot be used 00:20:59.838 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.838 EAL: Requested device 0000:3d:01.5 cannot be used 00:20:59.838 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.838 EAL: Requested device 0000:3d:01.6 cannot be used 00:20:59.838 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.838 EAL: Requested device 0000:3d:01.7 cannot be used 00:20:59.838 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.838 EAL: Requested device 0000:3d:02.0 cannot be used 00:20:59.838 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.838 EAL: Requested device 0000:3d:02.1 cannot be used 00:20:59.838 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.838 EAL: Requested device 0000:3d:02.2 cannot be used 00:20:59.838 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.838 EAL: Requested device 0000:3d:02.3 cannot be used 
00:20:59.838 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.838 EAL: Requested device 0000:3d:02.4 cannot be used 00:20:59.838 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.838 EAL: Requested device 0000:3d:02.5 cannot be used 00:20:59.838 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.838 EAL: Requested device 0000:3d:02.6 cannot be used 00:20:59.838 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.838 EAL: Requested device 0000:3d:02.7 cannot be used 00:20:59.838 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.838 EAL: Requested device 0000:3f:01.0 cannot be used 00:20:59.838 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.838 EAL: Requested device 0000:3f:01.1 cannot be used 00:20:59.838 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.838 EAL: Requested device 0000:3f:01.2 cannot be used 00:20:59.838 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.838 EAL: Requested device 0000:3f:01.3 cannot be used 00:20:59.838 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.838 EAL: Requested device 0000:3f:01.4 cannot be used 00:20:59.838 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.838 EAL: Requested device 0000:3f:01.5 cannot be used 00:20:59.838 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.838 EAL: Requested device 0000:3f:01.6 cannot be used 00:20:59.838 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.838 EAL: Requested device 0000:3f:01.7 cannot be used 00:20:59.838 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.838 EAL: Requested device 0000:3f:02.0 cannot be used 00:20:59.838 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.838 EAL: Requested device 0000:3f:02.1 cannot be used 00:20:59.838 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.838 EAL: Requested device 0000:3f:02.2 cannot be used 00:20:59.838 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.838 EAL: Requested device 0000:3f:02.3 cannot be used 00:20:59.838 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.838 EAL: Requested device 0000:3f:02.4 cannot be used 00:20:59.838 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.838 EAL: Requested device 0000:3f:02.5 cannot be used 00:20:59.838 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.838 EAL: Requested device 0000:3f:02.6 cannot be used 00:20:59.838 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:59.838 EAL: Requested device 0000:3f:02.7 cannot be used 00:21:00.098 [2024-07-23 04:16:08.632656] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.357 [2024-07-23 04:16:08.914348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:00.616 [2024-07-23 04:16:09.260128] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:00.616 [2024-07-23 04:16:09.260183] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:00.875 04:16:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:00.875 04:16:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:21:00.875 04:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:21:00.875 04:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:00.875 04:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:21:00.875 04:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:21:00.875 04:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:00.875 04:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:00.875 04:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:21:00.875 04:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:00.875 04:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:21:01.134 malloc1 00:21:01.134 04:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:01.393 [2024-07-23 04:16:10.056164] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:01.393 [2024-07-23 04:16:10.056229] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:01.393 [2024-07-23 04:16:10.056261] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:21:01.393 [2024-07-23 04:16:10.056278] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:01.393 [2024-07-23 04:16:10.059072] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:01.393 [2024-07-23 04:16:10.059107] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:01.393 pt1 00:21:01.393 04:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:21:01.393 04:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:01.393 04:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:21:01.393 04:16:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:21:01.393 04:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:01.393 04:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:01.393 04:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:21:01.393 04:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:01.393 04:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:21:01.652 malloc2 00:21:01.652 04:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:01.910 [2024-07-23 04:16:10.569229] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:01.910 [2024-07-23 04:16:10.569287] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:01.910 [2024-07-23 04:16:10.569315] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:21:01.910 [2024-07-23 04:16:10.569331] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:01.910 [2024-07-23 04:16:10.572106] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:01.910 [2024-07-23 04:16:10.572157] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:01.910 pt2 00:21:01.910 04:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:21:01.910 04:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:01.910 04:16:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:21:01.910 04:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:21:01.910 04:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:21:01.910 04:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:01.910 04:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:21:01.910 04:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:01.910 04:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:21:02.168 malloc3 00:21:02.168 04:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:02.426 [2024-07-23 04:16:11.082271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:02.426 [2024-07-23 04:16:11.082333] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:02.426 [2024-07-23 04:16:11.082364] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040e80 00:21:02.426 [2024-07-23 04:16:11.082380] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:02.426 [2024-07-23 04:16:11.085117] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:02.426 [2024-07-23 04:16:11.085160] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:02.426 pt3 00:21:02.426 04:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 
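Each pass of the loop above sets `bdev_malloc=mallocN`, `bdev_pt=ptN`, and a fixed zero-padded UUID `00000000-0000-0000-0000-00000000000N`, then issues `bdev_malloc_create` and `bdev_passthru_create` over the RPC socket. The naming scheme can be sketched as (illustrative Python mirroring the bash loop, not part of the test itself):

```python
# Reproduce the base-bdev naming scheme used by the bdev_raid.sh loop:
# malloc<N>, wrapped by passthru bdev pt<N>, with a fixed zero-padded UUID.
def base_bdev_params(num_base_bdevs: int):
    """Yield (malloc_name, passthru_name, uuid) for i = 1..num_base_bdevs."""
    for i in range(1, num_base_bdevs + 1):
        yield (f"malloc{i}", f"pt{i}", f"00000000-0000-0000-0000-{i:012d}")

for malloc_name, pt_name, uuid in base_bdev_params(3):
    # In the log these values feed:
    #   rpc.py bdev_malloc_create 32 512 -b <malloc_name>
    #   rpc.py bdev_passthru_create -b <malloc_name> -p <pt_name> -u <uuid>
    print(malloc_name, pt_name, uuid)
# → malloc1 pt1 00000000-0000-0000-0000-000000000001
#   malloc2 pt2 00000000-0000-0000-0000-000000000002
#   malloc3 pt3 00000000-0000-0000-0000-000000000003
```

The fixed UUIDs are what later show up verbatim in the `base_bdevs_list` entries of the raid bdev's JSON dump.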
00:21:02.426 04:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:02.426 04:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:21:02.685 [2024-07-23 04:16:11.310932] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:02.685 [2024-07-23 04:16:11.313278] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:02.685 [2024-07-23 04:16:11.313372] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:02.685 [2024-07-23 04:16:11.313611] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041480 00:21:02.685 [2024-07-23 04:16:11.313634] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:02.685 [2024-07-23 04:16:11.313979] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:21:02.685 [2024-07-23 04:16:11.314244] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041480 00:21:02.685 [2024-07-23 04:16:11.314261] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041480 00:21:02.685 [2024-07-23 04:16:11.314459] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:02.685 04:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:02.685 04:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:02.685 04:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:02.685 04:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:02.685 04:16:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:02.685 04:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:02.685 04:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:02.685 04:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:02.685 04:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:02.685 04:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:02.685 04:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:02.685 04:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:02.944 04:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:02.944 "name": "raid_bdev1", 00:21:02.944 "uuid": "c8f0afa8-0380-482d-aace-91f1daf84b89", 00:21:02.944 "strip_size_kb": 0, 00:21:02.944 "state": "online", 00:21:02.944 "raid_level": "raid1", 00:21:02.944 "superblock": true, 00:21:02.944 "num_base_bdevs": 3, 00:21:02.944 "num_base_bdevs_discovered": 3, 00:21:02.944 "num_base_bdevs_operational": 3, 00:21:02.944 "base_bdevs_list": [ 00:21:02.944 { 00:21:02.944 "name": "pt1", 00:21:02.944 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:02.944 "is_configured": true, 00:21:02.944 "data_offset": 2048, 00:21:02.944 "data_size": 63488 00:21:02.944 }, 00:21:02.944 { 00:21:02.944 "name": "pt2", 00:21:02.944 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:02.944 "is_configured": true, 00:21:02.944 "data_offset": 2048, 00:21:02.944 "data_size": 63488 00:21:02.944 }, 00:21:02.944 { 00:21:02.944 "name": "pt3", 00:21:02.944 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:02.944 "is_configured": true, 00:21:02.944 
"data_offset": 2048, 00:21:02.944 "data_size": 63488 00:21:02.944 } 00:21:02.944 ] 00:21:02.944 }' 00:21:02.944 04:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:02.944 04:16:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.512 04:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:21:03.512 04:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:21:03.512 04:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:03.512 04:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:03.512 04:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:03.512 04:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:03.512 04:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:03.512 04:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:03.772 [2024-07-23 04:16:12.313922] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:03.772 04:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:03.772 "name": "raid_bdev1", 00:21:03.772 "aliases": [ 00:21:03.772 "c8f0afa8-0380-482d-aace-91f1daf84b89" 00:21:03.772 ], 00:21:03.772 "product_name": "Raid Volume", 00:21:03.772 "block_size": 512, 00:21:03.772 "num_blocks": 63488, 00:21:03.772 "uuid": "c8f0afa8-0380-482d-aace-91f1daf84b89", 00:21:03.772 "assigned_rate_limits": { 00:21:03.772 "rw_ios_per_sec": 0, 00:21:03.772 "rw_mbytes_per_sec": 0, 00:21:03.772 "r_mbytes_per_sec": 0, 00:21:03.772 "w_mbytes_per_sec": 0 00:21:03.772 }, 00:21:03.772 "claimed": false, 00:21:03.772 
"zoned": false, 00:21:03.772 "supported_io_types": { 00:21:03.772 "read": true, 00:21:03.772 "write": true, 00:21:03.772 "unmap": false, 00:21:03.772 "flush": false, 00:21:03.772 "reset": true, 00:21:03.772 "nvme_admin": false, 00:21:03.772 "nvme_io": false, 00:21:03.772 "nvme_io_md": false, 00:21:03.772 "write_zeroes": true, 00:21:03.772 "zcopy": false, 00:21:03.772 "get_zone_info": false, 00:21:03.772 "zone_management": false, 00:21:03.772 "zone_append": false, 00:21:03.772 "compare": false, 00:21:03.772 "compare_and_write": false, 00:21:03.772 "abort": false, 00:21:03.772 "seek_hole": false, 00:21:03.772 "seek_data": false, 00:21:03.772 "copy": false, 00:21:03.772 "nvme_iov_md": false 00:21:03.772 }, 00:21:03.772 "memory_domains": [ 00:21:03.772 { 00:21:03.772 "dma_device_id": "system", 00:21:03.772 "dma_device_type": 1 00:21:03.772 }, 00:21:03.772 { 00:21:03.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:03.772 "dma_device_type": 2 00:21:03.772 }, 00:21:03.772 { 00:21:03.772 "dma_device_id": "system", 00:21:03.772 "dma_device_type": 1 00:21:03.772 }, 00:21:03.772 { 00:21:03.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:03.772 "dma_device_type": 2 00:21:03.772 }, 00:21:03.772 { 00:21:03.772 "dma_device_id": "system", 00:21:03.772 "dma_device_type": 1 00:21:03.772 }, 00:21:03.772 { 00:21:03.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:03.772 "dma_device_type": 2 00:21:03.772 } 00:21:03.772 ], 00:21:03.772 "driver_specific": { 00:21:03.772 "raid": { 00:21:03.772 "uuid": "c8f0afa8-0380-482d-aace-91f1daf84b89", 00:21:03.772 "strip_size_kb": 0, 00:21:03.772 "state": "online", 00:21:03.772 "raid_level": "raid1", 00:21:03.772 "superblock": true, 00:21:03.772 "num_base_bdevs": 3, 00:21:03.772 "num_base_bdevs_discovered": 3, 00:21:03.772 "num_base_bdevs_operational": 3, 00:21:03.772 "base_bdevs_list": [ 00:21:03.772 { 00:21:03.772 "name": "pt1", 00:21:03.772 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:03.772 "is_configured": true, 00:21:03.772 
"data_offset": 2048, 00:21:03.772 "data_size": 63488 00:21:03.772 }, 00:21:03.772 { 00:21:03.772 "name": "pt2", 00:21:03.772 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:03.772 "is_configured": true, 00:21:03.772 "data_offset": 2048, 00:21:03.772 "data_size": 63488 00:21:03.772 }, 00:21:03.772 { 00:21:03.772 "name": "pt3", 00:21:03.772 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:03.772 "is_configured": true, 00:21:03.772 "data_offset": 2048, 00:21:03.772 "data_size": 63488 00:21:03.772 } 00:21:03.772 ] 00:21:03.772 } 00:21:03.772 } 00:21:03.772 }' 00:21:03.772 04:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:03.772 04:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:21:03.772 pt2 00:21:03.772 pt3' 00:21:03.772 04:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:03.772 04:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:03.772 04:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:03.772 04:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:03.772 "name": "pt1", 00:21:03.772 "aliases": [ 00:21:03.772 "00000000-0000-0000-0000-000000000001" 00:21:03.772 ], 00:21:03.772 "product_name": "passthru", 00:21:03.772 "block_size": 512, 00:21:03.772 "num_blocks": 65536, 00:21:03.772 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:03.772 "assigned_rate_limits": { 00:21:03.772 "rw_ios_per_sec": 0, 00:21:03.772 "rw_mbytes_per_sec": 0, 00:21:03.772 "r_mbytes_per_sec": 0, 00:21:03.772 "w_mbytes_per_sec": 0 00:21:03.772 }, 00:21:03.772 "claimed": true, 00:21:03.772 "claim_type": "exclusive_write", 00:21:03.772 "zoned": false, 00:21:03.772 
"supported_io_types": { 00:21:03.772 "read": true, 00:21:03.772 "write": true, 00:21:03.772 "unmap": true, 00:21:03.772 "flush": true, 00:21:03.772 "reset": true, 00:21:03.772 "nvme_admin": false, 00:21:03.772 "nvme_io": false, 00:21:03.772 "nvme_io_md": false, 00:21:03.772 "write_zeroes": true, 00:21:03.772 "zcopy": true, 00:21:03.772 "get_zone_info": false, 00:21:03.772 "zone_management": false, 00:21:03.772 "zone_append": false, 00:21:03.772 "compare": false, 00:21:03.772 "compare_and_write": false, 00:21:03.772 "abort": true, 00:21:03.772 "seek_hole": false, 00:21:03.772 "seek_data": false, 00:21:03.772 "copy": true, 00:21:03.772 "nvme_iov_md": false 00:21:03.772 }, 00:21:03.772 "memory_domains": [ 00:21:03.772 { 00:21:03.772 "dma_device_id": "system", 00:21:03.772 "dma_device_type": 1 00:21:03.772 }, 00:21:03.772 { 00:21:03.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:03.772 "dma_device_type": 2 00:21:03.772 } 00:21:03.772 ], 00:21:03.772 "driver_specific": { 00:21:03.772 "passthru": { 00:21:03.772 "name": "pt1", 00:21:03.772 "base_bdev_name": "malloc1" 00:21:03.772 } 00:21:03.772 } 00:21:03.772 }' 00:21:03.772 04:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:04.032 04:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:04.032 04:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:04.032 04:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:04.032 04:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:04.032 04:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:04.032 04:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:04.032 04:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:04.032 04:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- 
# [[ null == null ]] 00:21:04.032 04:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:04.289 04:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:04.289 04:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:04.289 04:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:04.289 04:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:04.289 04:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:04.547 04:16:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:04.547 "name": "pt2", 00:21:04.547 "aliases": [ 00:21:04.547 "00000000-0000-0000-0000-000000000002" 00:21:04.547 ], 00:21:04.547 "product_name": "passthru", 00:21:04.547 "block_size": 512, 00:21:04.547 "num_blocks": 65536, 00:21:04.547 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:04.547 "assigned_rate_limits": { 00:21:04.547 "rw_ios_per_sec": 0, 00:21:04.548 "rw_mbytes_per_sec": 0, 00:21:04.548 "r_mbytes_per_sec": 0, 00:21:04.548 "w_mbytes_per_sec": 0 00:21:04.548 }, 00:21:04.548 "claimed": true, 00:21:04.548 "claim_type": "exclusive_write", 00:21:04.548 "zoned": false, 00:21:04.548 "supported_io_types": { 00:21:04.548 "read": true, 00:21:04.548 "write": true, 00:21:04.548 "unmap": true, 00:21:04.548 "flush": true, 00:21:04.548 "reset": true, 00:21:04.548 "nvme_admin": false, 00:21:04.548 "nvme_io": false, 00:21:04.548 "nvme_io_md": false, 00:21:04.548 "write_zeroes": true, 00:21:04.548 "zcopy": true, 00:21:04.548 "get_zone_info": false, 00:21:04.548 "zone_management": false, 00:21:04.548 "zone_append": false, 00:21:04.548 "compare": false, 00:21:04.548 "compare_and_write": false, 00:21:04.548 "abort": true, 00:21:04.548 "seek_hole": false, 
00:21:04.548 "seek_data": false, 00:21:04.548 "copy": true, 00:21:04.548 "nvme_iov_md": false 00:21:04.548 }, 00:21:04.548 "memory_domains": [ 00:21:04.548 { 00:21:04.548 "dma_device_id": "system", 00:21:04.548 "dma_device_type": 1 00:21:04.548 }, 00:21:04.548 { 00:21:04.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:04.548 "dma_device_type": 2 00:21:04.548 } 00:21:04.548 ], 00:21:04.548 "driver_specific": { 00:21:04.548 "passthru": { 00:21:04.548 "name": "pt2", 00:21:04.548 "base_bdev_name": "malloc2" 00:21:04.548 } 00:21:04.548 } 00:21:04.548 }' 00:21:04.548 04:16:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:04.548 04:16:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:04.548 04:16:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:04.548 04:16:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:04.548 04:16:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:04.548 04:16:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:04.548 04:16:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:04.548 04:16:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:04.806 04:16:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:04.806 04:16:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:04.806 04:16:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:04.806 04:16:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:04.806 04:16:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:04.806 04:16:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:21:04.806 04:16:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:05.063 04:16:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:05.063 "name": "pt3", 00:21:05.063 "aliases": [ 00:21:05.063 "00000000-0000-0000-0000-000000000003" 00:21:05.063 ], 00:21:05.063 "product_name": "passthru", 00:21:05.063 "block_size": 512, 00:21:05.063 "num_blocks": 65536, 00:21:05.063 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:05.063 "assigned_rate_limits": { 00:21:05.063 "rw_ios_per_sec": 0, 00:21:05.063 "rw_mbytes_per_sec": 0, 00:21:05.063 "r_mbytes_per_sec": 0, 00:21:05.063 "w_mbytes_per_sec": 0 00:21:05.063 }, 00:21:05.063 "claimed": true, 00:21:05.063 "claim_type": "exclusive_write", 00:21:05.063 "zoned": false, 00:21:05.063 "supported_io_types": { 00:21:05.063 "read": true, 00:21:05.063 "write": true, 00:21:05.063 "unmap": true, 00:21:05.063 "flush": true, 00:21:05.063 "reset": true, 00:21:05.063 "nvme_admin": false, 00:21:05.063 "nvme_io": false, 00:21:05.063 "nvme_io_md": false, 00:21:05.063 "write_zeroes": true, 00:21:05.063 "zcopy": true, 00:21:05.063 "get_zone_info": false, 00:21:05.063 "zone_management": false, 00:21:05.063 "zone_append": false, 00:21:05.063 "compare": false, 00:21:05.063 "compare_and_write": false, 00:21:05.063 "abort": true, 00:21:05.063 "seek_hole": false, 00:21:05.063 "seek_data": false, 00:21:05.063 "copy": true, 00:21:05.063 "nvme_iov_md": false 00:21:05.063 }, 00:21:05.063 "memory_domains": [ 00:21:05.063 { 00:21:05.063 "dma_device_id": "system", 00:21:05.063 "dma_device_type": 1 00:21:05.063 }, 00:21:05.063 { 00:21:05.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:05.063 "dma_device_type": 2 00:21:05.063 } 00:21:05.063 ], 00:21:05.063 "driver_specific": { 00:21:05.063 "passthru": { 00:21:05.063 "name": "pt3", 00:21:05.063 "base_bdev_name": "malloc3" 
00:21:05.063 } 00:21:05.063 } 00:21:05.063 }' 00:21:05.063 04:16:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:05.063 04:16:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:05.063 04:16:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:05.063 04:16:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:05.063 04:16:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:05.321 04:16:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:05.321 04:16:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:05.321 04:16:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:05.321 04:16:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:05.321 04:16:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:05.321 04:16:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:05.321 04:16:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:05.321 04:16:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:05.321 04:16:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:21:05.582 [2024-07-23 04:16:14.235170] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:05.582 04:16:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=c8f0afa8-0380-482d-aace-91f1daf84b89 00:21:05.582 04:16:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z c8f0afa8-0380-482d-aace-91f1daf84b89 ']' 00:21:05.582 04:16:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:05.840 [2024-07-23 04:16:14.451361] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:05.840 [2024-07-23 04:16:14.451397] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:05.840 [2024-07-23 04:16:14.451483] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:05.840 [2024-07-23 04:16:14.451572] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:05.840 [2024-07-23 04:16:14.451591] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041480 name raid_bdev1, state offline 00:21:05.840 04:16:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:05.840 04:16:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:21:06.099 04:16:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:21:06.099 04:16:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:21:06.099 04:16:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:21:06.099 04:16:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:21:06.357 04:16:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:21:06.357 04:16:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:06.358 04:16:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:21:06.358 04:16:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:06.618 04:16:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:21:06.618 04:16:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:06.903 04:16:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:21:06.903 04:16:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:21:06.903 04:16:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:21:06.903 04:16:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:21:06.903 04:16:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:06.903 04:16:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:06.903 04:16:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:06.903 04:16:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:06.903 04:16:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:06.903 04:16:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:06.903 04:16:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:06.903 04:16:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:21:06.903 04:16:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:21:07.176 [2024-07-23 04:16:15.790907] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:07.176 [2024-07-23 04:16:15.793257] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:07.176 [2024-07-23 04:16:15.793324] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:21:07.176 [2024-07-23 04:16:15.793386] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:07.176 [2024-07-23 04:16:15.793441] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:07.176 [2024-07-23 04:16:15.793469] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:21:07.176 [2024-07-23 04:16:15.793495] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:07.176 [2024-07-23 04:16:15.793510] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041a80 name raid_bdev1, state configuring 00:21:07.176 request: 00:21:07.176 { 00:21:07.176 "name": "raid_bdev1", 00:21:07.176 "raid_level": 
"raid1", 00:21:07.176 "base_bdevs": [ 00:21:07.176 "malloc1", 00:21:07.176 "malloc2", 00:21:07.176 "malloc3" 00:21:07.176 ], 00:21:07.176 "superblock": false, 00:21:07.176 "method": "bdev_raid_create", 00:21:07.176 "req_id": 1 00:21:07.176 } 00:21:07.176 Got JSON-RPC error response 00:21:07.176 response: 00:21:07.176 { 00:21:07.176 "code": -17, 00:21:07.176 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:07.176 } 00:21:07.176 04:16:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:21:07.176 04:16:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:07.176 04:16:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:07.176 04:16:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:07.176 04:16:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:07.176 04:16:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:21:07.435 04:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:21:07.435 04:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:21:07.435 04:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:07.694 [2024-07-23 04:16:16.244094] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:07.694 [2024-07-23 04:16:16.244167] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:07.694 [2024-07-23 04:16:16.244200] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042080 00:21:07.694 [2024-07-23 04:16:16.244216] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:07.694 [2024-07-23 04:16:16.246990] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:07.694 [2024-07-23 04:16:16.247024] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:07.694 [2024-07-23 04:16:16.247122] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:07.694 [2024-07-23 04:16:16.247221] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:07.694 pt1 00:21:07.694 04:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:21:07.694 04:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:07.694 04:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:07.694 04:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:07.694 04:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:07.694 04:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:07.694 04:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:07.694 04:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:07.694 04:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:07.694 04:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:07.694 04:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:07.694 04:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:21:07.953 04:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:07.953 "name": "raid_bdev1", 00:21:07.953 "uuid": "c8f0afa8-0380-482d-aace-91f1daf84b89", 00:21:07.953 "strip_size_kb": 0, 00:21:07.953 "state": "configuring", 00:21:07.953 "raid_level": "raid1", 00:21:07.953 "superblock": true, 00:21:07.953 "num_base_bdevs": 3, 00:21:07.953 "num_base_bdevs_discovered": 1, 00:21:07.953 "num_base_bdevs_operational": 3, 00:21:07.953 "base_bdevs_list": [ 00:21:07.953 { 00:21:07.953 "name": "pt1", 00:21:07.953 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:07.953 "is_configured": true, 00:21:07.953 "data_offset": 2048, 00:21:07.953 "data_size": 63488 00:21:07.953 }, 00:21:07.953 { 00:21:07.953 "name": null, 00:21:07.953 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:07.953 "is_configured": false, 00:21:07.953 "data_offset": 2048, 00:21:07.953 "data_size": 63488 00:21:07.953 }, 00:21:07.953 { 00:21:07.953 "name": null, 00:21:07.953 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:07.953 "is_configured": false, 00:21:07.953 "data_offset": 2048, 00:21:07.953 "data_size": 63488 00:21:07.953 } 00:21:07.953 ] 00:21:07.953 }' 00:21:07.953 04:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:07.953 04:16:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:08.521 04:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:21:08.521 04:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:08.521 [2024-07-23 04:16:17.246820] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:08.521 [2024-07-23 04:16:17.246881] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:21:08.521 [2024-07-23 04:16:17.246912] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042980 00:21:08.521 [2024-07-23 04:16:17.246927] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:08.521 [2024-07-23 04:16:17.247497] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:08.521 [2024-07-23 04:16:17.247523] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:08.521 [2024-07-23 04:16:17.247615] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:08.521 [2024-07-23 04:16:17.247648] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:08.521 pt2 00:21:08.521 04:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:08.780 [2024-07-23 04:16:17.463469] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:21:08.780 04:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:21:08.780 04:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:08.780 04:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:08.780 04:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:08.780 04:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:08.780 04:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:08.780 04:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:08.780 04:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:08.780 04:16:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:08.780 04:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:08.780 04:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:08.780 04:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:09.039 04:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:09.039 "name": "raid_bdev1", 00:21:09.039 "uuid": "c8f0afa8-0380-482d-aace-91f1daf84b89", 00:21:09.039 "strip_size_kb": 0, 00:21:09.039 "state": "configuring", 00:21:09.039 "raid_level": "raid1", 00:21:09.039 "superblock": true, 00:21:09.039 "num_base_bdevs": 3, 00:21:09.039 "num_base_bdevs_discovered": 1, 00:21:09.039 "num_base_bdevs_operational": 3, 00:21:09.039 "base_bdevs_list": [ 00:21:09.039 { 00:21:09.039 "name": "pt1", 00:21:09.039 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:09.039 "is_configured": true, 00:21:09.039 "data_offset": 2048, 00:21:09.039 "data_size": 63488 00:21:09.039 }, 00:21:09.039 { 00:21:09.039 "name": null, 00:21:09.039 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:09.039 "is_configured": false, 00:21:09.039 "data_offset": 2048, 00:21:09.039 "data_size": 63488 00:21:09.039 }, 00:21:09.039 { 00:21:09.039 "name": null, 00:21:09.039 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:09.039 "is_configured": false, 00:21:09.039 "data_offset": 2048, 00:21:09.039 "data_size": 63488 00:21:09.039 } 00:21:09.039 ] 00:21:09.039 }' 00:21:09.039 04:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:09.039 04:16:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:09.606 04:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 
00:21:09.607 04:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:21:09.607 04:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:09.865 [2024-07-23 04:16:18.442105] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:09.865 [2024-07-23 04:16:18.442182] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:09.865 [2024-07-23 04:16:18.442209] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042c80 00:21:09.865 [2024-07-23 04:16:18.442228] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:09.865 [2024-07-23 04:16:18.442783] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:09.865 [2024-07-23 04:16:18.442811] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:09.865 [2024-07-23 04:16:18.442901] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:09.865 [2024-07-23 04:16:18.442934] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:09.865 pt2 00:21:09.865 04:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:21:09.865 04:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:21:09.865 04:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:10.124 [2024-07-23 04:16:18.670693] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:10.124 [2024-07-23 04:16:18.670761] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:10.124 [2024-07-23 04:16:18.670792] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042f80 00:21:10.124 [2024-07-23 04:16:18.670810] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:10.124 [2024-07-23 04:16:18.671421] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:10.124 [2024-07-23 04:16:18.671452] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:10.124 [2024-07-23 04:16:18.671549] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:10.124 [2024-07-23 04:16:18.671583] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:10.124 [2024-07-23 04:16:18.671775] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042680 00:21:10.124 [2024-07-23 04:16:18.671792] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:10.124 [2024-07-23 04:16:18.672099] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:21:10.124 [2024-07-23 04:16:18.672353] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042680 00:21:10.124 [2024-07-23 04:16:18.672369] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000042680 00:21:10.124 [2024-07-23 04:16:18.672565] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:10.124 pt3 00:21:10.124 04:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:21:10.124 04:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:21:10.124 04:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:10.124 04:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 
-- # local raid_bdev_name=raid_bdev1 00:21:10.124 04:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:10.124 04:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:10.124 04:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:10.124 04:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:10.124 04:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:10.124 04:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:10.124 04:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:10.124 04:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:10.124 04:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:10.124 04:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.382 04:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:10.382 "name": "raid_bdev1", 00:21:10.382 "uuid": "c8f0afa8-0380-482d-aace-91f1daf84b89", 00:21:10.382 "strip_size_kb": 0, 00:21:10.382 "state": "online", 00:21:10.382 "raid_level": "raid1", 00:21:10.382 "superblock": true, 00:21:10.382 "num_base_bdevs": 3, 00:21:10.382 "num_base_bdevs_discovered": 3, 00:21:10.382 "num_base_bdevs_operational": 3, 00:21:10.382 "base_bdevs_list": [ 00:21:10.382 { 00:21:10.382 "name": "pt1", 00:21:10.382 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:10.382 "is_configured": true, 00:21:10.382 "data_offset": 2048, 00:21:10.382 "data_size": 63488 00:21:10.382 }, 00:21:10.382 { 00:21:10.382 "name": "pt2", 00:21:10.383 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:21:10.383 "is_configured": true, 00:21:10.383 "data_offset": 2048, 00:21:10.383 "data_size": 63488 00:21:10.383 }, 00:21:10.383 { 00:21:10.383 "name": "pt3", 00:21:10.383 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:10.383 "is_configured": true, 00:21:10.383 "data_offset": 2048, 00:21:10.383 "data_size": 63488 00:21:10.383 } 00:21:10.383 ] 00:21:10.383 }' 00:21:10.383 04:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:10.383 04:16:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:10.950 04:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:21:10.950 04:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:21:10.950 04:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:10.950 04:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:10.950 04:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:10.950 04:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:10.950 04:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:10.950 04:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:10.950 [2024-07-23 04:16:19.649672] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:10.950 04:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:10.950 "name": "raid_bdev1", 00:21:10.950 "aliases": [ 00:21:10.950 "c8f0afa8-0380-482d-aace-91f1daf84b89" 00:21:10.950 ], 00:21:10.950 "product_name": "Raid Volume", 00:21:10.950 "block_size": 512, 00:21:10.950 "num_blocks": 
63488, 00:21:10.950 "uuid": "c8f0afa8-0380-482d-aace-91f1daf84b89", 00:21:10.950 "assigned_rate_limits": { 00:21:10.950 "rw_ios_per_sec": 0, 00:21:10.950 "rw_mbytes_per_sec": 0, 00:21:10.950 "r_mbytes_per_sec": 0, 00:21:10.950 "w_mbytes_per_sec": 0 00:21:10.950 }, 00:21:10.950 "claimed": false, 00:21:10.950 "zoned": false, 00:21:10.950 "supported_io_types": { 00:21:10.950 "read": true, 00:21:10.950 "write": true, 00:21:10.950 "unmap": false, 00:21:10.950 "flush": false, 00:21:10.950 "reset": true, 00:21:10.950 "nvme_admin": false, 00:21:10.950 "nvme_io": false, 00:21:10.950 "nvme_io_md": false, 00:21:10.950 "write_zeroes": true, 00:21:10.950 "zcopy": false, 00:21:10.950 "get_zone_info": false, 00:21:10.950 "zone_management": false, 00:21:10.950 "zone_append": false, 00:21:10.950 "compare": false, 00:21:10.950 "compare_and_write": false, 00:21:10.950 "abort": false, 00:21:10.950 "seek_hole": false, 00:21:10.950 "seek_data": false, 00:21:10.950 "copy": false, 00:21:10.950 "nvme_iov_md": false 00:21:10.950 }, 00:21:10.950 "memory_domains": [ 00:21:10.950 { 00:21:10.950 "dma_device_id": "system", 00:21:10.950 "dma_device_type": 1 00:21:10.950 }, 00:21:10.950 { 00:21:10.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:10.950 "dma_device_type": 2 00:21:10.950 }, 00:21:10.950 { 00:21:10.950 "dma_device_id": "system", 00:21:10.950 "dma_device_type": 1 00:21:10.950 }, 00:21:10.950 { 00:21:10.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:10.950 "dma_device_type": 2 00:21:10.950 }, 00:21:10.950 { 00:21:10.950 "dma_device_id": "system", 00:21:10.950 "dma_device_type": 1 00:21:10.950 }, 00:21:10.950 { 00:21:10.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:10.950 "dma_device_type": 2 00:21:10.950 } 00:21:10.950 ], 00:21:10.950 "driver_specific": { 00:21:10.950 "raid": { 00:21:10.950 "uuid": "c8f0afa8-0380-482d-aace-91f1daf84b89", 00:21:10.950 "strip_size_kb": 0, 00:21:10.950 "state": "online", 00:21:10.950 "raid_level": "raid1", 00:21:10.950 "superblock": true, 
00:21:10.950 "num_base_bdevs": 3, 00:21:10.950 "num_base_bdevs_discovered": 3, 00:21:10.950 "num_base_bdevs_operational": 3, 00:21:10.950 "base_bdevs_list": [ 00:21:10.950 { 00:21:10.950 "name": "pt1", 00:21:10.950 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:10.950 "is_configured": true, 00:21:10.950 "data_offset": 2048, 00:21:10.950 "data_size": 63488 00:21:10.950 }, 00:21:10.950 { 00:21:10.950 "name": "pt2", 00:21:10.950 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:10.950 "is_configured": true, 00:21:10.950 "data_offset": 2048, 00:21:10.950 "data_size": 63488 00:21:10.950 }, 00:21:10.950 { 00:21:10.950 "name": "pt3", 00:21:10.950 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:10.950 "is_configured": true, 00:21:10.950 "data_offset": 2048, 00:21:10.950 "data_size": 63488 00:21:10.950 } 00:21:10.950 ] 00:21:10.950 } 00:21:10.950 } 00:21:10.950 }' 00:21:10.950 04:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:10.950 04:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:21:10.950 pt2 00:21:10.950 pt3' 00:21:10.950 04:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:10.950 04:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:10.950 04:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:11.209 04:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:11.209 "name": "pt1", 00:21:11.209 "aliases": [ 00:21:11.209 "00000000-0000-0000-0000-000000000001" 00:21:11.209 ], 00:21:11.209 "product_name": "passthru", 00:21:11.209 "block_size": 512, 00:21:11.209 "num_blocks": 65536, 00:21:11.209 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:11.209 
"assigned_rate_limits": { 00:21:11.209 "rw_ios_per_sec": 0, 00:21:11.209 "rw_mbytes_per_sec": 0, 00:21:11.209 "r_mbytes_per_sec": 0, 00:21:11.209 "w_mbytes_per_sec": 0 00:21:11.209 }, 00:21:11.209 "claimed": true, 00:21:11.209 "claim_type": "exclusive_write", 00:21:11.209 "zoned": false, 00:21:11.209 "supported_io_types": { 00:21:11.209 "read": true, 00:21:11.209 "write": true, 00:21:11.209 "unmap": true, 00:21:11.209 "flush": true, 00:21:11.209 "reset": true, 00:21:11.209 "nvme_admin": false, 00:21:11.209 "nvme_io": false, 00:21:11.209 "nvme_io_md": false, 00:21:11.209 "write_zeroes": true, 00:21:11.209 "zcopy": true, 00:21:11.209 "get_zone_info": false, 00:21:11.209 "zone_management": false, 00:21:11.209 "zone_append": false, 00:21:11.209 "compare": false, 00:21:11.209 "compare_and_write": false, 00:21:11.209 "abort": true, 00:21:11.209 "seek_hole": false, 00:21:11.209 "seek_data": false, 00:21:11.209 "copy": true, 00:21:11.209 "nvme_iov_md": false 00:21:11.209 }, 00:21:11.209 "memory_domains": [ 00:21:11.209 { 00:21:11.209 "dma_device_id": "system", 00:21:11.209 "dma_device_type": 1 00:21:11.209 }, 00:21:11.209 { 00:21:11.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:11.209 "dma_device_type": 2 00:21:11.209 } 00:21:11.209 ], 00:21:11.209 "driver_specific": { 00:21:11.209 "passthru": { 00:21:11.209 "name": "pt1", 00:21:11.209 "base_bdev_name": "malloc1" 00:21:11.209 } 00:21:11.209 } 00:21:11.209 }' 00:21:11.209 04:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:11.209 04:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:11.209 04:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:11.209 04:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:11.467 04:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:11.467 04:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # 
[[ null == null ]] 00:21:11.467 04:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:11.467 04:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:11.467 04:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:11.467 04:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:11.467 04:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:11.467 04:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:11.467 04:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:11.467 04:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:11.467 04:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:11.725 04:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:11.725 "name": "pt2", 00:21:11.725 "aliases": [ 00:21:11.725 "00000000-0000-0000-0000-000000000002" 00:21:11.725 ], 00:21:11.725 "product_name": "passthru", 00:21:11.725 "block_size": 512, 00:21:11.725 "num_blocks": 65536, 00:21:11.725 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:11.725 "assigned_rate_limits": { 00:21:11.725 "rw_ios_per_sec": 0, 00:21:11.725 "rw_mbytes_per_sec": 0, 00:21:11.725 "r_mbytes_per_sec": 0, 00:21:11.725 "w_mbytes_per_sec": 0 00:21:11.725 }, 00:21:11.725 "claimed": true, 00:21:11.725 "claim_type": "exclusive_write", 00:21:11.725 "zoned": false, 00:21:11.725 "supported_io_types": { 00:21:11.725 "read": true, 00:21:11.725 "write": true, 00:21:11.725 "unmap": true, 00:21:11.725 "flush": true, 00:21:11.725 "reset": true, 00:21:11.725 "nvme_admin": false, 00:21:11.725 "nvme_io": false, 00:21:11.725 "nvme_io_md": false, 00:21:11.725 
"write_zeroes": true, 00:21:11.725 "zcopy": true, 00:21:11.725 "get_zone_info": false, 00:21:11.725 "zone_management": false, 00:21:11.725 "zone_append": false, 00:21:11.725 "compare": false, 00:21:11.725 "compare_and_write": false, 00:21:11.725 "abort": true, 00:21:11.725 "seek_hole": false, 00:21:11.725 "seek_data": false, 00:21:11.725 "copy": true, 00:21:11.725 "nvme_iov_md": false 00:21:11.725 }, 00:21:11.725 "memory_domains": [ 00:21:11.725 { 00:21:11.725 "dma_device_id": "system", 00:21:11.725 "dma_device_type": 1 00:21:11.725 }, 00:21:11.725 { 00:21:11.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:11.725 "dma_device_type": 2 00:21:11.725 } 00:21:11.725 ], 00:21:11.725 "driver_specific": { 00:21:11.725 "passthru": { 00:21:11.725 "name": "pt2", 00:21:11.725 "base_bdev_name": "malloc2" 00:21:11.725 } 00:21:11.725 } 00:21:11.725 }' 00:21:11.725 04:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:11.725 04:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:11.984 04:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:11.984 04:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:11.984 04:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:11.984 04:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:11.984 04:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:11.984 04:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:11.984 04:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:11.984 04:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:12.243 04:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:12.243 04:16:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:12.243 04:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:12.243 04:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:21:12.243 04:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:12.243 04:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:12.243 "name": "pt3", 00:21:12.243 "aliases": [ 00:21:12.243 "00000000-0000-0000-0000-000000000003" 00:21:12.243 ], 00:21:12.243 "product_name": "passthru", 00:21:12.243 "block_size": 512, 00:21:12.243 "num_blocks": 65536, 00:21:12.243 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:12.243 "assigned_rate_limits": { 00:21:12.243 "rw_ios_per_sec": 0, 00:21:12.243 "rw_mbytes_per_sec": 0, 00:21:12.243 "r_mbytes_per_sec": 0, 00:21:12.243 "w_mbytes_per_sec": 0 00:21:12.243 }, 00:21:12.243 "claimed": true, 00:21:12.243 "claim_type": "exclusive_write", 00:21:12.243 "zoned": false, 00:21:12.243 "supported_io_types": { 00:21:12.243 "read": true, 00:21:12.243 "write": true, 00:21:12.243 "unmap": true, 00:21:12.243 "flush": true, 00:21:12.243 "reset": true, 00:21:12.243 "nvme_admin": false, 00:21:12.243 "nvme_io": false, 00:21:12.243 "nvme_io_md": false, 00:21:12.243 "write_zeroes": true, 00:21:12.243 "zcopy": true, 00:21:12.243 "get_zone_info": false, 00:21:12.243 "zone_management": false, 00:21:12.243 "zone_append": false, 00:21:12.243 "compare": false, 00:21:12.243 "compare_and_write": false, 00:21:12.243 "abort": true, 00:21:12.243 "seek_hole": false, 00:21:12.243 "seek_data": false, 00:21:12.243 "copy": true, 00:21:12.243 "nvme_iov_md": false 00:21:12.243 }, 00:21:12.243 "memory_domains": [ 00:21:12.243 { 00:21:12.243 "dma_device_id": "system", 00:21:12.243 "dma_device_type": 1 00:21:12.243 }, 00:21:12.243 { 00:21:12.243 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:12.243 "dma_device_type": 2 00:21:12.243 } 00:21:12.243 ], 00:21:12.243 "driver_specific": { 00:21:12.243 "passthru": { 00:21:12.243 "name": "pt3", 00:21:12.243 "base_bdev_name": "malloc3" 00:21:12.243 } 00:21:12.243 } 00:21:12.243 }' 00:21:12.243 04:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:12.501 04:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:12.501 04:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:12.501 04:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:12.501 04:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:12.501 04:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:12.501 04:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:12.501 04:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:12.501 04:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:12.501 04:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:12.759 04:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:12.759 04:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:12.759 04:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:12.759 04:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:21:13.017 [2024-07-23 04:16:21.554839] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:13.017 04:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 
c8f0afa8-0380-482d-aace-91f1daf84b89 '!=' c8f0afa8-0380-482d-aace-91f1daf84b89 ']' 00:21:13.017 04:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:21:13.017 04:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:13.017 04:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:21:13.017 04:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:21:13.017 [2024-07-23 04:16:21.783165] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:21:13.017 04:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:13.017 04:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:13.017 04:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:13.017 04:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:13.018 04:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:13.276 04:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:13.276 04:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:13.276 04:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:13.276 04:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:13.276 04:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:13.276 04:16:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:13.276 04:16:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:13.276 04:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:13.276 "name": "raid_bdev1", 00:21:13.276 "uuid": "c8f0afa8-0380-482d-aace-91f1daf84b89", 00:21:13.276 "strip_size_kb": 0, 00:21:13.276 "state": "online", 00:21:13.276 "raid_level": "raid1", 00:21:13.276 "superblock": true, 00:21:13.276 "num_base_bdevs": 3, 00:21:13.276 "num_base_bdevs_discovered": 2, 00:21:13.276 "num_base_bdevs_operational": 2, 00:21:13.276 "base_bdevs_list": [ 00:21:13.276 { 00:21:13.276 "name": null, 00:21:13.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:13.276 "is_configured": false, 00:21:13.276 "data_offset": 2048, 00:21:13.276 "data_size": 63488 00:21:13.276 }, 00:21:13.276 { 00:21:13.276 "name": "pt2", 00:21:13.276 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:13.276 "is_configured": true, 00:21:13.276 "data_offset": 2048, 00:21:13.276 "data_size": 63488 00:21:13.276 }, 00:21:13.276 { 00:21:13.276 "name": "pt3", 00:21:13.276 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:13.276 "is_configured": true, 00:21:13.276 "data_offset": 2048, 00:21:13.276 "data_size": 63488 00:21:13.276 } 00:21:13.276 ] 00:21:13.276 }' 00:21:13.276 04:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:13.276 04:16:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.211 04:16:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:14.469 [2024-07-23 04:16:23.106644] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:14.469 [2024-07-23 04:16:23.106677] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:14.469 [2024-07-23 04:16:23.106759] bdev_raid.c: 
486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:14.469 [2024-07-23 04:16:23.106828] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:14.469 [2024-07-23 04:16:23.106847] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042680 name raid_bdev1, state offline 00:21:14.469 04:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:21:14.469 04:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:14.726 04:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:21:14.726 04:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:21:14.726 04:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:21:14.726 04:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:21:14.726 04:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:14.984 04:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:21:14.984 04:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:21:14.984 04:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:15.242 04:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:21:15.242 04:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:21:15.242 04:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:21:15.242 04:16:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:21:15.242 04:16:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:15.242 [2024-07-23 04:16:24.025069] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:15.242 [2024-07-23 04:16:24.025132] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:15.242 [2024-07-23 04:16:24.025162] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043280 00:21:15.242 [2024-07-23 04:16:24.025180] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:15.500 [2024-07-23 04:16:24.027954] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:15.500 [2024-07-23 04:16:24.027988] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:15.500 [2024-07-23 04:16:24.028072] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:15.500 [2024-07-23 04:16:24.028131] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:15.500 pt2 00:21:15.500 04:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:21:15.500 04:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:15.500 04:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:15.500 04:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:15.500 04:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:15.500 04:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 
00:21:15.500 04:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:15.500 04:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:15.500 04:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:15.501 04:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:15.501 04:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:15.501 04:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:15.501 04:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:15.501 "name": "raid_bdev1", 00:21:15.501 "uuid": "c8f0afa8-0380-482d-aace-91f1daf84b89", 00:21:15.501 "strip_size_kb": 0, 00:21:15.501 "state": "configuring", 00:21:15.501 "raid_level": "raid1", 00:21:15.501 "superblock": true, 00:21:15.501 "num_base_bdevs": 3, 00:21:15.501 "num_base_bdevs_discovered": 1, 00:21:15.501 "num_base_bdevs_operational": 2, 00:21:15.501 "base_bdevs_list": [ 00:21:15.501 { 00:21:15.501 "name": null, 00:21:15.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:15.501 "is_configured": false, 00:21:15.501 "data_offset": 2048, 00:21:15.501 "data_size": 63488 00:21:15.501 }, 00:21:15.501 { 00:21:15.501 "name": "pt2", 00:21:15.501 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:15.501 "is_configured": true, 00:21:15.501 "data_offset": 2048, 00:21:15.501 "data_size": 63488 00:21:15.501 }, 00:21:15.501 { 00:21:15.501 "name": null, 00:21:15.501 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:15.501 "is_configured": false, 00:21:15.501 "data_offset": 2048, 00:21:15.501 "data_size": 63488 00:21:15.501 } 00:21:15.501 ] 00:21:15.501 }' 00:21:15.501 04:16:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:15.501 04:16:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.068 04:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:21:16.068 04:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:21:16.068 04:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=2 00:21:16.068 04:16:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:16.326 [2024-07-23 04:16:25.055871] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:16.326 [2024-07-23 04:16:25.055937] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:16.326 [2024-07-23 04:16:25.055963] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043b80 00:21:16.326 [2024-07-23 04:16:25.055981] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:16.326 [2024-07-23 04:16:25.056531] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:16.326 [2024-07-23 04:16:25.056560] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:16.326 [2024-07-23 04:16:25.056645] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:16.326 [2024-07-23 04:16:25.056676] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:16.326 [2024-07-23 04:16:25.056846] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000043880 00:21:16.326 [2024-07-23 04:16:25.056870] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:16.326 [2024-07-23 04:16:25.057175] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000107e0 00:21:16.326 [2024-07-23 04:16:25.057389] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000043880 00:21:16.326 [2024-07-23 04:16:25.057404] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000043880 00:21:16.326 [2024-07-23 04:16:25.057594] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:16.326 pt3 00:21:16.326 04:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:16.326 04:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:16.326 04:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:16.326 04:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:16.326 04:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:16.326 04:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:16.326 04:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:16.326 04:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:16.326 04:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:16.326 04:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:16.326 04:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:16.326 04:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:16.584 04:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:16.584 
"name": "raid_bdev1", 00:21:16.584 "uuid": "c8f0afa8-0380-482d-aace-91f1daf84b89", 00:21:16.584 "strip_size_kb": 0, 00:21:16.584 "state": "online", 00:21:16.584 "raid_level": "raid1", 00:21:16.584 "superblock": true, 00:21:16.584 "num_base_bdevs": 3, 00:21:16.584 "num_base_bdevs_discovered": 2, 00:21:16.584 "num_base_bdevs_operational": 2, 00:21:16.584 "base_bdevs_list": [ 00:21:16.584 { 00:21:16.584 "name": null, 00:21:16.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.584 "is_configured": false, 00:21:16.584 "data_offset": 2048, 00:21:16.584 "data_size": 63488 00:21:16.584 }, 00:21:16.584 { 00:21:16.584 "name": "pt2", 00:21:16.584 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:16.584 "is_configured": true, 00:21:16.584 "data_offset": 2048, 00:21:16.584 "data_size": 63488 00:21:16.584 }, 00:21:16.584 { 00:21:16.584 "name": "pt3", 00:21:16.584 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:16.584 "is_configured": true, 00:21:16.584 "data_offset": 2048, 00:21:16.584 "data_size": 63488 00:21:16.584 } 00:21:16.584 ] 00:21:16.584 }' 00:21:16.584 04:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:16.584 04:16:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.151 04:16:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:17.409 [2024-07-23 04:16:26.026488] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:17.409 [2024-07-23 04:16:26.026524] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:17.409 [2024-07-23 04:16:26.026605] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:17.409 [2024-07-23 04:16:26.026678] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:21:17.409 [2024-07-23 04:16:26.026695] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000043880 name raid_bdev1, state offline 00:21:17.409 04:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:21:17.409 04:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:17.667 04:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:21:17.667 04:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:21:17.667 04:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 3 -gt 2 ']' 00:21:17.667 04:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=2 00:21:17.667 04:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:17.926 04:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:18.185 [2024-07-23 04:16:26.712290] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:18.185 [2024-07-23 04:16:26.712358] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:18.185 [2024-07-23 04:16:26.712388] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043e80 00:21:18.185 [2024-07-23 04:16:26.712403] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:18.185 [2024-07-23 04:16:26.715175] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:18.185 [2024-07-23 04:16:26.715209] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 
00:21:18.185 [2024-07-23 04:16:26.715309] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:18.185 [2024-07-23 04:16:26.715367] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:18.185 [2024-07-23 04:16:26.715555] bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:21:18.185 [2024-07-23 04:16:26.715576] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:18.185 [2024-07-23 04:16:26.715598] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000044480 name raid_bdev1, state configuring 00:21:18.185 [2024-07-23 04:16:26.715693] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:18.185 pt1 00:21:18.185 04:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 3 -gt 2 ']' 00:21:18.185 04:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:21:18.185 04:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:18.185 04:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:18.185 04:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:18.185 04:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:18.185 04:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:18.185 04:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:18.185 04:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:18.185 04:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:18.185 04:16:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:18.185 04:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:18.185 04:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:18.185 04:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:18.185 "name": "raid_bdev1", 00:21:18.185 "uuid": "c8f0afa8-0380-482d-aace-91f1daf84b89", 00:21:18.185 "strip_size_kb": 0, 00:21:18.185 "state": "configuring", 00:21:18.185 "raid_level": "raid1", 00:21:18.185 "superblock": true, 00:21:18.185 "num_base_bdevs": 3, 00:21:18.185 "num_base_bdevs_discovered": 1, 00:21:18.185 "num_base_bdevs_operational": 2, 00:21:18.185 "base_bdevs_list": [ 00:21:18.185 { 00:21:18.185 "name": null, 00:21:18.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:18.185 "is_configured": false, 00:21:18.185 "data_offset": 2048, 00:21:18.185 "data_size": 63488 00:21:18.185 }, 00:21:18.185 { 00:21:18.185 "name": "pt2", 00:21:18.185 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:18.185 "is_configured": true, 00:21:18.185 "data_offset": 2048, 00:21:18.185 "data_size": 63488 00:21:18.185 }, 00:21:18.185 { 00:21:18.185 "name": null, 00:21:18.185 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:18.185 "is_configured": false, 00:21:18.185 "data_offset": 2048, 00:21:18.185 "data_size": 63488 00:21:18.185 } 00:21:18.185 ] 00:21:18.185 }' 00:21:18.185 04:16:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:18.185 04:16:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.752 04:16:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:21:18.752 
04:16:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:19.010 04:16:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:21:19.010 04:16:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:19.269 [2024-07-23 04:16:27.951779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:19.269 [2024-07-23 04:16:27.951840] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:19.269 [2024-07-23 04:16:27.951869] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000044a80 00:21:19.269 [2024-07-23 04:16:27.951885] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:19.269 [2024-07-23 04:16:27.952470] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:19.269 [2024-07-23 04:16:27.952495] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:19.269 [2024-07-23 04:16:27.952591] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:19.269 [2024-07-23 04:16:27.952618] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:19.269 [2024-07-23 04:16:27.952796] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000044780 00:21:19.269 [2024-07-23 04:16:27.952810] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:19.269 [2024-07-23 04:16:27.953115] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000108b0 00:21:19.269 [2024-07-23 04:16:27.953386] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000044780 00:21:19.269 [2024-07-23 04:16:27.953405] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000044780 00:21:19.269 [2024-07-23 04:16:27.953583] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:19.269 pt3 00:21:19.269 04:16:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:19.269 04:16:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:19.269 04:16:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:19.269 04:16:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:19.269 04:16:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:19.270 04:16:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:19.270 04:16:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:19.270 04:16:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:19.270 04:16:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:19.270 04:16:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:19.270 04:16:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:19.270 04:16:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:19.528 04:16:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:19.528 "name": "raid_bdev1", 00:21:19.528 "uuid": "c8f0afa8-0380-482d-aace-91f1daf84b89", 00:21:19.528 "strip_size_kb": 0, 00:21:19.528 "state": "online", 00:21:19.528 "raid_level": "raid1", 00:21:19.528 "superblock": 
true, 00:21:19.528 "num_base_bdevs": 3, 00:21:19.528 "num_base_bdevs_discovered": 2, 00:21:19.528 "num_base_bdevs_operational": 2, 00:21:19.528 "base_bdevs_list": [ 00:21:19.528 { 00:21:19.528 "name": null, 00:21:19.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:19.528 "is_configured": false, 00:21:19.528 "data_offset": 2048, 00:21:19.528 "data_size": 63488 00:21:19.528 }, 00:21:19.528 { 00:21:19.528 "name": "pt2", 00:21:19.528 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:19.528 "is_configured": true, 00:21:19.528 "data_offset": 2048, 00:21:19.528 "data_size": 63488 00:21:19.528 }, 00:21:19.528 { 00:21:19.528 "name": "pt3", 00:21:19.528 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:19.528 "is_configured": true, 00:21:19.528 "data_offset": 2048, 00:21:19.528 "data_size": 63488 00:21:19.528 } 00:21:19.528 ] 00:21:19.528 }' 00:21:19.528 04:16:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:19.528 04:16:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.128 04:16:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:21:20.128 04:16:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:20.387 04:16:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:21:20.387 04:16:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:21:20.387 04:16:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:20.646 [2024-07-23 04:16:29.227556] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:20.646 04:16:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- 
# '[' c8f0afa8-0380-482d-aace-91f1daf84b89 '!=' c8f0afa8-0380-482d-aace-91f1daf84b89 ']' 00:21:20.646 04:16:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2691014 00:21:20.646 04:16:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 2691014 ']' 00:21:20.646 04:16:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 2691014 00:21:20.646 04:16:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:21:20.646 04:16:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:20.646 04:16:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2691014 00:21:20.646 04:16:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:20.646 04:16:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:20.646 04:16:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2691014' 00:21:20.646 killing process with pid 2691014 00:21:20.646 04:16:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 2691014 00:21:20.646 [2024-07-23 04:16:29.300282] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:20.646 [2024-07-23 04:16:29.300380] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:20.646 04:16:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 2691014 00:21:20.646 [2024-07-23 04:16:29.300452] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:20.646 [2024-07-23 04:16:29.300472] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000044780 name raid_bdev1, state offline 00:21:20.906 [2024-07-23 04:16:29.632591] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 
00:21:22.813 04:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:21:22.813 00:21:22.813 real 0m22.969s 00:21:22.813 user 0m40.027s 00:21:22.813 sys 0m4.008s 00:21:22.813 04:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:22.813 04:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.813 ************************************ 00:21:22.813 END TEST raid_superblock_test 00:21:22.813 ************************************ 00:21:22.813 04:16:31 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:21:22.813 04:16:31 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:21:22.813 04:16:31 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:21:22.813 04:16:31 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:22.813 04:16:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:22.813 ************************************ 00:21:22.813 START TEST raid_read_error_test 00:21:22.813 ************************************ 00:21:22.813 04:16:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 3 read 00:21:22.813 04:16:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:21:22.813 04:16:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:21:22.813 04:16:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:21:22.813 04:16:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:21:22.813 04:16:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:22.813 04:16:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:21:22.813 04:16:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:22.813 04:16:31 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:22.813 04:16:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:21:22.813 04:16:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:22.813 04:16:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:22.813 04:16:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:21:22.813 04:16:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:22.813 04:16:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:22.813 04:16:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:22.813 04:16:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:21:22.813 04:16:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:21:22.814 04:16:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:21:22.814 04:16:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:21:22.814 04:16:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:21:22.814 04:16:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:21:22.814 04:16:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:21:22.814 04:16:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:21:22.814 04:16:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:21:22.814 04:16:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.mJrZIIAAGY 00:21:22.814 04:16:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2695438 00:21:22.814 04:16:31 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2695438 /var/tmp/spdk-raid.sock 00:21:22.814 04:16:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:22.814 04:16:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 2695438 ']' 00:21:22.814 04:16:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:22.814 04:16:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:22.814 04:16:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:22.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:22.814 04:16:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:22.814 04:16:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.814 [2024-07-23 04:16:31.487196] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
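`waitforlisten 2695438 /var/tmp/spdk-raid.sock` above blocks the test until bdevperf has brought up its RPC socket. A minimal sketch of that idea, polling for the UNIX domain socket path with a bounded retry count (an assumption for illustration; the real helper also probes the RPC server and honours `max_retries=100`):

```shell
# Minimal sketch (hypothetical, not SPDK's actual waitforlisten): poll
# until the UNIX domain socket path exists, giving up after a bounded
# number of retries so a bdevperf that failed to start aborts quickly.
waitforlisten_sketch() {
  sock=$1
  tries=${2:-50}            # ~5s at 0.1s per attempt
  while [ "$tries" -gt 0 ]; do
    [ -S "$sock" ] && return 0
    tries=$((tries - 1))
    sleep 0.1
  done
  echo "Timed out waiting for $sock" >&2
  return 1
}
```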
00:21:22.814 [2024-07-23 04:16:31.487318] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2695438 ] 00:21:23.073 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.073 EAL: Requested device 0000:3d:01.0 cannot be used 00:21:23.073 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.073 EAL: Requested device 0000:3d:01.1 cannot be used 00:21:23.073 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.073 EAL: Requested device 0000:3d:01.2 cannot be used 00:21:23.073 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.073 EAL: Requested device 0000:3d:01.3 cannot be used 00:21:23.073 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.073 EAL: Requested device 0000:3d:01.4 cannot be used 00:21:23.073 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.073 EAL: Requested device 0000:3d:01.5 cannot be used 00:21:23.073 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.073 EAL: Requested device 0000:3d:01.6 cannot be used 00:21:23.073 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.073 EAL: Requested device 0000:3d:01.7 cannot be used 00:21:23.073 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.073 EAL: Requested device 0000:3d:02.0 cannot be used 00:21:23.073 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.073 EAL: Requested device 0000:3d:02.1 cannot be used 00:21:23.073 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.073 EAL: Requested device 0000:3d:02.2 cannot be used 00:21:23.073 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.073 EAL: Requested device 0000:3d:02.3 cannot be used 
00:21:23.073 [2024-07-23 04:16:31.713988] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.332 [2024-07-23 04:16:31.996266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.591 [2024-07-23 04:16:32.324538] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:23.591 [2024-07-23 04:16:32.324581] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:23.849 04:16:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:23.849 04:16:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:21:23.849 04:16:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:23.849 04:16:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:24.108 BaseBdev1_malloc 00:21:24.108 04:16:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:21:24.367 true 00:21:24.367 04:16:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:24.625 [2024-07-23 04:16:33.199769] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:24.625 [2024-07-23 04:16:33.199834] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:24.625 [2024-07-23 04:16:33.199861] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:21:24.625 [2024-07-23 04:16:33.199884] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:24.625 [2024-07-23 04:16:33.202655] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:24.626 [2024-07-23 04:16:33.202695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:24.626 BaseBdev1 00:21:24.626 04:16:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:24.626 04:16:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:24.883 BaseBdev2_malloc 00:21:24.883 04:16:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:21:25.140 true 00:21:25.140 04:16:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:25.140 [2024-07-23 04:16:33.923819] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:21:25.140 [2024-07-23 04:16:33.923880] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:25.140 [2024-07-23 04:16:33.923906] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:21:25.140 [2024-07-23 04:16:33.923928] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:25.398 [2024-07-23 04:16:33.926639] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:25.398 [2024-07-23 04:16:33.926675] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:25.398 BaseBdev2 00:21:25.398 04:16:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:25.398 04:16:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:25.656 BaseBdev3_malloc 00:21:25.656 04:16:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:21:25.656 true 00:21:25.656 04:16:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:21:25.914 [2024-07-23 04:16:34.643417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:21:25.914 [2024-07-23 04:16:34.643469] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:25.915 [2024-07-23 04:16:34.643493] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041780 00:21:25.915 [2024-07-23 04:16:34.643510] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:25.915 [2024-07-23 
04:16:34.646224] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:25.915 [2024-07-23 04:16:34.646258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:25.915 BaseBdev3 00:21:25.915 04:16:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:21:26.173 [2024-07-23 04:16:34.872101] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:26.173 [2024-07-23 04:16:34.874474] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:26.173 [2024-07-23 04:16:34.874563] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:26.173 [2024-07-23 04:16:34.874854] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041d80 00:21:26.173 [2024-07-23 04:16:34.874872] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:26.173 [2024-07-23 04:16:34.875225] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:21:26.173 [2024-07-23 04:16:34.875490] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041d80 00:21:26.174 [2024-07-23 04:16:34.875512] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041d80 00:21:26.174 [2024-07-23 04:16:34.875705] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:26.174 04:16:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:26.174 04:16:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:26.174 04:16:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=online 00:21:26.174 04:16:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:26.174 04:16:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:26.174 04:16:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:26.174 04:16:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:26.174 04:16:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:26.174 04:16:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:26.174 04:16:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:26.174 04:16:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:26.174 04:16:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:26.436 04:16:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:26.436 "name": "raid_bdev1", 00:21:26.436 "uuid": "09e430ce-154f-4449-a8c0-9eb7e18d3010", 00:21:26.436 "strip_size_kb": 0, 00:21:26.436 "state": "online", 00:21:26.436 "raid_level": "raid1", 00:21:26.436 "superblock": true, 00:21:26.436 "num_base_bdevs": 3, 00:21:26.436 "num_base_bdevs_discovered": 3, 00:21:26.436 "num_base_bdevs_operational": 3, 00:21:26.436 "base_bdevs_list": [ 00:21:26.436 { 00:21:26.436 "name": "BaseBdev1", 00:21:26.436 "uuid": "5cfb6e4c-2d59-596a-bf48-e70adb9819e1", 00:21:26.436 "is_configured": true, 00:21:26.436 "data_offset": 2048, 00:21:26.436 "data_size": 63488 00:21:26.436 }, 00:21:26.436 { 00:21:26.436 "name": "BaseBdev2", 00:21:26.436 "uuid": "9e39e41a-1826-5791-b5d0-f967a4798132", 00:21:26.436 "is_configured": true, 00:21:26.436 "data_offset": 2048, 00:21:26.436 
"data_size": 63488 00:21:26.436 }, 00:21:26.436 { 00:21:26.436 "name": "BaseBdev3", 00:21:26.436 "uuid": "0b4c1f57-b276-59a9-89f8-79978195724a", 00:21:26.436 "is_configured": true, 00:21:26.436 "data_offset": 2048, 00:21:26.436 "data_size": 63488 00:21:26.436 } 00:21:26.436 ] 00:21:26.436 }' 00:21:26.436 04:16:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:26.436 04:16:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:27.003 04:16:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:27.003 04:16:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:21:27.003 [2024-07-23 04:16:35.764593] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:21:27.940 04:16:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:21:28.199 04:16:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:21:28.199 04:16:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:21:28.199 04:16:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:21:28.199 04:16:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:21:28.199 04:16:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:28.199 04:16:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:28.199 04:16:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:28.199 04:16:36 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:28.199 04:16:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:28.199 04:16:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:28.199 04:16:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:28.199 04:16:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:28.199 04:16:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:28.199 04:16:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:28.199 04:16:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:28.199 04:16:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:28.458 04:16:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:28.458 "name": "raid_bdev1", 00:21:28.458 "uuid": "09e430ce-154f-4449-a8c0-9eb7e18d3010", 00:21:28.458 "strip_size_kb": 0, 00:21:28.458 "state": "online", 00:21:28.458 "raid_level": "raid1", 00:21:28.458 "superblock": true, 00:21:28.458 "num_base_bdevs": 3, 00:21:28.458 "num_base_bdevs_discovered": 3, 00:21:28.458 "num_base_bdevs_operational": 3, 00:21:28.458 "base_bdevs_list": [ 00:21:28.458 { 00:21:28.458 "name": "BaseBdev1", 00:21:28.458 "uuid": "5cfb6e4c-2d59-596a-bf48-e70adb9819e1", 00:21:28.458 "is_configured": true, 00:21:28.458 "data_offset": 2048, 00:21:28.458 "data_size": 63488 00:21:28.458 }, 00:21:28.458 { 00:21:28.458 "name": "BaseBdev2", 00:21:28.458 "uuid": "9e39e41a-1826-5791-b5d0-f967a4798132", 00:21:28.458 "is_configured": true, 00:21:28.458 "data_offset": 2048, 00:21:28.458 "data_size": 63488 00:21:28.458 }, 00:21:28.458 { 00:21:28.458 "name": 
"BaseBdev3", 00:21:28.458 "uuid": "0b4c1f57-b276-59a9-89f8-79978195724a", 00:21:28.458 "is_configured": true, 00:21:28.458 "data_offset": 2048, 00:21:28.458 "data_size": 63488 00:21:28.458 } 00:21:28.458 ] 00:21:28.458 }' 00:21:28.458 04:16:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:28.458 04:16:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:29.026 04:16:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:29.285 [2024-07-23 04:16:37.921661] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:29.285 [2024-07-23 04:16:37.921705] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:29.285 [2024-07-23 04:16:37.925094] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:29.285 [2024-07-23 04:16:37.925160] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:29.285 [2024-07-23 04:16:37.925292] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:29.285 [2024-07-23 04:16:37.925310] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041d80 name raid_bdev1, state offline 00:21:29.285 0 00:21:29.285 04:16:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2695438 00:21:29.285 04:16:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 2695438 ']' 00:21:29.285 04:16:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 2695438 00:21:29.285 04:16:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:21:29.285 04:16:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:29.285 04:16:37 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2695438 00:21:29.285 04:16:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:29.285 04:16:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:29.285 04:16:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2695438' 00:21:29.285 killing process with pid 2695438 00:21:29.285 04:16:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 2695438 00:21:29.285 [2024-07-23 04:16:37.994829] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:29.285 04:16:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 2695438 00:21:29.544 [2024-07-23 04:16:38.223311] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:31.451 04:16:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.mJrZIIAAGY 00:21:31.451 04:16:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:21:31.451 04:16:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:21:31.451 04:16:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:21:31.451 04:16:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:21:31.451 04:16:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:31.451 04:16:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:21:31.451 04:16:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:21:31.451 00:21:31.451 real 0m8.623s 00:21:31.451 user 0m12.176s 00:21:31.451 sys 0m1.346s 00:21:31.451 04:16:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:31.451 04:16:39 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:21:31.451 ************************************ 00:21:31.451 END TEST raid_read_error_test 00:21:31.451 ************************************ 00:21:31.451 04:16:40 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:21:31.451 04:16:40 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:21:31.451 04:16:40 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:21:31.451 04:16:40 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:31.451 04:16:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:31.451 ************************************ 00:21:31.451 START TEST raid_write_error_test 00:21:31.451 ************************************ 00:21:31.451 04:16:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 3 write 00:21:31.451 04:16:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:21:31.451 04:16:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:21:31.451 04:16:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:21:31.451 04:16:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:21:31.451 04:16:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:31.451 04:16:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:21:31.451 04:16:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:31.451 04:16:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:31.451 04:16:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:21:31.451 04:16:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:31.451 04:16:40 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:31.451 04:16:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:21:31.451 04:16:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:31.451 04:16:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:31.451 04:16:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:31.451 04:16:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:21:31.451 04:16:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:21:31.451 04:16:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:21:31.451 04:16:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:21:31.451 04:16:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:21:31.451 04:16:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:21:31.451 04:16:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:21:31.451 04:16:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:21:31.451 04:16:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:21:31.451 04:16:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.6Cdaw6MUYo 00:21:31.451 04:16:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2696871 00:21:31.451 04:16:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2696871 /var/tmp/spdk-raid.sock 00:21:31.451 04:16:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 
128k -q 1 -z -f -L bdev_raid 00:21:31.451 04:16:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 2696871 ']' 00:21:31.451 04:16:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:31.451 04:16:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:31.451 04:16:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:31.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:31.451 04:16:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:31.451 04:16:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.711 [2024-07-23 04:16:40.289714] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
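This write-error variant repeats the array construction the read test drove over RPC above: each base bdev is a malloc bdev wrapped by an error-injection bdev (the `EE_*` name) and a passthru bdev, and the three passthrus are assembled into a raid1 with an on-disk superblock (`-s`). A sketch of that sequence, with `rpc` as a stub standing in for `scripts/rpc.py -s /var/tmp/spdk-raid.sock` so it runs without a live SPDK target:

```shell
# "rpc" is a stub for scripts/rpc.py -s /var/tmp/spdk-raid.sock; it only
# echoes each call so the construction order can be read (and run) without
# an SPDK target.
rpc() { echo "rpc $*"; }

for i in 1 2 3; do
  rpc bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"   # 32 MB malloc bdev, 512 B blocks
  rpc bdev_error_create "BaseBdev${i}_malloc"              # exposes EE_BaseBdev${i}_malloc
  rpc bdev_passthru_create -b "EE_BaseBdev${i}_malloc" -p "BaseBdev${i}"
done
rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s
```

`bdev_error_inject_error EE_BaseBdev1_malloc write failure` can then fail I/O on one leg while the raid1 mirror keeps the bdev online, which is what `verify_raid_bdev_state raid_bdev1 online raid1 0 3` checks in the trace.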
00:21:31.711 [2024-07-23 04:16:40.289977] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2696871 ] 00:21:31.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.970 EAL: Requested device 0000:3d:01.0 cannot be used 00:21:31.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.970 EAL: Requested device 0000:3d:01.1 cannot be used 00:21:31.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.971 EAL: Requested device 0000:3d:01.2 cannot be used 00:21:31.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.971 EAL: Requested device 0000:3d:01.3 cannot be used 00:21:31.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.971 EAL: Requested device 0000:3d:01.4 cannot be used 00:21:31.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.971 EAL: Requested device 0000:3d:01.5 cannot be used 00:21:31.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.971 EAL: Requested device 0000:3d:01.6 cannot be used 00:21:31.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.971 EAL: Requested device 0000:3d:01.7 cannot be used 00:21:31.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.971 EAL: Requested device 0000:3d:02.0 cannot be used 00:21:31.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.971 EAL: Requested device 0000:3d:02.1 cannot be used 00:21:31.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.971 EAL: Requested device 0000:3d:02.2 cannot be used 00:21:31.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.971 EAL: Requested device 0000:3d:02.3 cannot be used 
00:21:31.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.971 EAL: Requested device 0000:3d:02.4 cannot be used 00:21:31.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.971 EAL: Requested device 0000:3d:02.5 cannot be used 00:21:31.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.971 EAL: Requested device 0000:3d:02.6 cannot be used 00:21:31.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.971 EAL: Requested device 0000:3d:02.7 cannot be used 00:21:31.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.971 EAL: Requested device 0000:3f:01.0 cannot be used 00:21:31.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.971 EAL: Requested device 0000:3f:01.1 cannot be used 00:21:31.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.971 EAL: Requested device 0000:3f:01.2 cannot be used 00:21:31.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.971 EAL: Requested device 0000:3f:01.3 cannot be used 00:21:31.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.971 EAL: Requested device 0000:3f:01.4 cannot be used 00:21:31.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.971 EAL: Requested device 0000:3f:01.5 cannot be used 00:21:31.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.971 EAL: Requested device 0000:3f:01.6 cannot be used 00:21:31.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.971 EAL: Requested device 0000:3f:01.7 cannot be used 00:21:31.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.971 EAL: Requested device 0000:3f:02.0 cannot be used 00:21:31.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.971 EAL: Requested device 0000:3f:02.1 cannot be used 00:21:31.971 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.971 EAL: Requested device 0000:3f:02.2 cannot be used 00:21:31.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.971 EAL: Requested device 0000:3f:02.3 cannot be used 00:21:31.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.971 EAL: Requested device 0000:3f:02.4 cannot be used 00:21:31.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.971 EAL: Requested device 0000:3f:02.5 cannot be used 00:21:31.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.971 EAL: Requested device 0000:3f:02.6 cannot be used 00:21:31.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.971 EAL: Requested device 0000:3f:02.7 cannot be used 00:21:31.971 [2024-07-23 04:16:40.653519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.230 [2024-07-23 04:16:40.920569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:32.496 [2024-07-23 04:16:41.240291] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:32.496 [2024-07-23 04:16:41.240333] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:32.755 04:16:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:32.755 04:16:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:21:32.755 04:16:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:32.755 04:16:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:33.014 BaseBdev1_malloc 00:21:33.014 04:16:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:21:33.273 true 00:21:33.273 04:16:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:33.533 [2024-07-23 04:16:42.117015] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:33.533 [2024-07-23 04:16:42.117076] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:33.533 [2024-07-23 04:16:42.117103] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:21:33.533 [2024-07-23 04:16:42.117125] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:33.533 [2024-07-23 04:16:42.119926] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:33.533 [2024-07-23 04:16:42.119966] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:33.533 BaseBdev1 00:21:33.533 04:16:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:33.533 04:16:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:33.793 BaseBdev2_malloc 00:21:33.793 04:16:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:21:34.052 true 00:21:34.052 04:16:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:34.312 [2024-07-23 04:16:42.851254] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on EE_BaseBdev2_malloc 00:21:34.312 [2024-07-23 04:16:42.851311] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:34.312 [2024-07-23 04:16:42.851338] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:21:34.312 [2024-07-23 04:16:42.851358] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:34.312 [2024-07-23 04:16:42.854133] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:34.312 [2024-07-23 04:16:42.854178] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:34.312 BaseBdev2 00:21:34.312 04:16:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:34.312 04:16:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:34.571 BaseBdev3_malloc 00:21:34.571 04:16:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:21:34.829 true 00:21:34.829 04:16:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:21:34.829 [2024-07-23 04:16:43.581968] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:21:34.829 [2024-07-23 04:16:43.582027] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:34.829 [2024-07-23 04:16:43.582056] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041780 00:21:34.829 [2024-07-23 04:16:43.582074] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:34.829 
[2024-07-23 04:16:43.584873] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:34.829 [2024-07-23 04:16:43.584910] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:34.829 BaseBdev3 00:21:34.829 04:16:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:21:35.088 [2024-07-23 04:16:43.806609] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:35.088 [2024-07-23 04:16:43.808938] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:35.088 [2024-07-23 04:16:43.809028] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:35.088 [2024-07-23 04:16:43.809344] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041d80 00:21:35.088 [2024-07-23 04:16:43.809362] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:35.088 [2024-07-23 04:16:43.809681] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:21:35.088 [2024-07-23 04:16:43.809933] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041d80 00:21:35.088 [2024-07-23 04:16:43.809954] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041d80 00:21:35.088 [2024-07-23 04:16:43.810169] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:35.088 04:16:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:35.088 04:16:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:35.088 04:16:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # 
local expected_state=online 00:21:35.088 04:16:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:35.088 04:16:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:35.088 04:16:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:35.088 04:16:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:35.088 04:16:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:35.088 04:16:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:35.088 04:16:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:35.088 04:16:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:35.088 04:16:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:35.347 04:16:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:35.347 "name": "raid_bdev1", 00:21:35.347 "uuid": "0c72f666-1756-40b0-a3a7-eaf4fa2b1fd6", 00:21:35.347 "strip_size_kb": 0, 00:21:35.347 "state": "online", 00:21:35.347 "raid_level": "raid1", 00:21:35.347 "superblock": true, 00:21:35.347 "num_base_bdevs": 3, 00:21:35.347 "num_base_bdevs_discovered": 3, 00:21:35.347 "num_base_bdevs_operational": 3, 00:21:35.347 "base_bdevs_list": [ 00:21:35.347 { 00:21:35.347 "name": "BaseBdev1", 00:21:35.347 "uuid": "cf379014-49ff-5111-bdb4-56332e194840", 00:21:35.347 "is_configured": true, 00:21:35.347 "data_offset": 2048, 00:21:35.347 "data_size": 63488 00:21:35.347 }, 00:21:35.347 { 00:21:35.347 "name": "BaseBdev2", 00:21:35.347 "uuid": "cb59792b-ce19-593f-a2ab-a491d7eedd0a", 00:21:35.347 "is_configured": true, 00:21:35.347 "data_offset": 
2048, 00:21:35.347 "data_size": 63488 00:21:35.347 }, 00:21:35.347 { 00:21:35.347 "name": "BaseBdev3", 00:21:35.347 "uuid": "3903d356-982e-5bc9-9337-a4aa4f07ce37", 00:21:35.347 "is_configured": true, 00:21:35.347 "data_offset": 2048, 00:21:35.347 "data_size": 63488 00:21:35.347 } 00:21:35.347 ] 00:21:35.347 }' 00:21:35.347 04:16:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:35.347 04:16:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.914 04:16:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:21:35.914 04:16:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:36.174 [2024-07-23 04:16:44.719105] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:21:37.113 04:16:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:21:37.113 [2024-07-23 04:16:45.833748] bdev_raid.c:2247:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:21:37.113 [2024-07-23 04:16:45.833807] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:37.113 [2024-07-23 04:16:45.834040] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000107e0 00:21:37.113 04:16:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:21:37.113 04:16:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:21:37.113 04:16:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:21:37.113 04:16:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # 
expected_num_base_bdevs=2 00:21:37.113 04:16:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:37.113 04:16:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:37.113 04:16:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:37.113 04:16:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:37.113 04:16:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:37.113 04:16:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:37.113 04:16:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:37.113 04:16:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:37.113 04:16:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:37.113 04:16:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:37.113 04:16:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:37.113 04:16:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:37.372 04:16:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:37.372 "name": "raid_bdev1", 00:21:37.372 "uuid": "0c72f666-1756-40b0-a3a7-eaf4fa2b1fd6", 00:21:37.372 "strip_size_kb": 0, 00:21:37.372 "state": "online", 00:21:37.372 "raid_level": "raid1", 00:21:37.372 "superblock": true, 00:21:37.372 "num_base_bdevs": 3, 00:21:37.372 "num_base_bdevs_discovered": 2, 00:21:37.372 "num_base_bdevs_operational": 2, 00:21:37.372 "base_bdevs_list": [ 00:21:37.372 { 00:21:37.372 "name": null, 
00:21:37.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.372 "is_configured": false, 00:21:37.372 "data_offset": 2048, 00:21:37.372 "data_size": 63488 00:21:37.372 }, 00:21:37.372 { 00:21:37.372 "name": "BaseBdev2", 00:21:37.372 "uuid": "cb59792b-ce19-593f-a2ab-a491d7eedd0a", 00:21:37.372 "is_configured": true, 00:21:37.372 "data_offset": 2048, 00:21:37.372 "data_size": 63488 00:21:37.372 }, 00:21:37.372 { 00:21:37.372 "name": "BaseBdev3", 00:21:37.372 "uuid": "3903d356-982e-5bc9-9337-a4aa4f07ce37", 00:21:37.372 "is_configured": true, 00:21:37.372 "data_offset": 2048, 00:21:37.372 "data_size": 63488 00:21:37.372 } 00:21:37.372 ] 00:21:37.372 }' 00:21:37.372 04:16:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:37.372 04:16:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.941 04:16:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:38.200 [2024-07-23 04:16:46.849223] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:38.201 [2024-07-23 04:16:46.849268] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:38.201 [2024-07-23 04:16:46.852514] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:38.201 [2024-07-23 04:16:46.852565] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:38.201 [2024-07-23 04:16:46.852665] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:38.201 [2024-07-23 04:16:46.852684] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041d80 name raid_bdev1, state offline 00:21:38.201 0 00:21:38.201 04:16:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2696871 00:21:38.201 04:16:46 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 2696871 ']' 00:21:38.201 04:16:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 2696871 00:21:38.201 04:16:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:21:38.201 04:16:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:38.201 04:16:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2696871 00:21:38.201 04:16:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:38.201 04:16:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:38.201 04:16:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2696871' 00:21:38.201 killing process with pid 2696871 00:21:38.201 04:16:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 2696871 00:21:38.201 [2024-07-23 04:16:46.925377] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:38.201 04:16:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 2696871 00:21:38.460 [2024-07-23 04:16:47.163384] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:40.366 04:16:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.6Cdaw6MUYo 00:21:40.366 04:16:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:21:40.366 04:16:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:21:40.366 04:16:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:21:40.366 04:16:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:21:40.366 04:16:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:40.366 
04:16:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:21:40.366 04:16:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:21:40.366 00:21:40.366 real 0m8.926s 00:21:40.366 user 0m12.389s 00:21:40.366 sys 0m1.475s 00:21:40.366 04:16:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:40.366 04:16:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.366 ************************************ 00:21:40.366 END TEST raid_write_error_test 00:21:40.366 ************************************ 00:21:40.366 04:16:49 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:21:40.366 04:16:49 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:21:40.366 04:16:49 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:21:40.367 04:16:49 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:21:40.367 04:16:49 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:21:40.367 04:16:49 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:40.367 04:16:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:40.367 ************************************ 00:21:40.367 START TEST raid_state_function_test 00:21:40.367 ************************************ 00:21:40.367 04:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 4 false 00:21:40.367 04:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:21:40.367 04:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:21:40.367 04:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:21:40.367 04:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:21:40.367 
04:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:21:40.367 04:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:40.367 04:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:21:40.367 04:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:40.367 04:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:40.367 04:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:21:40.367 04:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:40.367 04:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:40.367 04:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:21:40.367 04:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:40.367 04:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:40.367 04:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:21:40.367 04:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:40.367 04:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:40.367 04:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:40.367 04:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:21:40.367 04:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:21:40.367 04:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:21:40.367 04:16:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:21:40.367 04:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:21:40.367 04:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:21:40.367 04:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:21:40.367 04:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:21:40.367 04:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:21:40.367 04:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:21:40.367 04:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2698532 00:21:40.367 04:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2698532' 00:21:40.367 Process raid pid: 2698532 00:21:40.367 04:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:40.367 04:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2698532 /var/tmp/spdk-raid.sock 00:21:40.367 04:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 2698532 ']' 00:21:40.367 04:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:40.367 04:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:40.367 04:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:40.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:21:40.367 04:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:40.367 04:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.627 [2024-07-23 04:16:49.188664] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:21:40.627 [2024-07-23 04:16:49.188779] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:40.627 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:40.627 EAL: Requested device 0000:3d:01.0 cannot be used 00:21:40.627 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:40.627 EAL: Requested device 0000:3f:02.7 cannot be used 00:21:40.886 [2024-07-23 04:16:49.415640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.146 [2024-07-23 04:16:49.685616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.405 [2024-07-23 04:16:50.018190] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:41.405 [2024-07-23 04:16:50.018226] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:41.665 04:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:41.665 04:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:21:41.665 04:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 
'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:41.665 [2024-07-23 04:16:50.415422] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:41.665 [2024-07-23 04:16:50.415477] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:41.665 [2024-07-23 04:16:50.415492] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:41.665 [2024-07-23 04:16:50.415509] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:41.665 [2024-07-23 04:16:50.415520] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:41.665 [2024-07-23 04:16:50.415536] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:41.665 [2024-07-23 04:16:50.415548] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:41.665 [2024-07-23 04:16:50.415563] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:41.665 04:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:41.665 04:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:41.665 04:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:41.665 04:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:41.665 04:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:41.665 04:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:41.665 04:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:41.665 04:16:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:41.665 04:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:41.665 04:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:41.665 04:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:41.665 04:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:41.924 04:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:41.924 "name": "Existed_Raid", 00:21:41.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:41.924 "strip_size_kb": 64, 00:21:41.924 "state": "configuring", 00:21:41.924 "raid_level": "raid0", 00:21:41.924 "superblock": false, 00:21:41.924 "num_base_bdevs": 4, 00:21:41.924 "num_base_bdevs_discovered": 0, 00:21:41.924 "num_base_bdevs_operational": 4, 00:21:41.924 "base_bdevs_list": [ 00:21:41.924 { 00:21:41.924 "name": "BaseBdev1", 00:21:41.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:41.924 "is_configured": false, 00:21:41.924 "data_offset": 0, 00:21:41.924 "data_size": 0 00:21:41.924 }, 00:21:41.924 { 00:21:41.924 "name": "BaseBdev2", 00:21:41.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:41.924 "is_configured": false, 00:21:41.924 "data_offset": 0, 00:21:41.924 "data_size": 0 00:21:41.924 }, 00:21:41.924 { 00:21:41.924 "name": "BaseBdev3", 00:21:41.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:41.924 "is_configured": false, 00:21:41.924 "data_offset": 0, 00:21:41.924 "data_size": 0 00:21:41.924 }, 00:21:41.924 { 00:21:41.924 "name": "BaseBdev4", 00:21:41.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:41.924 "is_configured": false, 00:21:41.924 "data_offset": 0, 00:21:41.924 "data_size": 0 00:21:41.924 } 
00:21:41.924 ] 00:21:41.924 }' 00:21:41.924 04:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:41.924 04:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:42.492 04:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:42.751 [2024-07-23 04:16:51.434289] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:42.751 [2024-07-23 04:16:51.434331] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:21:42.751 04:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:43.010 [2024-07-23 04:16:51.658953] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:43.010 [2024-07-23 04:16:51.659004] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:43.010 [2024-07-23 04:16:51.659019] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:43.010 [2024-07-23 04:16:51.659043] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:43.010 [2024-07-23 04:16:51.659055] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:43.010 [2024-07-23 04:16:51.659071] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:43.010 [2024-07-23 04:16:51.659082] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:43.010 [2024-07-23 04:16:51.659098] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: 
base bdev BaseBdev4 doesn't exist now 00:21:43.010 04:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:43.270 [2024-07-23 04:16:51.939477] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:43.270 BaseBdev1 00:21:43.270 04:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:21:43.270 04:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:21:43.270 04:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:43.270 04:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:43.270 04:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:43.270 04:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:43.270 04:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:43.529 04:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:43.788 [ 00:21:43.789 { 00:21:43.789 "name": "BaseBdev1", 00:21:43.789 "aliases": [ 00:21:43.789 "b2a98209-4fb6-4b3d-9041-cd1e5cd3ee04" 00:21:43.789 ], 00:21:43.789 "product_name": "Malloc disk", 00:21:43.789 "block_size": 512, 00:21:43.789 "num_blocks": 65536, 00:21:43.789 "uuid": "b2a98209-4fb6-4b3d-9041-cd1e5cd3ee04", 00:21:43.789 "assigned_rate_limits": { 00:21:43.789 "rw_ios_per_sec": 0, 00:21:43.789 "rw_mbytes_per_sec": 0, 00:21:43.789 "r_mbytes_per_sec": 0, 00:21:43.789 
"w_mbytes_per_sec": 0 00:21:43.789 }, 00:21:43.789 "claimed": true, 00:21:43.789 "claim_type": "exclusive_write", 00:21:43.789 "zoned": false, 00:21:43.789 "supported_io_types": { 00:21:43.789 "read": true, 00:21:43.789 "write": true, 00:21:43.789 "unmap": true, 00:21:43.789 "flush": true, 00:21:43.789 "reset": true, 00:21:43.789 "nvme_admin": false, 00:21:43.789 "nvme_io": false, 00:21:43.789 "nvme_io_md": false, 00:21:43.789 "write_zeroes": true, 00:21:43.789 "zcopy": true, 00:21:43.789 "get_zone_info": false, 00:21:43.789 "zone_management": false, 00:21:43.789 "zone_append": false, 00:21:43.789 "compare": false, 00:21:43.789 "compare_and_write": false, 00:21:43.789 "abort": true, 00:21:43.789 "seek_hole": false, 00:21:43.789 "seek_data": false, 00:21:43.789 "copy": true, 00:21:43.789 "nvme_iov_md": false 00:21:43.789 }, 00:21:43.789 "memory_domains": [ 00:21:43.789 { 00:21:43.789 "dma_device_id": "system", 00:21:43.789 "dma_device_type": 1 00:21:43.789 }, 00:21:43.789 { 00:21:43.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:43.789 "dma_device_type": 2 00:21:43.789 } 00:21:43.789 ], 00:21:43.789 "driver_specific": {} 00:21:43.789 } 00:21:43.789 ] 00:21:43.789 04:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:43.789 04:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:43.789 04:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:43.789 04:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:43.789 04:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:43.789 04:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:43.789 04:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=4 00:21:43.789 04:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:43.789 04:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:43.789 04:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:43.789 04:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:43.789 04:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:43.789 04:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:44.048 04:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:44.048 "name": "Existed_Raid", 00:21:44.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.048 "strip_size_kb": 64, 00:21:44.048 "state": "configuring", 00:21:44.049 "raid_level": "raid0", 00:21:44.049 "superblock": false, 00:21:44.049 "num_base_bdevs": 4, 00:21:44.049 "num_base_bdevs_discovered": 1, 00:21:44.049 "num_base_bdevs_operational": 4, 00:21:44.049 "base_bdevs_list": [ 00:21:44.049 { 00:21:44.049 "name": "BaseBdev1", 00:21:44.049 "uuid": "b2a98209-4fb6-4b3d-9041-cd1e5cd3ee04", 00:21:44.049 "is_configured": true, 00:21:44.049 "data_offset": 0, 00:21:44.049 "data_size": 65536 00:21:44.049 }, 00:21:44.049 { 00:21:44.049 "name": "BaseBdev2", 00:21:44.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.049 "is_configured": false, 00:21:44.049 "data_offset": 0, 00:21:44.049 "data_size": 0 00:21:44.049 }, 00:21:44.049 { 00:21:44.049 "name": "BaseBdev3", 00:21:44.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.049 "is_configured": false, 00:21:44.049 "data_offset": 0, 00:21:44.049 "data_size": 0 00:21:44.049 }, 00:21:44.049 { 00:21:44.049 
"name": "BaseBdev4", 00:21:44.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.049 "is_configured": false, 00:21:44.049 "data_offset": 0, 00:21:44.049 "data_size": 0 00:21:44.049 } 00:21:44.049 ] 00:21:44.049 }' 00:21:44.049 04:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:44.049 04:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:44.619 04:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:44.619 [2024-07-23 04:16:53.391447] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:44.619 [2024-07-23 04:16:53.391504] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:21:44.913 04:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:44.913 [2024-07-23 04:16:53.564010] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:44.913 [2024-07-23 04:16:53.566341] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:44.913 [2024-07-23 04:16:53.566384] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:44.913 [2024-07-23 04:16:53.566398] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:44.913 [2024-07-23 04:16:53.566415] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:44.913 [2024-07-23 04:16:53.566427] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:44.913 [2024-07-23 04:16:53.566446] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:44.913 04:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:21:44.913 04:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:44.913 04:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:44.913 04:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:44.913 04:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:44.913 04:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:44.913 04:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:44.913 04:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:44.913 04:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:44.913 04:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:44.913 04:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:44.913 04:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:44.913 04:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:44.913 04:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:45.172 04:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:45.172 "name": "Existed_Raid", 00:21:45.172 "uuid": "00000000-0000-0000-0000-000000000000", 
00:21:45.172 "strip_size_kb": 64, 00:21:45.172 "state": "configuring", 00:21:45.172 "raid_level": "raid0", 00:21:45.172 "superblock": false, 00:21:45.172 "num_base_bdevs": 4, 00:21:45.172 "num_base_bdevs_discovered": 1, 00:21:45.172 "num_base_bdevs_operational": 4, 00:21:45.172 "base_bdevs_list": [ 00:21:45.172 { 00:21:45.172 "name": "BaseBdev1", 00:21:45.172 "uuid": "b2a98209-4fb6-4b3d-9041-cd1e5cd3ee04", 00:21:45.172 "is_configured": true, 00:21:45.172 "data_offset": 0, 00:21:45.172 "data_size": 65536 00:21:45.172 }, 00:21:45.172 { 00:21:45.172 "name": "BaseBdev2", 00:21:45.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.172 "is_configured": false, 00:21:45.172 "data_offset": 0, 00:21:45.172 "data_size": 0 00:21:45.172 }, 00:21:45.172 { 00:21:45.172 "name": "BaseBdev3", 00:21:45.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.172 "is_configured": false, 00:21:45.172 "data_offset": 0, 00:21:45.172 "data_size": 0 00:21:45.172 }, 00:21:45.172 { 00:21:45.172 "name": "BaseBdev4", 00:21:45.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.172 "is_configured": false, 00:21:45.172 "data_offset": 0, 00:21:45.172 "data_size": 0 00:21:45.172 } 00:21:45.172 ] 00:21:45.172 }' 00:21:45.172 04:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:45.172 04:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.740 04:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:45.740 [2024-07-23 04:16:54.482289] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:45.740 BaseBdev2 00:21:45.740 04:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:21:45.740 04:16:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local 
bdev_name=BaseBdev2 00:21:45.740 04:16:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:45.740 04:16:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:45.740 04:16:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:45.740 04:16:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:45.740 04:16:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:45.999 04:16:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:46.258 [ 00:21:46.258 { 00:21:46.258 "name": "BaseBdev2", 00:21:46.258 "aliases": [ 00:21:46.258 "563ea8bc-d141-4e7a-a607-ebd916b593fc" 00:21:46.258 ], 00:21:46.258 "product_name": "Malloc disk", 00:21:46.258 "block_size": 512, 00:21:46.258 "num_blocks": 65536, 00:21:46.258 "uuid": "563ea8bc-d141-4e7a-a607-ebd916b593fc", 00:21:46.258 "assigned_rate_limits": { 00:21:46.258 "rw_ios_per_sec": 0, 00:21:46.258 "rw_mbytes_per_sec": 0, 00:21:46.258 "r_mbytes_per_sec": 0, 00:21:46.258 "w_mbytes_per_sec": 0 00:21:46.258 }, 00:21:46.258 "claimed": true, 00:21:46.258 "claim_type": "exclusive_write", 00:21:46.258 "zoned": false, 00:21:46.258 "supported_io_types": { 00:21:46.258 "read": true, 00:21:46.258 "write": true, 00:21:46.258 "unmap": true, 00:21:46.258 "flush": true, 00:21:46.258 "reset": true, 00:21:46.258 "nvme_admin": false, 00:21:46.258 "nvme_io": false, 00:21:46.258 "nvme_io_md": false, 00:21:46.258 "write_zeroes": true, 00:21:46.258 "zcopy": true, 00:21:46.258 "get_zone_info": false, 00:21:46.258 "zone_management": false, 00:21:46.258 "zone_append": false, 00:21:46.258 "compare": false, 
00:21:46.258 "compare_and_write": false, 00:21:46.258 "abort": true, 00:21:46.258 "seek_hole": false, 00:21:46.258 "seek_data": false, 00:21:46.258 "copy": true, 00:21:46.258 "nvme_iov_md": false 00:21:46.258 }, 00:21:46.258 "memory_domains": [ 00:21:46.258 { 00:21:46.258 "dma_device_id": "system", 00:21:46.258 "dma_device_type": 1 00:21:46.258 }, 00:21:46.258 { 00:21:46.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:46.258 "dma_device_type": 2 00:21:46.258 } 00:21:46.258 ], 00:21:46.258 "driver_specific": {} 00:21:46.258 } 00:21:46.258 ] 00:21:46.258 04:16:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:46.258 04:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:46.258 04:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:46.258 04:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:46.258 04:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:46.258 04:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:46.258 04:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:46.258 04:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:46.258 04:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:46.258 04:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:46.258 04:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:46.258 04:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:46.258 04:16:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:21:46.258 04:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:46.258 04:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:46.518 04:16:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:46.518 "name": "Existed_Raid", 00:21:46.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.518 "strip_size_kb": 64, 00:21:46.518 "state": "configuring", 00:21:46.518 "raid_level": "raid0", 00:21:46.518 "superblock": false, 00:21:46.518 "num_base_bdevs": 4, 00:21:46.518 "num_base_bdevs_discovered": 2, 00:21:46.518 "num_base_bdevs_operational": 4, 00:21:46.518 "base_bdevs_list": [ 00:21:46.518 { 00:21:46.518 "name": "BaseBdev1", 00:21:46.518 "uuid": "b2a98209-4fb6-4b3d-9041-cd1e5cd3ee04", 00:21:46.518 "is_configured": true, 00:21:46.518 "data_offset": 0, 00:21:46.518 "data_size": 65536 00:21:46.518 }, 00:21:46.518 { 00:21:46.518 "name": "BaseBdev2", 00:21:46.518 "uuid": "563ea8bc-d141-4e7a-a607-ebd916b593fc", 00:21:46.518 "is_configured": true, 00:21:46.518 "data_offset": 0, 00:21:46.518 "data_size": 65536 00:21:46.518 }, 00:21:46.518 { 00:21:46.518 "name": "BaseBdev3", 00:21:46.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.518 "is_configured": false, 00:21:46.518 "data_offset": 0, 00:21:46.518 "data_size": 0 00:21:46.518 }, 00:21:46.518 { 00:21:46.518 "name": "BaseBdev4", 00:21:46.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.518 "is_configured": false, 00:21:46.518 "data_offset": 0, 00:21:46.518 "data_size": 0 00:21:46.518 } 00:21:46.518 ] 00:21:46.518 }' 00:21:46.518 04:16:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:46.518 04:16:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:21:47.085 04:16:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:47.344 [2024-07-23 04:16:56.024574] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:47.344 BaseBdev3 00:21:47.344 04:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:21:47.344 04:16:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:21:47.344 04:16:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:47.344 04:16:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:47.344 04:16:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:47.344 04:16:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:47.344 04:16:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:47.602 04:16:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:47.861 [ 00:21:47.861 { 00:21:47.861 "name": "BaseBdev3", 00:21:47.861 "aliases": [ 00:21:47.861 "74533171-43ca-4629-b0dc-d74bb5da0417" 00:21:47.861 ], 00:21:47.861 "product_name": "Malloc disk", 00:21:47.861 "block_size": 512, 00:21:47.861 "num_blocks": 65536, 00:21:47.861 "uuid": "74533171-43ca-4629-b0dc-d74bb5da0417", 00:21:47.861 "assigned_rate_limits": { 00:21:47.861 "rw_ios_per_sec": 0, 00:21:47.861 "rw_mbytes_per_sec": 0, 00:21:47.861 "r_mbytes_per_sec": 0, 00:21:47.861 "w_mbytes_per_sec": 0 00:21:47.861 }, 
00:21:47.861 "claimed": true, 00:21:47.861 "claim_type": "exclusive_write", 00:21:47.861 "zoned": false, 00:21:47.861 "supported_io_types": { 00:21:47.861 "read": true, 00:21:47.861 "write": true, 00:21:47.861 "unmap": true, 00:21:47.861 "flush": true, 00:21:47.861 "reset": true, 00:21:47.861 "nvme_admin": false, 00:21:47.861 "nvme_io": false, 00:21:47.861 "nvme_io_md": false, 00:21:47.861 "write_zeroes": true, 00:21:47.861 "zcopy": true, 00:21:47.861 "get_zone_info": false, 00:21:47.861 "zone_management": false, 00:21:47.861 "zone_append": false, 00:21:47.861 "compare": false, 00:21:47.861 "compare_and_write": false, 00:21:47.861 "abort": true, 00:21:47.861 "seek_hole": false, 00:21:47.861 "seek_data": false, 00:21:47.861 "copy": true, 00:21:47.861 "nvme_iov_md": false 00:21:47.861 }, 00:21:47.861 "memory_domains": [ 00:21:47.861 { 00:21:47.861 "dma_device_id": "system", 00:21:47.861 "dma_device_type": 1 00:21:47.861 }, 00:21:47.861 { 00:21:47.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:47.861 "dma_device_type": 2 00:21:47.861 } 00:21:47.861 ], 00:21:47.861 "driver_specific": {} 00:21:47.861 } 00:21:47.861 ] 00:21:47.861 04:16:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:47.861 04:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:47.861 04:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:47.861 04:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:47.861 04:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:47.861 04:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:47.861 04:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:47.861 04:16:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:47.861 04:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:47.861 04:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:47.861 04:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:47.861 04:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:47.861 04:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:47.861 04:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:47.861 04:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:48.121 04:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:48.121 "name": "Existed_Raid", 00:21:48.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.121 "strip_size_kb": 64, 00:21:48.121 "state": "configuring", 00:21:48.121 "raid_level": "raid0", 00:21:48.121 "superblock": false, 00:21:48.121 "num_base_bdevs": 4, 00:21:48.121 "num_base_bdevs_discovered": 3, 00:21:48.121 "num_base_bdevs_operational": 4, 00:21:48.121 "base_bdevs_list": [ 00:21:48.121 { 00:21:48.121 "name": "BaseBdev1", 00:21:48.121 "uuid": "b2a98209-4fb6-4b3d-9041-cd1e5cd3ee04", 00:21:48.121 "is_configured": true, 00:21:48.121 "data_offset": 0, 00:21:48.121 "data_size": 65536 00:21:48.121 }, 00:21:48.121 { 00:21:48.121 "name": "BaseBdev2", 00:21:48.121 "uuid": "563ea8bc-d141-4e7a-a607-ebd916b593fc", 00:21:48.121 "is_configured": true, 00:21:48.121 "data_offset": 0, 00:21:48.121 "data_size": 65536 00:21:48.121 }, 00:21:48.121 { 00:21:48.121 "name": "BaseBdev3", 00:21:48.121 "uuid": 
"74533171-43ca-4629-b0dc-d74bb5da0417", 00:21:48.121 "is_configured": true, 00:21:48.121 "data_offset": 0, 00:21:48.121 "data_size": 65536 00:21:48.121 }, 00:21:48.121 { 00:21:48.121 "name": "BaseBdev4", 00:21:48.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.121 "is_configured": false, 00:21:48.121 "data_offset": 0, 00:21:48.121 "data_size": 0 00:21:48.121 } 00:21:48.121 ] 00:21:48.121 }' 00:21:48.121 04:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:48.121 04:16:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.688 04:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:48.947 [2024-07-23 04:16:57.575439] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:48.947 [2024-07-23 04:16:57.575486] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:21:48.947 [2024-07-23 04:16:57.575504] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:21:48.947 [2024-07-23 04:16:57.575834] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:21:48.947 [2024-07-23 04:16:57.576063] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:21:48.947 [2024-07-23 04:16:57.576081] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:21:48.947 [2024-07-23 04:16:57.576402] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:48.947 BaseBdev4 00:21:48.947 04:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:21:48.947 04:16:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:21:48.947 
04:16:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:48.947 04:16:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:48.947 04:16:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:48.947 04:16:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:48.947 04:16:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:49.205 04:16:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:49.490 [ 00:21:49.491 { 00:21:49.491 "name": "BaseBdev4", 00:21:49.491 "aliases": [ 00:21:49.491 "c142c2e6-fba5-449f-b193-1ee83702a2e5" 00:21:49.491 ], 00:21:49.491 "product_name": "Malloc disk", 00:21:49.491 "block_size": 512, 00:21:49.491 "num_blocks": 65536, 00:21:49.491 "uuid": "c142c2e6-fba5-449f-b193-1ee83702a2e5", 00:21:49.491 "assigned_rate_limits": { 00:21:49.491 "rw_ios_per_sec": 0, 00:21:49.491 "rw_mbytes_per_sec": 0, 00:21:49.491 "r_mbytes_per_sec": 0, 00:21:49.491 "w_mbytes_per_sec": 0 00:21:49.491 }, 00:21:49.491 "claimed": true, 00:21:49.491 "claim_type": "exclusive_write", 00:21:49.491 "zoned": false, 00:21:49.491 "supported_io_types": { 00:21:49.491 "read": true, 00:21:49.491 "write": true, 00:21:49.491 "unmap": true, 00:21:49.491 "flush": true, 00:21:49.491 "reset": true, 00:21:49.491 "nvme_admin": false, 00:21:49.491 "nvme_io": false, 00:21:49.491 "nvme_io_md": false, 00:21:49.491 "write_zeroes": true, 00:21:49.491 "zcopy": true, 00:21:49.491 "get_zone_info": false, 00:21:49.491 "zone_management": false, 00:21:49.491 "zone_append": false, 00:21:49.491 "compare": false, 00:21:49.491 "compare_and_write": 
false, 00:21:49.491 "abort": true, 00:21:49.491 "seek_hole": false, 00:21:49.491 "seek_data": false, 00:21:49.491 "copy": true, 00:21:49.491 "nvme_iov_md": false 00:21:49.491 }, 00:21:49.491 "memory_domains": [ 00:21:49.491 { 00:21:49.491 "dma_device_id": "system", 00:21:49.491 "dma_device_type": 1 00:21:49.491 }, 00:21:49.491 { 00:21:49.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:49.491 "dma_device_type": 2 00:21:49.491 } 00:21:49.491 ], 00:21:49.491 "driver_specific": {} 00:21:49.491 } 00:21:49.491 ] 00:21:49.491 04:16:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:49.491 04:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:49.491 04:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:49.491 04:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:21:49.491 04:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:49.491 04:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:49.491 04:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:49.491 04:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:49.491 04:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:49.491 04:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:49.491 04:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:49.491 04:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:49.491 04:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:49.491 04:16:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:49.491 04:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:49.750 04:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:49.750 "name": "Existed_Raid", 00:21:49.750 "uuid": "af4c9bb3-f4b4-4033-9b12-6261a0ac3c36", 00:21:49.750 "strip_size_kb": 64, 00:21:49.750 "state": "online", 00:21:49.750 "raid_level": "raid0", 00:21:49.750 "superblock": false, 00:21:49.750 "num_base_bdevs": 4, 00:21:49.750 "num_base_bdevs_discovered": 4, 00:21:49.750 "num_base_bdevs_operational": 4, 00:21:49.750 "base_bdevs_list": [ 00:21:49.750 { 00:21:49.750 "name": "BaseBdev1", 00:21:49.750 "uuid": "b2a98209-4fb6-4b3d-9041-cd1e5cd3ee04", 00:21:49.750 "is_configured": true, 00:21:49.750 "data_offset": 0, 00:21:49.750 "data_size": 65536 00:21:49.750 }, 00:21:49.750 { 00:21:49.750 "name": "BaseBdev2", 00:21:49.750 "uuid": "563ea8bc-d141-4e7a-a607-ebd916b593fc", 00:21:49.750 "is_configured": true, 00:21:49.750 "data_offset": 0, 00:21:49.750 "data_size": 65536 00:21:49.750 }, 00:21:49.750 { 00:21:49.750 "name": "BaseBdev3", 00:21:49.750 "uuid": "74533171-43ca-4629-b0dc-d74bb5da0417", 00:21:49.750 "is_configured": true, 00:21:49.750 "data_offset": 0, 00:21:49.750 "data_size": 65536 00:21:49.750 }, 00:21:49.750 { 00:21:49.750 "name": "BaseBdev4", 00:21:49.750 "uuid": "c142c2e6-fba5-449f-b193-1ee83702a2e5", 00:21:49.750 "is_configured": true, 00:21:49.750 "data_offset": 0, 00:21:49.750 "data_size": 65536 00:21:49.750 } 00:21:49.750 ] 00:21:49.750 }' 00:21:49.750 04:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:49.750 04:16:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.318 04:16:58 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:21:50.318 04:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:50.318 04:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:50.318 04:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:50.318 04:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:50.318 04:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:50.318 04:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:50.318 04:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:50.318 [2024-07-23 04:16:59.067965] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:50.318 04:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:50.318 "name": "Existed_Raid", 00:21:50.318 "aliases": [ 00:21:50.318 "af4c9bb3-f4b4-4033-9b12-6261a0ac3c36" 00:21:50.318 ], 00:21:50.318 "product_name": "Raid Volume", 00:21:50.318 "block_size": 512, 00:21:50.318 "num_blocks": 262144, 00:21:50.318 "uuid": "af4c9bb3-f4b4-4033-9b12-6261a0ac3c36", 00:21:50.318 "assigned_rate_limits": { 00:21:50.318 "rw_ios_per_sec": 0, 00:21:50.318 "rw_mbytes_per_sec": 0, 00:21:50.318 "r_mbytes_per_sec": 0, 00:21:50.318 "w_mbytes_per_sec": 0 00:21:50.318 }, 00:21:50.318 "claimed": false, 00:21:50.318 "zoned": false, 00:21:50.318 "supported_io_types": { 00:21:50.318 "read": true, 00:21:50.318 "write": true, 00:21:50.318 "unmap": true, 00:21:50.318 "flush": true, 00:21:50.318 "reset": true, 00:21:50.318 "nvme_admin": false, 00:21:50.318 "nvme_io": false, 00:21:50.318 "nvme_io_md": false, 00:21:50.318 
"write_zeroes": true, 00:21:50.318 "zcopy": false, 00:21:50.318 "get_zone_info": false, 00:21:50.318 "zone_management": false, 00:21:50.318 "zone_append": false, 00:21:50.318 "compare": false, 00:21:50.318 "compare_and_write": false, 00:21:50.318 "abort": false, 00:21:50.318 "seek_hole": false, 00:21:50.318 "seek_data": false, 00:21:50.318 "copy": false, 00:21:50.318 "nvme_iov_md": false 00:21:50.318 }, 00:21:50.318 "memory_domains": [ 00:21:50.318 { 00:21:50.318 "dma_device_id": "system", 00:21:50.318 "dma_device_type": 1 00:21:50.318 }, 00:21:50.318 { 00:21:50.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:50.318 "dma_device_type": 2 00:21:50.318 }, 00:21:50.318 { 00:21:50.318 "dma_device_id": "system", 00:21:50.318 "dma_device_type": 1 00:21:50.318 }, 00:21:50.318 { 00:21:50.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:50.318 "dma_device_type": 2 00:21:50.318 }, 00:21:50.318 { 00:21:50.318 "dma_device_id": "system", 00:21:50.318 "dma_device_type": 1 00:21:50.318 }, 00:21:50.318 { 00:21:50.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:50.318 "dma_device_type": 2 00:21:50.318 }, 00:21:50.318 { 00:21:50.318 "dma_device_id": "system", 00:21:50.318 "dma_device_type": 1 00:21:50.318 }, 00:21:50.318 { 00:21:50.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:50.318 "dma_device_type": 2 00:21:50.318 } 00:21:50.318 ], 00:21:50.318 "driver_specific": { 00:21:50.318 "raid": { 00:21:50.318 "uuid": "af4c9bb3-f4b4-4033-9b12-6261a0ac3c36", 00:21:50.318 "strip_size_kb": 64, 00:21:50.318 "state": "online", 00:21:50.318 "raid_level": "raid0", 00:21:50.318 "superblock": false, 00:21:50.318 "num_base_bdevs": 4, 00:21:50.318 "num_base_bdevs_discovered": 4, 00:21:50.318 "num_base_bdevs_operational": 4, 00:21:50.318 "base_bdevs_list": [ 00:21:50.318 { 00:21:50.318 "name": "BaseBdev1", 00:21:50.318 "uuid": "b2a98209-4fb6-4b3d-9041-cd1e5cd3ee04", 00:21:50.318 "is_configured": true, 00:21:50.318 "data_offset": 0, 00:21:50.318 "data_size": 65536 00:21:50.318 }, 
00:21:50.318 { 00:21:50.318 "name": "BaseBdev2", 00:21:50.318 "uuid": "563ea8bc-d141-4e7a-a607-ebd916b593fc", 00:21:50.318 "is_configured": true, 00:21:50.318 "data_offset": 0, 00:21:50.318 "data_size": 65536 00:21:50.318 }, 00:21:50.318 { 00:21:50.318 "name": "BaseBdev3", 00:21:50.318 "uuid": "74533171-43ca-4629-b0dc-d74bb5da0417", 00:21:50.318 "is_configured": true, 00:21:50.318 "data_offset": 0, 00:21:50.318 "data_size": 65536 00:21:50.318 }, 00:21:50.318 { 00:21:50.318 "name": "BaseBdev4", 00:21:50.318 "uuid": "c142c2e6-fba5-449f-b193-1ee83702a2e5", 00:21:50.318 "is_configured": true, 00:21:50.318 "data_offset": 0, 00:21:50.318 "data_size": 65536 00:21:50.318 } 00:21:50.318 ] 00:21:50.318 } 00:21:50.318 } 00:21:50.318 }' 00:21:50.318 04:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:50.577 04:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:21:50.577 BaseBdev2 00:21:50.577 BaseBdev3 00:21:50.577 BaseBdev4' 00:21:50.577 04:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:50.577 04:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:21:50.577 04:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:50.577 04:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:50.577 "name": "BaseBdev1", 00:21:50.577 "aliases": [ 00:21:50.577 "b2a98209-4fb6-4b3d-9041-cd1e5cd3ee04" 00:21:50.577 ], 00:21:50.577 "product_name": "Malloc disk", 00:21:50.577 "block_size": 512, 00:21:50.577 "num_blocks": 65536, 00:21:50.577 "uuid": "b2a98209-4fb6-4b3d-9041-cd1e5cd3ee04", 00:21:50.577 "assigned_rate_limits": { 00:21:50.577 "rw_ios_per_sec": 0, 00:21:50.577 
"rw_mbytes_per_sec": 0, 00:21:50.577 "r_mbytes_per_sec": 0, 00:21:50.577 "w_mbytes_per_sec": 0 00:21:50.577 }, 00:21:50.577 "claimed": true, 00:21:50.577 "claim_type": "exclusive_write", 00:21:50.577 "zoned": false, 00:21:50.577 "supported_io_types": { 00:21:50.577 "read": true, 00:21:50.577 "write": true, 00:21:50.577 "unmap": true, 00:21:50.577 "flush": true, 00:21:50.577 "reset": true, 00:21:50.577 "nvme_admin": false, 00:21:50.577 "nvme_io": false, 00:21:50.577 "nvme_io_md": false, 00:21:50.577 "write_zeroes": true, 00:21:50.577 "zcopy": true, 00:21:50.577 "get_zone_info": false, 00:21:50.577 "zone_management": false, 00:21:50.577 "zone_append": false, 00:21:50.577 "compare": false, 00:21:50.577 "compare_and_write": false, 00:21:50.577 "abort": true, 00:21:50.577 "seek_hole": false, 00:21:50.577 "seek_data": false, 00:21:50.577 "copy": true, 00:21:50.577 "nvme_iov_md": false 00:21:50.577 }, 00:21:50.577 "memory_domains": [ 00:21:50.577 { 00:21:50.577 "dma_device_id": "system", 00:21:50.577 "dma_device_type": 1 00:21:50.577 }, 00:21:50.577 { 00:21:50.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:50.577 "dma_device_type": 2 00:21:50.577 } 00:21:50.577 ], 00:21:50.577 "driver_specific": {} 00:21:50.577 }' 00:21:50.835 04:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:50.835 04:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:50.835 04:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:50.835 04:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:50.835 04:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:50.835 04:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:50.835 04:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:50.836 04:16:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:50.836 04:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:51.094 04:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:51.094 04:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:51.094 04:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:51.094 04:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:51.094 04:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:51.094 04:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:51.352 04:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:51.352 "name": "BaseBdev2", 00:21:51.352 "aliases": [ 00:21:51.352 "563ea8bc-d141-4e7a-a607-ebd916b593fc" 00:21:51.352 ], 00:21:51.352 "product_name": "Malloc disk", 00:21:51.352 "block_size": 512, 00:21:51.352 "num_blocks": 65536, 00:21:51.352 "uuid": "563ea8bc-d141-4e7a-a607-ebd916b593fc", 00:21:51.352 "assigned_rate_limits": { 00:21:51.352 "rw_ios_per_sec": 0, 00:21:51.352 "rw_mbytes_per_sec": 0, 00:21:51.352 "r_mbytes_per_sec": 0, 00:21:51.352 "w_mbytes_per_sec": 0 00:21:51.352 }, 00:21:51.352 "claimed": true, 00:21:51.352 "claim_type": "exclusive_write", 00:21:51.352 "zoned": false, 00:21:51.352 "supported_io_types": { 00:21:51.352 "read": true, 00:21:51.352 "write": true, 00:21:51.352 "unmap": true, 00:21:51.352 "flush": true, 00:21:51.352 "reset": true, 00:21:51.352 "nvme_admin": false, 00:21:51.352 "nvme_io": false, 00:21:51.352 "nvme_io_md": false, 00:21:51.352 "write_zeroes": true, 00:21:51.352 "zcopy": true, 00:21:51.352 "get_zone_info": false, 
00:21:51.352 "zone_management": false, 00:21:51.352 "zone_append": false, 00:21:51.352 "compare": false, 00:21:51.352 "compare_and_write": false, 00:21:51.352 "abort": true, 00:21:51.352 "seek_hole": false, 00:21:51.352 "seek_data": false, 00:21:51.352 "copy": true, 00:21:51.352 "nvme_iov_md": false 00:21:51.352 }, 00:21:51.352 "memory_domains": [ 00:21:51.352 { 00:21:51.352 "dma_device_id": "system", 00:21:51.352 "dma_device_type": 1 00:21:51.352 }, 00:21:51.352 { 00:21:51.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:51.352 "dma_device_type": 2 00:21:51.352 } 00:21:51.352 ], 00:21:51.352 "driver_specific": {} 00:21:51.352 }' 00:21:51.352 04:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:51.352 04:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:51.352 04:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:51.352 04:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:51.352 04:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:51.352 04:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:51.352 04:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:51.611 04:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:51.611 04:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:51.611 04:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:51.611 04:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:51.611 04:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:51.611 04:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 
00:21:51.611 04:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:51.611 04:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:51.869 04:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:51.869 "name": "BaseBdev3", 00:21:51.869 "aliases": [ 00:21:51.869 "74533171-43ca-4629-b0dc-d74bb5da0417" 00:21:51.869 ], 00:21:51.869 "product_name": "Malloc disk", 00:21:51.869 "block_size": 512, 00:21:51.869 "num_blocks": 65536, 00:21:51.869 "uuid": "74533171-43ca-4629-b0dc-d74bb5da0417", 00:21:51.869 "assigned_rate_limits": { 00:21:51.869 "rw_ios_per_sec": 0, 00:21:51.869 "rw_mbytes_per_sec": 0, 00:21:51.869 "r_mbytes_per_sec": 0, 00:21:51.869 "w_mbytes_per_sec": 0 00:21:51.869 }, 00:21:51.869 "claimed": true, 00:21:51.869 "claim_type": "exclusive_write", 00:21:51.869 "zoned": false, 00:21:51.869 "supported_io_types": { 00:21:51.869 "read": true, 00:21:51.869 "write": true, 00:21:51.869 "unmap": true, 00:21:51.869 "flush": true, 00:21:51.869 "reset": true, 00:21:51.869 "nvme_admin": false, 00:21:51.869 "nvme_io": false, 00:21:51.869 "nvme_io_md": false, 00:21:51.869 "write_zeroes": true, 00:21:51.869 "zcopy": true, 00:21:51.869 "get_zone_info": false, 00:21:51.869 "zone_management": false, 00:21:51.869 "zone_append": false, 00:21:51.869 "compare": false, 00:21:51.869 "compare_and_write": false, 00:21:51.869 "abort": true, 00:21:51.869 "seek_hole": false, 00:21:51.869 "seek_data": false, 00:21:51.869 "copy": true, 00:21:51.869 "nvme_iov_md": false 00:21:51.869 }, 00:21:51.869 "memory_domains": [ 00:21:51.869 { 00:21:51.869 "dma_device_id": "system", 00:21:51.869 "dma_device_type": 1 00:21:51.869 }, 00:21:51.869 { 00:21:51.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:51.869 "dma_device_type": 2 00:21:51.869 } 00:21:51.869 ], 00:21:51.869 
"driver_specific": {} 00:21:51.869 }' 00:21:51.869 04:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:51.869 04:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:51.869 04:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:51.869 04:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:51.869 04:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:52.128 04:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:52.128 04:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:52.128 04:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:52.128 04:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:52.128 04:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:52.128 04:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:52.128 04:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:52.128 04:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:52.128 04:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:21:52.128 04:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:52.386 04:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:52.386 "name": "BaseBdev4", 00:21:52.386 "aliases": [ 00:21:52.386 "c142c2e6-fba5-449f-b193-1ee83702a2e5" 00:21:52.386 ], 00:21:52.386 "product_name": "Malloc disk", 00:21:52.386 "block_size": 512, 
00:21:52.386 "num_blocks": 65536, 00:21:52.386 "uuid": "c142c2e6-fba5-449f-b193-1ee83702a2e5", 00:21:52.386 "assigned_rate_limits": { 00:21:52.386 "rw_ios_per_sec": 0, 00:21:52.386 "rw_mbytes_per_sec": 0, 00:21:52.386 "r_mbytes_per_sec": 0, 00:21:52.386 "w_mbytes_per_sec": 0 00:21:52.386 }, 00:21:52.386 "claimed": true, 00:21:52.386 "claim_type": "exclusive_write", 00:21:52.386 "zoned": false, 00:21:52.386 "supported_io_types": { 00:21:52.386 "read": true, 00:21:52.386 "write": true, 00:21:52.386 "unmap": true, 00:21:52.386 "flush": true, 00:21:52.386 "reset": true, 00:21:52.386 "nvme_admin": false, 00:21:52.386 "nvme_io": false, 00:21:52.386 "nvme_io_md": false, 00:21:52.386 "write_zeroes": true, 00:21:52.386 "zcopy": true, 00:21:52.386 "get_zone_info": false, 00:21:52.386 "zone_management": false, 00:21:52.386 "zone_append": false, 00:21:52.386 "compare": false, 00:21:52.386 "compare_and_write": false, 00:21:52.386 "abort": true, 00:21:52.386 "seek_hole": false, 00:21:52.386 "seek_data": false, 00:21:52.386 "copy": true, 00:21:52.386 "nvme_iov_md": false 00:21:52.386 }, 00:21:52.386 "memory_domains": [ 00:21:52.386 { 00:21:52.386 "dma_device_id": "system", 00:21:52.386 "dma_device_type": 1 00:21:52.386 }, 00:21:52.386 { 00:21:52.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:52.386 "dma_device_type": 2 00:21:52.386 } 00:21:52.386 ], 00:21:52.386 "driver_specific": {} 00:21:52.386 }' 00:21:52.386 04:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:52.386 04:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:52.386 04:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:52.386 04:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:52.645 04:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:52.645 04:17:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:52.645 04:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:52.645 04:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:52.645 04:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:52.645 04:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:52.645 04:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:52.645 04:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:52.645 04:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:52.904 [2024-07-23 04:17:01.578443] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:52.904 [2024-07-23 04:17:01.578479] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:52.904 [2024-07-23 04:17:01.578538] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:52.904 04:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:21:52.904 04:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:21:52.904 04:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:52.904 04:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:21:52.904 04:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:21:52.904 04:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:21:52.904 04:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:21:52.904 04:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:21:52.904 04:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:52.904 04:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:52.904 04:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:52.904 04:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:52.904 04:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:52.904 04:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:52.904 04:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:52.904 04:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:52.904 04:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:53.164 04:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:53.164 "name": "Existed_Raid", 00:21:53.164 "uuid": "af4c9bb3-f4b4-4033-9b12-6261a0ac3c36", 00:21:53.164 "strip_size_kb": 64, 00:21:53.164 "state": "offline", 00:21:53.164 "raid_level": "raid0", 00:21:53.164 "superblock": false, 00:21:53.164 "num_base_bdevs": 4, 00:21:53.164 "num_base_bdevs_discovered": 3, 00:21:53.164 "num_base_bdevs_operational": 3, 00:21:53.164 "base_bdevs_list": [ 00:21:53.164 { 00:21:53.164 "name": null, 00:21:53.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.164 "is_configured": false, 00:21:53.164 "data_offset": 0, 00:21:53.164 "data_size": 65536 00:21:53.164 }, 00:21:53.164 { 00:21:53.164 
"name": "BaseBdev2", 00:21:53.164 "uuid": "563ea8bc-d141-4e7a-a607-ebd916b593fc", 00:21:53.164 "is_configured": true, 00:21:53.164 "data_offset": 0, 00:21:53.164 "data_size": 65536 00:21:53.164 }, 00:21:53.164 { 00:21:53.164 "name": "BaseBdev3", 00:21:53.164 "uuid": "74533171-43ca-4629-b0dc-d74bb5da0417", 00:21:53.164 "is_configured": true, 00:21:53.164 "data_offset": 0, 00:21:53.164 "data_size": 65536 00:21:53.164 }, 00:21:53.164 { 00:21:53.164 "name": "BaseBdev4", 00:21:53.164 "uuid": "c142c2e6-fba5-449f-b193-1ee83702a2e5", 00:21:53.164 "is_configured": true, 00:21:53.164 "data_offset": 0, 00:21:53.164 "data_size": 65536 00:21:53.164 } 00:21:53.164 ] 00:21:53.164 }' 00:21:53.164 04:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:53.164 04:17:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.732 04:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:21:53.732 04:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:53.732 04:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:53.732 04:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:53.990 04:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:53.990 04:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:53.990 04:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:54.248 [2024-07-23 04:17:02.873842] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:54.248 04:17:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:54.248 04:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:54.248 04:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:54.248 04:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:54.508 04:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:54.508 04:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:54.508 04:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:21:54.768 [2024-07-23 04:17:03.460824] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:55.026 04:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:55.026 04:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:55.026 04:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:55.026 04:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:55.285 04:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:55.285 04:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:55.285 04:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete 
BaseBdev4 00:21:55.285 [2024-07-23 04:17:04.056013] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:55.285 [2024-07-23 04:17:04.056071] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:21:55.544 04:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:55.544 04:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:55.544 04:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:55.544 04:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:21:55.803 04:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:21:55.803 04:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:21:55.803 04:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:21:55.803 04:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:21:55.803 04:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:55.803 04:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:56.063 BaseBdev2 00:21:56.063 04:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:21:56.063 04:17:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:21:56.063 04:17:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:56.063 04:17:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local 
i 00:21:56.063 04:17:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:56.063 04:17:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:56.063 04:17:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:56.322 04:17:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:56.588 [ 00:21:56.588 { 00:21:56.588 "name": "BaseBdev2", 00:21:56.588 "aliases": [ 00:21:56.588 "45d60ec7-8200-4e7b-aa84-ff99837005ee" 00:21:56.588 ], 00:21:56.588 "product_name": "Malloc disk", 00:21:56.588 "block_size": 512, 00:21:56.588 "num_blocks": 65536, 00:21:56.588 "uuid": "45d60ec7-8200-4e7b-aa84-ff99837005ee", 00:21:56.588 "assigned_rate_limits": { 00:21:56.589 "rw_ios_per_sec": 0, 00:21:56.589 "rw_mbytes_per_sec": 0, 00:21:56.589 "r_mbytes_per_sec": 0, 00:21:56.589 "w_mbytes_per_sec": 0 00:21:56.589 }, 00:21:56.589 "claimed": false, 00:21:56.589 "zoned": false, 00:21:56.589 "supported_io_types": { 00:21:56.589 "read": true, 00:21:56.589 "write": true, 00:21:56.589 "unmap": true, 00:21:56.589 "flush": true, 00:21:56.589 "reset": true, 00:21:56.589 "nvme_admin": false, 00:21:56.589 "nvme_io": false, 00:21:56.589 "nvme_io_md": false, 00:21:56.589 "write_zeroes": true, 00:21:56.589 "zcopy": true, 00:21:56.589 "get_zone_info": false, 00:21:56.589 "zone_management": false, 00:21:56.589 "zone_append": false, 00:21:56.589 "compare": false, 00:21:56.589 "compare_and_write": false, 00:21:56.589 "abort": true, 00:21:56.589 "seek_hole": false, 00:21:56.589 "seek_data": false, 00:21:56.589 "copy": true, 00:21:56.589 "nvme_iov_md": false 00:21:56.589 }, 00:21:56.589 "memory_domains": [ 00:21:56.589 { 00:21:56.589 
"dma_device_id": "system", 00:21:56.589 "dma_device_type": 1 00:21:56.589 }, 00:21:56.589 { 00:21:56.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:56.589 "dma_device_type": 2 00:21:56.589 } 00:21:56.589 ], 00:21:56.589 "driver_specific": {} 00:21:56.589 } 00:21:56.589 ] 00:21:56.589 04:17:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:56.589 04:17:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:56.589 04:17:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:56.589 04:17:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:56.856 BaseBdev3 00:21:56.856 04:17:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:21:56.856 04:17:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:21:56.856 04:17:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:56.856 04:17:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:56.856 04:17:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:56.856 04:17:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:56.856 04:17:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:57.115 04:17:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:57.115 [ 00:21:57.115 { 00:21:57.115 "name": "BaseBdev3", 
00:21:57.115 "aliases": [ 00:21:57.115 "8aebd446-8dd6-42dd-97b3-f6c9812012d6" 00:21:57.115 ], 00:21:57.115 "product_name": "Malloc disk", 00:21:57.115 "block_size": 512, 00:21:57.115 "num_blocks": 65536, 00:21:57.115 "uuid": "8aebd446-8dd6-42dd-97b3-f6c9812012d6", 00:21:57.115 "assigned_rate_limits": { 00:21:57.115 "rw_ios_per_sec": 0, 00:21:57.115 "rw_mbytes_per_sec": 0, 00:21:57.115 "r_mbytes_per_sec": 0, 00:21:57.115 "w_mbytes_per_sec": 0 00:21:57.115 }, 00:21:57.115 "claimed": false, 00:21:57.115 "zoned": false, 00:21:57.115 "supported_io_types": { 00:21:57.115 "read": true, 00:21:57.115 "write": true, 00:21:57.115 "unmap": true, 00:21:57.115 "flush": true, 00:21:57.115 "reset": true, 00:21:57.115 "nvme_admin": false, 00:21:57.115 "nvme_io": false, 00:21:57.115 "nvme_io_md": false, 00:21:57.115 "write_zeroes": true, 00:21:57.115 "zcopy": true, 00:21:57.115 "get_zone_info": false, 00:21:57.115 "zone_management": false, 00:21:57.115 "zone_append": false, 00:21:57.115 "compare": false, 00:21:57.115 "compare_and_write": false, 00:21:57.115 "abort": true, 00:21:57.115 "seek_hole": false, 00:21:57.115 "seek_data": false, 00:21:57.115 "copy": true, 00:21:57.115 "nvme_iov_md": false 00:21:57.115 }, 00:21:57.115 "memory_domains": [ 00:21:57.115 { 00:21:57.115 "dma_device_id": "system", 00:21:57.115 "dma_device_type": 1 00:21:57.115 }, 00:21:57.115 { 00:21:57.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:57.115 "dma_device_type": 2 00:21:57.115 } 00:21:57.115 ], 00:21:57.115 "driver_specific": {} 00:21:57.115 } 00:21:57.115 ] 00:21:57.115 04:17:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:57.115 04:17:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:57.115 04:17:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:57.115 04:17:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:57.375 BaseBdev4 00:21:57.375 04:17:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:21:57.375 04:17:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:21:57.375 04:17:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:57.375 04:17:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:57.375 04:17:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:57.375 04:17:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:57.375 04:17:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:57.634 04:17:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:57.893 [ 00:21:57.893 { 00:21:57.893 "name": "BaseBdev4", 00:21:57.893 "aliases": [ 00:21:57.893 "f338a80e-6085-452e-bcb4-9e18ff72daec" 00:21:57.893 ], 00:21:57.893 "product_name": "Malloc disk", 00:21:57.893 "block_size": 512, 00:21:57.893 "num_blocks": 65536, 00:21:57.893 "uuid": "f338a80e-6085-452e-bcb4-9e18ff72daec", 00:21:57.893 "assigned_rate_limits": { 00:21:57.893 "rw_ios_per_sec": 0, 00:21:57.893 "rw_mbytes_per_sec": 0, 00:21:57.893 "r_mbytes_per_sec": 0, 00:21:57.893 "w_mbytes_per_sec": 0 00:21:57.893 }, 00:21:57.893 "claimed": false, 00:21:57.893 "zoned": false, 00:21:57.893 "supported_io_types": { 00:21:57.893 "read": true, 00:21:57.893 "write": true, 00:21:57.893 "unmap": true, 00:21:57.893 "flush": true, 00:21:57.893 
"reset": true, 00:21:57.893 "nvme_admin": false, 00:21:57.893 "nvme_io": false, 00:21:57.893 "nvme_io_md": false, 00:21:57.893 "write_zeroes": true, 00:21:57.893 "zcopy": true, 00:21:57.893 "get_zone_info": false, 00:21:57.893 "zone_management": false, 00:21:57.893 "zone_append": false, 00:21:57.893 "compare": false, 00:21:57.893 "compare_and_write": false, 00:21:57.893 "abort": true, 00:21:57.893 "seek_hole": false, 00:21:57.893 "seek_data": false, 00:21:57.893 "copy": true, 00:21:57.893 "nvme_iov_md": false 00:21:57.893 }, 00:21:57.893 "memory_domains": [ 00:21:57.893 { 00:21:57.893 "dma_device_id": "system", 00:21:57.893 "dma_device_type": 1 00:21:57.893 }, 00:21:57.893 { 00:21:57.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:57.893 "dma_device_type": 2 00:21:57.893 } 00:21:57.893 ], 00:21:57.893 "driver_specific": {} 00:21:57.893 } 00:21:57.893 ] 00:21:57.893 04:17:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:57.893 04:17:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:57.893 04:17:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:57.893 04:17:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:58.152 [2024-07-23 04:17:06.830042] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:58.152 [2024-07-23 04:17:06.830087] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:58.152 [2024-07-23 04:17:06.830119] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:58.152 [2024-07-23 04:17:06.832416] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:58.152 [2024-07-23 
04:17:06.832482] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:58.153 04:17:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:58.153 04:17:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:58.153 04:17:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:58.153 04:17:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:58.153 04:17:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:58.153 04:17:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:58.153 04:17:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:58.153 04:17:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:58.153 04:17:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:58.153 04:17:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:58.153 04:17:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:58.153 04:17:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:58.411 04:17:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:58.411 "name": "Existed_Raid", 00:21:58.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:58.411 "strip_size_kb": 64, 00:21:58.411 "state": "configuring", 00:21:58.411 "raid_level": "raid0", 00:21:58.411 "superblock": false, 00:21:58.411 "num_base_bdevs": 4, 00:21:58.411 
"num_base_bdevs_discovered": 3, 00:21:58.411 "num_base_bdevs_operational": 4, 00:21:58.411 "base_bdevs_list": [ 00:21:58.411 { 00:21:58.411 "name": "BaseBdev1", 00:21:58.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:58.411 "is_configured": false, 00:21:58.411 "data_offset": 0, 00:21:58.411 "data_size": 0 00:21:58.411 }, 00:21:58.411 { 00:21:58.411 "name": "BaseBdev2", 00:21:58.411 "uuid": "45d60ec7-8200-4e7b-aa84-ff99837005ee", 00:21:58.411 "is_configured": true, 00:21:58.411 "data_offset": 0, 00:21:58.411 "data_size": 65536 00:21:58.411 }, 00:21:58.411 { 00:21:58.411 "name": "BaseBdev3", 00:21:58.411 "uuid": "8aebd446-8dd6-42dd-97b3-f6c9812012d6", 00:21:58.411 "is_configured": true, 00:21:58.411 "data_offset": 0, 00:21:58.411 "data_size": 65536 00:21:58.411 }, 00:21:58.411 { 00:21:58.411 "name": "BaseBdev4", 00:21:58.411 "uuid": "f338a80e-6085-452e-bcb4-9e18ff72daec", 00:21:58.411 "is_configured": true, 00:21:58.411 "data_offset": 0, 00:21:58.411 "data_size": 65536 00:21:58.411 } 00:21:58.411 ] 00:21:58.411 }' 00:21:58.411 04:17:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:58.411 04:17:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.979 04:17:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:59.238 [2024-07-23 04:17:07.860792] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:59.238 04:17:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:59.238 04:17:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:59.238 04:17:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:59.238 04:17:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:59.238 04:17:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:59.238 04:17:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:59.238 04:17:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:59.238 04:17:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:59.238 04:17:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:59.238 04:17:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:59.238 04:17:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:59.238 04:17:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:59.497 04:17:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:59.497 "name": "Existed_Raid", 00:21:59.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.497 "strip_size_kb": 64, 00:21:59.497 "state": "configuring", 00:21:59.497 "raid_level": "raid0", 00:21:59.497 "superblock": false, 00:21:59.497 "num_base_bdevs": 4, 00:21:59.497 "num_base_bdevs_discovered": 2, 00:21:59.497 "num_base_bdevs_operational": 4, 00:21:59.497 "base_bdevs_list": [ 00:21:59.497 { 00:21:59.497 "name": "BaseBdev1", 00:21:59.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.497 "is_configured": false, 00:21:59.497 "data_offset": 0, 00:21:59.497 "data_size": 0 00:21:59.497 }, 00:21:59.497 { 00:21:59.497 "name": null, 00:21:59.497 "uuid": "45d60ec7-8200-4e7b-aa84-ff99837005ee", 00:21:59.497 "is_configured": false, 00:21:59.497 "data_offset": 0, 00:21:59.497 
"data_size": 65536 00:21:59.497 }, 00:21:59.497 { 00:21:59.497 "name": "BaseBdev3", 00:21:59.497 "uuid": "8aebd446-8dd6-42dd-97b3-f6c9812012d6", 00:21:59.497 "is_configured": true, 00:21:59.497 "data_offset": 0, 00:21:59.497 "data_size": 65536 00:21:59.497 }, 00:21:59.497 { 00:21:59.497 "name": "BaseBdev4", 00:21:59.497 "uuid": "f338a80e-6085-452e-bcb4-9e18ff72daec", 00:21:59.497 "is_configured": true, 00:21:59.497 "data_offset": 0, 00:21:59.497 "data_size": 65536 00:21:59.497 } 00:21:59.497 ] 00:21:59.497 }' 00:21:59.497 04:17:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:59.497 04:17:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.066 04:17:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:00.066 04:17:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:00.066 04:17:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:22:00.066 04:17:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:00.325 [2024-07-23 04:17:09.103419] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:00.325 BaseBdev1 00:22:00.585 04:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:22:00.585 04:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:22:00.585 04:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:00.585 04:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:22:00.585 04:17:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:00.585 04:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:00.585 04:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:00.585 04:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:00.845 [ 00:22:00.845 { 00:22:00.845 "name": "BaseBdev1", 00:22:00.845 "aliases": [ 00:22:00.845 "f0a451ed-462f-4917-80aa-de7fe68f4de4" 00:22:00.845 ], 00:22:00.845 "product_name": "Malloc disk", 00:22:00.845 "block_size": 512, 00:22:00.845 "num_blocks": 65536, 00:22:00.845 "uuid": "f0a451ed-462f-4917-80aa-de7fe68f4de4", 00:22:00.845 "assigned_rate_limits": { 00:22:00.845 "rw_ios_per_sec": 0, 00:22:00.845 "rw_mbytes_per_sec": 0, 00:22:00.845 "r_mbytes_per_sec": 0, 00:22:00.845 "w_mbytes_per_sec": 0 00:22:00.845 }, 00:22:00.845 "claimed": true, 00:22:00.845 "claim_type": "exclusive_write", 00:22:00.845 "zoned": false, 00:22:00.845 "supported_io_types": { 00:22:00.845 "read": true, 00:22:00.845 "write": true, 00:22:00.845 "unmap": true, 00:22:00.845 "flush": true, 00:22:00.845 "reset": true, 00:22:00.845 "nvme_admin": false, 00:22:00.845 "nvme_io": false, 00:22:00.845 "nvme_io_md": false, 00:22:00.845 "write_zeroes": true, 00:22:00.845 "zcopy": true, 00:22:00.845 "get_zone_info": false, 00:22:00.845 "zone_management": false, 00:22:00.845 "zone_append": false, 00:22:00.845 "compare": false, 00:22:00.845 "compare_and_write": false, 00:22:00.845 "abort": true, 00:22:00.845 "seek_hole": false, 00:22:00.845 "seek_data": false, 00:22:00.845 "copy": true, 00:22:00.845 "nvme_iov_md": false 00:22:00.845 }, 00:22:00.845 "memory_domains": [ 00:22:00.845 { 
00:22:00.845 "dma_device_id": "system", 00:22:00.845 "dma_device_type": 1 00:22:00.845 }, 00:22:00.845 { 00:22:00.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:00.845 "dma_device_type": 2 00:22:00.845 } 00:22:00.845 ], 00:22:00.845 "driver_specific": {} 00:22:00.845 } 00:22:00.845 ] 00:22:00.845 04:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:22:00.845 04:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:00.845 04:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:00.845 04:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:00.845 04:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:00.845 04:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:00.845 04:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:00.845 04:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:00.845 04:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:00.845 04:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:00.845 04:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:00.845 04:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:00.845 04:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:01.105 04:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:22:01.105 "name": "Existed_Raid", 00:22:01.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.105 "strip_size_kb": 64, 00:22:01.105 "state": "configuring", 00:22:01.105 "raid_level": "raid0", 00:22:01.105 "superblock": false, 00:22:01.105 "num_base_bdevs": 4, 00:22:01.105 "num_base_bdevs_discovered": 3, 00:22:01.105 "num_base_bdevs_operational": 4, 00:22:01.105 "base_bdevs_list": [ 00:22:01.105 { 00:22:01.105 "name": "BaseBdev1", 00:22:01.105 "uuid": "f0a451ed-462f-4917-80aa-de7fe68f4de4", 00:22:01.105 "is_configured": true, 00:22:01.105 "data_offset": 0, 00:22:01.105 "data_size": 65536 00:22:01.105 }, 00:22:01.105 { 00:22:01.105 "name": null, 00:22:01.105 "uuid": "45d60ec7-8200-4e7b-aa84-ff99837005ee", 00:22:01.105 "is_configured": false, 00:22:01.105 "data_offset": 0, 00:22:01.105 "data_size": 65536 00:22:01.105 }, 00:22:01.105 { 00:22:01.105 "name": "BaseBdev3", 00:22:01.105 "uuid": "8aebd446-8dd6-42dd-97b3-f6c9812012d6", 00:22:01.105 "is_configured": true, 00:22:01.105 "data_offset": 0, 00:22:01.105 "data_size": 65536 00:22:01.105 }, 00:22:01.105 { 00:22:01.105 "name": "BaseBdev4", 00:22:01.105 "uuid": "f338a80e-6085-452e-bcb4-9e18ff72daec", 00:22:01.105 "is_configured": true, 00:22:01.105 "data_offset": 0, 00:22:01.105 "data_size": 65536 00:22:01.105 } 00:22:01.105 ] 00:22:01.105 }' 00:22:01.105 04:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:01.105 04:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.671 04:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:01.671 04:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:01.930 04:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:22:01.930 04:17:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:22:02.190 [2024-07-23 04:17:10.792071] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:02.190 04:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:02.190 04:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:02.190 04:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:02.190 04:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:02.190 04:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:02.190 04:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:02.190 04:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:02.190 04:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:02.190 04:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:02.190 04:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:02.190 04:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:02.190 04:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:02.447 04:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:02.448 "name": "Existed_Raid", 00:22:02.448 "uuid": "00000000-0000-0000-0000-000000000000", 
00:22:02.448 "strip_size_kb": 64, 00:22:02.448 "state": "configuring", 00:22:02.448 "raid_level": "raid0", 00:22:02.448 "superblock": false, 00:22:02.448 "num_base_bdevs": 4, 00:22:02.448 "num_base_bdevs_discovered": 2, 00:22:02.448 "num_base_bdevs_operational": 4, 00:22:02.448 "base_bdevs_list": [ 00:22:02.448 { 00:22:02.448 "name": "BaseBdev1", 00:22:02.448 "uuid": "f0a451ed-462f-4917-80aa-de7fe68f4de4", 00:22:02.448 "is_configured": true, 00:22:02.448 "data_offset": 0, 00:22:02.448 "data_size": 65536 00:22:02.448 }, 00:22:02.448 { 00:22:02.448 "name": null, 00:22:02.448 "uuid": "45d60ec7-8200-4e7b-aa84-ff99837005ee", 00:22:02.448 "is_configured": false, 00:22:02.448 "data_offset": 0, 00:22:02.448 "data_size": 65536 00:22:02.448 }, 00:22:02.448 { 00:22:02.448 "name": null, 00:22:02.448 "uuid": "8aebd446-8dd6-42dd-97b3-f6c9812012d6", 00:22:02.448 "is_configured": false, 00:22:02.448 "data_offset": 0, 00:22:02.448 "data_size": 65536 00:22:02.448 }, 00:22:02.448 { 00:22:02.448 "name": "BaseBdev4", 00:22:02.448 "uuid": "f338a80e-6085-452e-bcb4-9e18ff72daec", 00:22:02.448 "is_configured": true, 00:22:02.448 "data_offset": 0, 00:22:02.448 "data_size": 65536 00:22:02.448 } 00:22:02.448 ] 00:22:02.448 }' 00:22:02.448 04:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:02.448 04:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.015 04:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:03.015 04:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:03.275 04:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:22:03.275 04:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:03.275 [2024-07-23 04:17:12.035401] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:03.275 04:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:03.275 04:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:03.275 04:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:03.275 04:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:03.275 04:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:03.275 04:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:03.275 04:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:03.275 04:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:03.275 04:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:03.275 04:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:03.533 04:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:03.533 04:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:03.533 04:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:03.533 "name": "Existed_Raid", 00:22:03.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:03.533 "strip_size_kb": 64, 
00:22:03.533 "state": "configuring", 00:22:03.533 "raid_level": "raid0", 00:22:03.533 "superblock": false, 00:22:03.533 "num_base_bdevs": 4, 00:22:03.533 "num_base_bdevs_discovered": 3, 00:22:03.533 "num_base_bdevs_operational": 4, 00:22:03.533 "base_bdevs_list": [ 00:22:03.533 { 00:22:03.533 "name": "BaseBdev1", 00:22:03.533 "uuid": "f0a451ed-462f-4917-80aa-de7fe68f4de4", 00:22:03.533 "is_configured": true, 00:22:03.533 "data_offset": 0, 00:22:03.533 "data_size": 65536 00:22:03.533 }, 00:22:03.533 { 00:22:03.533 "name": null, 00:22:03.533 "uuid": "45d60ec7-8200-4e7b-aa84-ff99837005ee", 00:22:03.533 "is_configured": false, 00:22:03.533 "data_offset": 0, 00:22:03.533 "data_size": 65536 00:22:03.533 }, 00:22:03.533 { 00:22:03.533 "name": "BaseBdev3", 00:22:03.533 "uuid": "8aebd446-8dd6-42dd-97b3-f6c9812012d6", 00:22:03.533 "is_configured": true, 00:22:03.533 "data_offset": 0, 00:22:03.533 "data_size": 65536 00:22:03.533 }, 00:22:03.533 { 00:22:03.533 "name": "BaseBdev4", 00:22:03.533 "uuid": "f338a80e-6085-452e-bcb4-9e18ff72daec", 00:22:03.533 "is_configured": true, 00:22:03.533 "data_offset": 0, 00:22:03.533 "data_size": 65536 00:22:03.533 } 00:22:03.533 ] 00:22:03.533 }' 00:22:03.533 04:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:03.533 04:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:04.101 04:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:04.101 04:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:04.360 04:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:22:04.360 04:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:04.618 [2024-07-23 04:17:13.246709] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:04.618 04:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:04.877 04:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:04.877 04:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:04.877 04:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:04.877 04:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:04.877 04:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:04.877 04:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:04.877 04:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:04.877 04:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:04.877 04:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:04.877 04:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:04.877 04:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:04.877 04:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:04.877 "name": "Existed_Raid", 00:22:04.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:04.877 "strip_size_kb": 64, 00:22:04.877 "state": "configuring", 00:22:04.877 "raid_level": "raid0", 00:22:04.877 "superblock": false, 
00:22:04.877 "num_base_bdevs": 4, 00:22:04.877 "num_base_bdevs_discovered": 2, 00:22:04.877 "num_base_bdevs_operational": 4, 00:22:04.877 "base_bdevs_list": [ 00:22:04.877 { 00:22:04.877 "name": null, 00:22:04.877 "uuid": "f0a451ed-462f-4917-80aa-de7fe68f4de4", 00:22:04.877 "is_configured": false, 00:22:04.877 "data_offset": 0, 00:22:04.877 "data_size": 65536 00:22:04.877 }, 00:22:04.877 { 00:22:04.877 "name": null, 00:22:04.877 "uuid": "45d60ec7-8200-4e7b-aa84-ff99837005ee", 00:22:04.877 "is_configured": false, 00:22:04.877 "data_offset": 0, 00:22:04.877 "data_size": 65536 00:22:04.877 }, 00:22:04.877 { 00:22:04.877 "name": "BaseBdev3", 00:22:04.877 "uuid": "8aebd446-8dd6-42dd-97b3-f6c9812012d6", 00:22:04.877 "is_configured": true, 00:22:04.877 "data_offset": 0, 00:22:04.877 "data_size": 65536 00:22:04.877 }, 00:22:04.877 { 00:22:04.877 "name": "BaseBdev4", 00:22:04.877 "uuid": "f338a80e-6085-452e-bcb4-9e18ff72daec", 00:22:04.877 "is_configured": true, 00:22:04.877 "data_offset": 0, 00:22:04.877 "data_size": 65536 00:22:04.877 } 00:22:04.877 ] 00:22:04.877 }' 00:22:04.877 04:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:04.877 04:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:05.445 04:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:05.445 04:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:05.703 04:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:22:05.703 04:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:05.963 [2024-07-23 04:17:14.540741] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:05.963 04:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:05.963 04:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:05.963 04:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:05.963 04:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:05.963 04:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:05.963 04:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:05.963 04:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:05.963 04:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:05.963 04:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:05.963 04:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:05.963 04:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:05.963 04:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:06.222 04:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:06.222 "name": "Existed_Raid", 00:22:06.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.222 "strip_size_kb": 64, 00:22:06.222 "state": "configuring", 00:22:06.222 "raid_level": "raid0", 00:22:06.222 "superblock": false, 00:22:06.222 "num_base_bdevs": 4, 00:22:06.222 "num_base_bdevs_discovered": 3, 
00:22:06.222 "num_base_bdevs_operational": 4, 00:22:06.222 "base_bdevs_list": [ 00:22:06.222 { 00:22:06.223 "name": null, 00:22:06.223 "uuid": "f0a451ed-462f-4917-80aa-de7fe68f4de4", 00:22:06.223 "is_configured": false, 00:22:06.223 "data_offset": 0, 00:22:06.223 "data_size": 65536 00:22:06.223 }, 00:22:06.223 { 00:22:06.223 "name": "BaseBdev2", 00:22:06.223 "uuid": "45d60ec7-8200-4e7b-aa84-ff99837005ee", 00:22:06.223 "is_configured": true, 00:22:06.223 "data_offset": 0, 00:22:06.223 "data_size": 65536 00:22:06.223 }, 00:22:06.223 { 00:22:06.223 "name": "BaseBdev3", 00:22:06.223 "uuid": "8aebd446-8dd6-42dd-97b3-f6c9812012d6", 00:22:06.223 "is_configured": true, 00:22:06.223 "data_offset": 0, 00:22:06.223 "data_size": 65536 00:22:06.223 }, 00:22:06.223 { 00:22:06.223 "name": "BaseBdev4", 00:22:06.223 "uuid": "f338a80e-6085-452e-bcb4-9e18ff72daec", 00:22:06.223 "is_configured": true, 00:22:06.223 "data_offset": 0, 00:22:06.223 "data_size": 65536 00:22:06.223 } 00:22:06.223 ] 00:22:06.223 }' 00:22:06.223 04:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:06.223 04:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:06.791 04:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:06.791 04:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:07.049 04:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:22:07.049 04:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:07.049 04:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:07.049 04:17:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u f0a451ed-462f-4917-80aa-de7fe68f4de4 00:22:07.308 [2024-07-23 04:17:16.032666] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:07.308 [2024-07-23 04:17:16.032714] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042080 00:22:07.308 [2024-07-23 04:17:16.032725] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:22:07.308 [2024-07-23 04:17:16.033048] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010b20 00:22:07.308 [2024-07-23 04:17:16.033283] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042080 00:22:07.308 [2024-07-23 04:17:16.033302] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000042080 00:22:07.308 [2024-07-23 04:17:16.033590] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:07.308 NewBaseBdev 00:22:07.308 04:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:22:07.308 04:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:22:07.308 04:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:07.308 04:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:22:07.308 04:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:07.308 04:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:07.308 04:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:07.567 04:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:07.825 [ 00:22:07.825 { 00:22:07.825 "name": "NewBaseBdev", 00:22:07.825 "aliases": [ 00:22:07.825 "f0a451ed-462f-4917-80aa-de7fe68f4de4" 00:22:07.825 ], 00:22:07.825 "product_name": "Malloc disk", 00:22:07.825 "block_size": 512, 00:22:07.825 "num_blocks": 65536, 00:22:07.825 "uuid": "f0a451ed-462f-4917-80aa-de7fe68f4de4", 00:22:07.825 "assigned_rate_limits": { 00:22:07.825 "rw_ios_per_sec": 0, 00:22:07.825 "rw_mbytes_per_sec": 0, 00:22:07.825 "r_mbytes_per_sec": 0, 00:22:07.825 "w_mbytes_per_sec": 0 00:22:07.825 }, 00:22:07.825 "claimed": true, 00:22:07.825 "claim_type": "exclusive_write", 00:22:07.825 "zoned": false, 00:22:07.825 "supported_io_types": { 00:22:07.825 "read": true, 00:22:07.825 "write": true, 00:22:07.825 "unmap": true, 00:22:07.825 "flush": true, 00:22:07.825 "reset": true, 00:22:07.825 "nvme_admin": false, 00:22:07.825 "nvme_io": false, 00:22:07.825 "nvme_io_md": false, 00:22:07.825 "write_zeroes": true, 00:22:07.825 "zcopy": true, 00:22:07.825 "get_zone_info": false, 00:22:07.825 "zone_management": false, 00:22:07.825 "zone_append": false, 00:22:07.825 "compare": false, 00:22:07.826 "compare_and_write": false, 00:22:07.826 "abort": true, 00:22:07.826 "seek_hole": false, 00:22:07.826 "seek_data": false, 00:22:07.826 "copy": true, 00:22:07.826 "nvme_iov_md": false 00:22:07.826 }, 00:22:07.826 "memory_domains": [ 00:22:07.826 { 00:22:07.826 "dma_device_id": "system", 00:22:07.826 "dma_device_type": 1 00:22:07.826 }, 00:22:07.826 { 00:22:07.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:07.826 "dma_device_type": 2 00:22:07.826 } 00:22:07.826 ], 00:22:07.826 "driver_specific": {} 00:22:07.826 } 00:22:07.826 ] 00:22:07.826 04:17:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@905 -- # return 0 00:22:07.826 04:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:22:07.826 04:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:07.826 04:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:07.826 04:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:07.826 04:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:07.826 04:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:07.826 04:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:07.826 04:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:07.826 04:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:07.826 04:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:07.826 04:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:07.826 04:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:08.091 04:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:08.091 "name": "Existed_Raid", 00:22:08.091 "uuid": "2a1644a6-5bd6-4b81-b9ae-a6104bd4c101", 00:22:08.091 "strip_size_kb": 64, 00:22:08.091 "state": "online", 00:22:08.091 "raid_level": "raid0", 00:22:08.091 "superblock": false, 00:22:08.091 "num_base_bdevs": 4, 00:22:08.091 "num_base_bdevs_discovered": 4, 00:22:08.091 "num_base_bdevs_operational": 4, 
00:22:08.091 "base_bdevs_list": [ 00:22:08.091 { 00:22:08.091 "name": "NewBaseBdev", 00:22:08.091 "uuid": "f0a451ed-462f-4917-80aa-de7fe68f4de4", 00:22:08.091 "is_configured": true, 00:22:08.091 "data_offset": 0, 00:22:08.091 "data_size": 65536 00:22:08.091 }, 00:22:08.091 { 00:22:08.091 "name": "BaseBdev2", 00:22:08.091 "uuid": "45d60ec7-8200-4e7b-aa84-ff99837005ee", 00:22:08.091 "is_configured": true, 00:22:08.091 "data_offset": 0, 00:22:08.091 "data_size": 65536 00:22:08.091 }, 00:22:08.091 { 00:22:08.091 "name": "BaseBdev3", 00:22:08.091 "uuid": "8aebd446-8dd6-42dd-97b3-f6c9812012d6", 00:22:08.091 "is_configured": true, 00:22:08.091 "data_offset": 0, 00:22:08.091 "data_size": 65536 00:22:08.091 }, 00:22:08.091 { 00:22:08.091 "name": "BaseBdev4", 00:22:08.091 "uuid": "f338a80e-6085-452e-bcb4-9e18ff72daec", 00:22:08.091 "is_configured": true, 00:22:08.091 "data_offset": 0, 00:22:08.091 "data_size": 65536 00:22:08.091 } 00:22:08.091 ] 00:22:08.091 }' 00:22:08.091 04:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:08.091 04:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:08.673 04:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:22:08.673 04:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:08.673 04:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:08.673 04:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:08.673 04:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:08.673 04:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:08.673 04:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:08.673 04:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:08.931 [2024-07-23 04:17:17.501116] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:08.931 04:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:08.931 "name": "Existed_Raid", 00:22:08.931 "aliases": [ 00:22:08.931 "2a1644a6-5bd6-4b81-b9ae-a6104bd4c101" 00:22:08.931 ], 00:22:08.931 "product_name": "Raid Volume", 00:22:08.931 "block_size": 512, 00:22:08.931 "num_blocks": 262144, 00:22:08.931 "uuid": "2a1644a6-5bd6-4b81-b9ae-a6104bd4c101", 00:22:08.931 "assigned_rate_limits": { 00:22:08.931 "rw_ios_per_sec": 0, 00:22:08.931 "rw_mbytes_per_sec": 0, 00:22:08.931 "r_mbytes_per_sec": 0, 00:22:08.931 "w_mbytes_per_sec": 0 00:22:08.931 }, 00:22:08.932 "claimed": false, 00:22:08.932 "zoned": false, 00:22:08.932 "supported_io_types": { 00:22:08.932 "read": true, 00:22:08.932 "write": true, 00:22:08.932 "unmap": true, 00:22:08.932 "flush": true, 00:22:08.932 "reset": true, 00:22:08.932 "nvme_admin": false, 00:22:08.932 "nvme_io": false, 00:22:08.932 "nvme_io_md": false, 00:22:08.932 "write_zeroes": true, 00:22:08.932 "zcopy": false, 00:22:08.932 "get_zone_info": false, 00:22:08.932 "zone_management": false, 00:22:08.932 "zone_append": false, 00:22:08.932 "compare": false, 00:22:08.932 "compare_and_write": false, 00:22:08.932 "abort": false, 00:22:08.932 "seek_hole": false, 00:22:08.932 "seek_data": false, 00:22:08.932 "copy": false, 00:22:08.932 "nvme_iov_md": false 00:22:08.932 }, 00:22:08.932 "memory_domains": [ 00:22:08.932 { 00:22:08.932 "dma_device_id": "system", 00:22:08.932 "dma_device_type": 1 00:22:08.932 }, 00:22:08.932 { 00:22:08.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:08.932 "dma_device_type": 2 00:22:08.932 }, 00:22:08.932 { 00:22:08.932 "dma_device_id": "system", 00:22:08.932 "dma_device_type": 1 00:22:08.932 }, 
00:22:08.932 { 00:22:08.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:08.932 "dma_device_type": 2 00:22:08.932 }, 00:22:08.932 { 00:22:08.932 "dma_device_id": "system", 00:22:08.932 "dma_device_type": 1 00:22:08.932 }, 00:22:08.932 { 00:22:08.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:08.932 "dma_device_type": 2 00:22:08.932 }, 00:22:08.932 { 00:22:08.932 "dma_device_id": "system", 00:22:08.932 "dma_device_type": 1 00:22:08.932 }, 00:22:08.932 { 00:22:08.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:08.932 "dma_device_type": 2 00:22:08.932 } 00:22:08.932 ], 00:22:08.932 "driver_specific": { 00:22:08.932 "raid": { 00:22:08.932 "uuid": "2a1644a6-5bd6-4b81-b9ae-a6104bd4c101", 00:22:08.932 "strip_size_kb": 64, 00:22:08.932 "state": "online", 00:22:08.932 "raid_level": "raid0", 00:22:08.932 "superblock": false, 00:22:08.932 "num_base_bdevs": 4, 00:22:08.932 "num_base_bdevs_discovered": 4, 00:22:08.932 "num_base_bdevs_operational": 4, 00:22:08.932 "base_bdevs_list": [ 00:22:08.932 { 00:22:08.932 "name": "NewBaseBdev", 00:22:08.932 "uuid": "f0a451ed-462f-4917-80aa-de7fe68f4de4", 00:22:08.932 "is_configured": true, 00:22:08.932 "data_offset": 0, 00:22:08.932 "data_size": 65536 00:22:08.932 }, 00:22:08.932 { 00:22:08.932 "name": "BaseBdev2", 00:22:08.932 "uuid": "45d60ec7-8200-4e7b-aa84-ff99837005ee", 00:22:08.932 "is_configured": true, 00:22:08.932 "data_offset": 0, 00:22:08.932 "data_size": 65536 00:22:08.932 }, 00:22:08.932 { 00:22:08.932 "name": "BaseBdev3", 00:22:08.932 "uuid": "8aebd446-8dd6-42dd-97b3-f6c9812012d6", 00:22:08.932 "is_configured": true, 00:22:08.932 "data_offset": 0, 00:22:08.932 "data_size": 65536 00:22:08.932 }, 00:22:08.932 { 00:22:08.932 "name": "BaseBdev4", 00:22:08.932 "uuid": "f338a80e-6085-452e-bcb4-9e18ff72daec", 00:22:08.932 "is_configured": true, 00:22:08.932 "data_offset": 0, 00:22:08.932 "data_size": 65536 00:22:08.932 } 00:22:08.932 ] 00:22:08.932 } 00:22:08.932 } 00:22:08.932 }' 00:22:08.932 04:17:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:08.932 04:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:22:08.932 BaseBdev2 00:22:08.932 BaseBdev3 00:22:08.932 BaseBdev4' 00:22:08.932 04:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:08.932 04:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:22:08.932 04:17:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:09.499 04:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:09.499 "name": "NewBaseBdev", 00:22:09.499 "aliases": [ 00:22:09.499 "f0a451ed-462f-4917-80aa-de7fe68f4de4" 00:22:09.499 ], 00:22:09.499 "product_name": "Malloc disk", 00:22:09.499 "block_size": 512, 00:22:09.499 "num_blocks": 65536, 00:22:09.499 "uuid": "f0a451ed-462f-4917-80aa-de7fe68f4de4", 00:22:09.499 "assigned_rate_limits": { 00:22:09.499 "rw_ios_per_sec": 0, 00:22:09.499 "rw_mbytes_per_sec": 0, 00:22:09.499 "r_mbytes_per_sec": 0, 00:22:09.499 "w_mbytes_per_sec": 0 00:22:09.499 }, 00:22:09.499 "claimed": true, 00:22:09.499 "claim_type": "exclusive_write", 00:22:09.499 "zoned": false, 00:22:09.499 "supported_io_types": { 00:22:09.499 "read": true, 00:22:09.499 "write": true, 00:22:09.499 "unmap": true, 00:22:09.499 "flush": true, 00:22:09.499 "reset": true, 00:22:09.499 "nvme_admin": false, 00:22:09.499 "nvme_io": false, 00:22:09.499 "nvme_io_md": false, 00:22:09.499 "write_zeroes": true, 00:22:09.499 "zcopy": true, 00:22:09.499 "get_zone_info": false, 00:22:09.499 "zone_management": false, 00:22:09.499 "zone_append": false, 00:22:09.499 "compare": false, 00:22:09.499 "compare_and_write": false, 00:22:09.499 
"abort": true, 00:22:09.499 "seek_hole": false, 00:22:09.499 "seek_data": false, 00:22:09.499 "copy": true, 00:22:09.499 "nvme_iov_md": false 00:22:09.499 }, 00:22:09.499 "memory_domains": [ 00:22:09.499 { 00:22:09.499 "dma_device_id": "system", 00:22:09.499 "dma_device_type": 1 00:22:09.499 }, 00:22:09.499 { 00:22:09.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:09.499 "dma_device_type": 2 00:22:09.499 } 00:22:09.499 ], 00:22:09.499 "driver_specific": {} 00:22:09.499 }' 00:22:09.499 04:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:09.499 04:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:09.499 04:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:09.499 04:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:09.499 04:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:09.499 04:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:09.499 04:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:09.499 04:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:09.499 04:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:09.499 04:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:09.758 04:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:09.758 04:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:09.758 04:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:09.758 04:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:09.758 04:17:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:10.016 04:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:10.016 "name": "BaseBdev2", 00:22:10.016 "aliases": [ 00:22:10.016 "45d60ec7-8200-4e7b-aa84-ff99837005ee" 00:22:10.016 ], 00:22:10.016 "product_name": "Malloc disk", 00:22:10.016 "block_size": 512, 00:22:10.016 "num_blocks": 65536, 00:22:10.016 "uuid": "45d60ec7-8200-4e7b-aa84-ff99837005ee", 00:22:10.016 "assigned_rate_limits": { 00:22:10.016 "rw_ios_per_sec": 0, 00:22:10.016 "rw_mbytes_per_sec": 0, 00:22:10.016 "r_mbytes_per_sec": 0, 00:22:10.016 "w_mbytes_per_sec": 0 00:22:10.016 }, 00:22:10.016 "claimed": true, 00:22:10.016 "claim_type": "exclusive_write", 00:22:10.016 "zoned": false, 00:22:10.016 "supported_io_types": { 00:22:10.016 "read": true, 00:22:10.016 "write": true, 00:22:10.016 "unmap": true, 00:22:10.016 "flush": true, 00:22:10.016 "reset": true, 00:22:10.016 "nvme_admin": false, 00:22:10.016 "nvme_io": false, 00:22:10.016 "nvme_io_md": false, 00:22:10.016 "write_zeroes": true, 00:22:10.016 "zcopy": true, 00:22:10.016 "get_zone_info": false, 00:22:10.016 "zone_management": false, 00:22:10.016 "zone_append": false, 00:22:10.016 "compare": false, 00:22:10.016 "compare_and_write": false, 00:22:10.016 "abort": true, 00:22:10.016 "seek_hole": false, 00:22:10.016 "seek_data": false, 00:22:10.016 "copy": true, 00:22:10.016 "nvme_iov_md": false 00:22:10.016 }, 00:22:10.016 "memory_domains": [ 00:22:10.016 { 00:22:10.016 "dma_device_id": "system", 00:22:10.016 "dma_device_type": 1 00:22:10.016 }, 00:22:10.016 { 00:22:10.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:10.016 "dma_device_type": 2 00:22:10.016 } 00:22:10.016 ], 00:22:10.016 "driver_specific": {} 00:22:10.016 }' 00:22:10.016 04:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:10.016 04:17:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:10.016 04:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:10.016 04:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:10.016 04:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:10.016 04:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:10.016 04:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:10.274 04:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:10.274 04:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:10.274 04:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:10.274 04:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:10.274 04:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:10.274 04:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:10.274 04:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:10.274 04:17:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:10.533 04:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:10.533 "name": "BaseBdev3", 00:22:10.533 "aliases": [ 00:22:10.533 "8aebd446-8dd6-42dd-97b3-f6c9812012d6" 00:22:10.533 ], 00:22:10.533 "product_name": "Malloc disk", 00:22:10.533 "block_size": 512, 00:22:10.533 "num_blocks": 65536, 00:22:10.533 "uuid": "8aebd446-8dd6-42dd-97b3-f6c9812012d6", 00:22:10.533 "assigned_rate_limits": { 00:22:10.533 
"rw_ios_per_sec": 0, 00:22:10.533 "rw_mbytes_per_sec": 0, 00:22:10.533 "r_mbytes_per_sec": 0, 00:22:10.533 "w_mbytes_per_sec": 0 00:22:10.533 }, 00:22:10.533 "claimed": true, 00:22:10.533 "claim_type": "exclusive_write", 00:22:10.533 "zoned": false, 00:22:10.533 "supported_io_types": { 00:22:10.533 "read": true, 00:22:10.533 "write": true, 00:22:10.533 "unmap": true, 00:22:10.533 "flush": true, 00:22:10.533 "reset": true, 00:22:10.533 "nvme_admin": false, 00:22:10.533 "nvme_io": false, 00:22:10.533 "nvme_io_md": false, 00:22:10.533 "write_zeroes": true, 00:22:10.533 "zcopy": true, 00:22:10.533 "get_zone_info": false, 00:22:10.533 "zone_management": false, 00:22:10.533 "zone_append": false, 00:22:10.533 "compare": false, 00:22:10.533 "compare_and_write": false, 00:22:10.533 "abort": true, 00:22:10.533 "seek_hole": false, 00:22:10.533 "seek_data": false, 00:22:10.533 "copy": true, 00:22:10.533 "nvme_iov_md": false 00:22:10.533 }, 00:22:10.533 "memory_domains": [ 00:22:10.533 { 00:22:10.533 "dma_device_id": "system", 00:22:10.533 "dma_device_type": 1 00:22:10.533 }, 00:22:10.533 { 00:22:10.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:10.533 "dma_device_type": 2 00:22:10.533 } 00:22:10.533 ], 00:22:10.533 "driver_specific": {} 00:22:10.533 }' 00:22:10.533 04:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:10.533 04:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:10.533 04:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:10.533 04:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:10.533 04:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:10.791 04:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:10.791 04:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:10.791 
04:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:10.791 04:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:10.791 04:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:10.791 04:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:10.791 04:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:10.791 04:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:10.791 04:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:22:10.791 04:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:11.049 04:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:11.049 "name": "BaseBdev4", 00:22:11.049 "aliases": [ 00:22:11.049 "f338a80e-6085-452e-bcb4-9e18ff72daec" 00:22:11.049 ], 00:22:11.049 "product_name": "Malloc disk", 00:22:11.049 "block_size": 512, 00:22:11.049 "num_blocks": 65536, 00:22:11.049 "uuid": "f338a80e-6085-452e-bcb4-9e18ff72daec", 00:22:11.049 "assigned_rate_limits": { 00:22:11.049 "rw_ios_per_sec": 0, 00:22:11.049 "rw_mbytes_per_sec": 0, 00:22:11.049 "r_mbytes_per_sec": 0, 00:22:11.049 "w_mbytes_per_sec": 0 00:22:11.049 }, 00:22:11.049 "claimed": true, 00:22:11.049 "claim_type": "exclusive_write", 00:22:11.049 "zoned": false, 00:22:11.049 "supported_io_types": { 00:22:11.049 "read": true, 00:22:11.049 "write": true, 00:22:11.049 "unmap": true, 00:22:11.049 "flush": true, 00:22:11.049 "reset": true, 00:22:11.049 "nvme_admin": false, 00:22:11.049 "nvme_io": false, 00:22:11.049 "nvme_io_md": false, 00:22:11.049 "write_zeroes": true, 00:22:11.049 "zcopy": true, 00:22:11.049 "get_zone_info": 
false, 00:22:11.049 "zone_management": false, 00:22:11.049 "zone_append": false, 00:22:11.049 "compare": false, 00:22:11.049 "compare_and_write": false, 00:22:11.049 "abort": true, 00:22:11.049 "seek_hole": false, 00:22:11.049 "seek_data": false, 00:22:11.049 "copy": true, 00:22:11.049 "nvme_iov_md": false 00:22:11.049 }, 00:22:11.049 "memory_domains": [ 00:22:11.049 { 00:22:11.049 "dma_device_id": "system", 00:22:11.049 "dma_device_type": 1 00:22:11.049 }, 00:22:11.049 { 00:22:11.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:11.049 "dma_device_type": 2 00:22:11.049 } 00:22:11.049 ], 00:22:11.049 "driver_specific": {} 00:22:11.049 }' 00:22:11.049 04:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:11.049 04:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:11.049 04:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:11.049 04:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:11.049 04:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:11.049 04:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:11.049 04:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:11.307 04:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:11.307 04:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:11.307 04:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:11.308 04:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:11.308 04:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:11.308 04:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:11.567 [2024-07-23 04:17:20.175998] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:11.567 [2024-07-23 04:17:20.176036] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:11.567 [2024-07-23 04:17:20.176122] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:11.567 [2024-07-23 04:17:20.176235] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:11.567 [2024-07-23 04:17:20.176253] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042080 name Existed_Raid, state offline 00:22:11.567 04:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 2698532 00:22:11.567 04:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 2698532 ']' 00:22:11.567 04:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 2698532 00:22:11.567 04:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:22:11.567 04:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:11.567 04:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2698532 00:22:11.567 04:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:11.567 04:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:11.567 04:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2698532' 00:22:11.567 killing process with pid 2698532 00:22:11.567 04:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 
2698532 00:22:11.567 [2024-07-23 04:17:20.248925] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:11.567 04:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 2698532 00:22:12.134 [2024-07-23 04:17:20.707038] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:14.042 04:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:22:14.042 00:22:14.042 real 0m33.320s 00:22:14.042 user 0m58.464s 00:22:14.042 sys 0m5.584s 00:22:14.042 04:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:14.042 04:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.042 ************************************ 00:22:14.042 END TEST raid_state_function_test 00:22:14.042 ************************************ 00:22:14.042 04:17:22 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:22:14.042 04:17:22 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:22:14.042 04:17:22 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:22:14.042 04:17:22 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:14.042 04:17:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:14.042 ************************************ 00:22:14.042 START TEST raid_state_function_test_sb 00:22:14.042 ************************************ 00:22:14.042 04:17:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 4 true 00:22:14.042 04:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:22:14.042 04:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:22:14.042 04:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:22:14.042 04:17:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:22:14.042 04:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:22:14.042 04:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:14.042 04:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:22:14.042 04:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:14.042 04:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:14.042 04:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:22:14.042 04:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:14.042 04:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:14.042 04:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:22:14.042 04:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:14.042 04:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:14.042 04:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:22:14.042 04:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:14.042 04:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:14.042 04:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:14.042 04:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:22:14.042 04:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:22:14.042 04:17:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:22:14.042 04:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:22:14.042 04:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:22:14.042 04:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:22:14.042 04:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:22:14.042 04:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:22:14.042 04:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:22:14.042 04:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:22:14.042 04:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:14.042 04:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2704747 00:22:14.042 04:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2704747' 00:22:14.042 Process raid pid: 2704747 00:22:14.042 04:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2704747 /var/tmp/spdk-raid.sock 00:22:14.042 04:17:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 2704747 ']' 00:22:14.042 04:17:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:14.042 04:17:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:14.042 04:17:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:14.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:14.042 04:17:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:14.042 04:17:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:14.042 [2024-07-23 04:17:22.571179] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:22:14.042 [2024-07-23 04:17:22.571292] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:14.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:14.042 EAL: Requested device 0000:3d:01.0 cannot be used 00:22:14.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:14.042 EAL: Requested device 0000:3d:01.1 cannot be used 00:22:14.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:14.042 EAL: Requested device 0000:3d:01.2 cannot be used 00:22:14.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:14.042 EAL: Requested device 0000:3d:01.3 cannot be used 00:22:14.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:14.042 EAL: Requested device 0000:3d:01.4 cannot be used 00:22:14.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:14.042 EAL: Requested device 0000:3d:01.5 cannot be used 00:22:14.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:14.042 EAL: Requested device 0000:3d:01.6 cannot be used 00:22:14.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:14.042 EAL: Requested device 0000:3d:01.7 cannot be used 00:22:14.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:22:14.042 EAL: Requested device 0000:3d:02.0 cannot be used 00:22:14.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:14.042 EAL: Requested device 0000:3d:02.1 cannot be used 00:22:14.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:14.042 EAL: Requested device 0000:3d:02.2 cannot be used 00:22:14.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:14.042 EAL: Requested device 0000:3d:02.3 cannot be used 00:22:14.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:14.042 EAL: Requested device 0000:3d:02.4 cannot be used 00:22:14.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:14.042 EAL: Requested device 0000:3d:02.5 cannot be used 00:22:14.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:14.042 EAL: Requested device 0000:3d:02.6 cannot be used 00:22:14.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:14.042 EAL: Requested device 0000:3d:02.7 cannot be used 00:22:14.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:14.042 EAL: Requested device 0000:3f:01.0 cannot be used 00:22:14.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:14.042 EAL: Requested device 0000:3f:01.1 cannot be used 00:22:14.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:14.043 EAL: Requested device 0000:3f:01.2 cannot be used 00:22:14.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:14.043 EAL: Requested device 0000:3f:01.3 cannot be used 00:22:14.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:14.043 EAL: Requested device 0000:3f:01.4 cannot be used 00:22:14.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:14.043 EAL: Requested device 0000:3f:01.5 cannot be used 00:22:14.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:14.043 EAL: 
Requested device 0000:3f:01.6 cannot be used 00:22:14.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:14.043 EAL: Requested device 0000:3f:01.7 cannot be used 00:22:14.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:14.043 EAL: Requested device 0000:3f:02.0 cannot be used 00:22:14.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:14.043 EAL: Requested device 0000:3f:02.1 cannot be used 00:22:14.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:14.043 EAL: Requested device 0000:3f:02.2 cannot be used 00:22:14.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:14.043 EAL: Requested device 0000:3f:02.3 cannot be used 00:22:14.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:14.043 EAL: Requested device 0000:3f:02.4 cannot be used 00:22:14.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:14.043 EAL: Requested device 0000:3f:02.5 cannot be used 00:22:14.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:14.043 EAL: Requested device 0000:3f:02.6 cannot be used 00:22:14.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:14.043 EAL: Requested device 0000:3f:02.7 cannot be used 00:22:14.043 [2024-07-23 04:17:22.795432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.614 [2024-07-23 04:17:23.089410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:14.874 [2024-07-23 04:17:23.437507] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:14.874 [2024-07-23 04:17:23.437542] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:14.874 04:17:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:14.874 04:17:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:22:14.874 04:17:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:15.133 [2024-07-23 04:17:23.827501] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:15.133 [2024-07-23 04:17:23.827556] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:15.133 [2024-07-23 04:17:23.827571] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:15.133 [2024-07-23 04:17:23.827587] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:15.133 [2024-07-23 04:17:23.827599] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:15.133 [2024-07-23 04:17:23.827614] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:15.133 [2024-07-23 04:17:23.827626] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:15.133 [2024-07-23 04:17:23.827644] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:15.133 04:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:15.133 04:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:15.133 04:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:15.133 04:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:15.133 04:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:15.133 04:17:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:15.133 04:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:15.133 04:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:15.133 04:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:15.133 04:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:15.133 04:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:15.133 04:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:15.392 04:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:15.392 "name": "Existed_Raid", 00:22:15.392 "uuid": "67a65e23-4d1b-498c-b3c8-a4cc3db03572", 00:22:15.392 "strip_size_kb": 64, 00:22:15.392 "state": "configuring", 00:22:15.392 "raid_level": "raid0", 00:22:15.392 "superblock": true, 00:22:15.392 "num_base_bdevs": 4, 00:22:15.392 "num_base_bdevs_discovered": 0, 00:22:15.392 "num_base_bdevs_operational": 4, 00:22:15.392 "base_bdevs_list": [ 00:22:15.392 { 00:22:15.392 "name": "BaseBdev1", 00:22:15.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:15.392 "is_configured": false, 00:22:15.392 "data_offset": 0, 00:22:15.392 "data_size": 0 00:22:15.392 }, 00:22:15.392 { 00:22:15.392 "name": "BaseBdev2", 00:22:15.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:15.392 "is_configured": false, 00:22:15.392 "data_offset": 0, 00:22:15.392 "data_size": 0 00:22:15.392 }, 00:22:15.392 { 00:22:15.392 "name": "BaseBdev3", 00:22:15.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:15.392 "is_configured": false, 00:22:15.392 "data_offset": 0, 00:22:15.392 "data_size": 
0 00:22:15.392 }, 00:22:15.392 { 00:22:15.392 "name": "BaseBdev4", 00:22:15.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:15.392 "is_configured": false, 00:22:15.392 "data_offset": 0, 00:22:15.392 "data_size": 0 00:22:15.392 } 00:22:15.392 ] 00:22:15.392 }' 00:22:15.392 04:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:15.392 04:17:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:16.330 04:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:16.330 [2024-07-23 04:17:25.082678] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:16.330 [2024-07-23 04:17:25.082719] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:22:16.330 04:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:16.589 [2024-07-23 04:17:25.295329] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:16.589 [2024-07-23 04:17:25.295377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:16.589 [2024-07-23 04:17:25.295391] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:16.589 [2024-07-23 04:17:25.295415] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:16.589 [2024-07-23 04:17:25.295426] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:16.589 [2024-07-23 04:17:25.295442] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev3 doesn't exist now 00:22:16.589 [2024-07-23 04:17:25.295453] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:16.589 [2024-07-23 04:17:25.295468] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:16.589 04:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:16.848 [2024-07-23 04:17:25.509365] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:16.848 BaseBdev1 00:22:16.848 04:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:22:16.848 04:17:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:22:16.849 04:17:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:16.849 04:17:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:16.849 04:17:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:16.849 04:17:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:16.849 04:17:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:17.108 04:17:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:17.367 [ 00:22:17.367 { 00:22:17.367 "name": "BaseBdev1", 00:22:17.367 "aliases": [ 00:22:17.367 "d9df8d9e-eefe-41d6-bc66-a7e5ccbb1d8e" 00:22:17.367 ], 00:22:17.367 "product_name": "Malloc disk", 00:22:17.367 
"block_size": 512, 00:22:17.367 "num_blocks": 65536, 00:22:17.367 "uuid": "d9df8d9e-eefe-41d6-bc66-a7e5ccbb1d8e", 00:22:17.367 "assigned_rate_limits": { 00:22:17.367 "rw_ios_per_sec": 0, 00:22:17.367 "rw_mbytes_per_sec": 0, 00:22:17.367 "r_mbytes_per_sec": 0, 00:22:17.367 "w_mbytes_per_sec": 0 00:22:17.367 }, 00:22:17.367 "claimed": true, 00:22:17.367 "claim_type": "exclusive_write", 00:22:17.367 "zoned": false, 00:22:17.367 "supported_io_types": { 00:22:17.367 "read": true, 00:22:17.367 "write": true, 00:22:17.367 "unmap": true, 00:22:17.367 "flush": true, 00:22:17.367 "reset": true, 00:22:17.367 "nvme_admin": false, 00:22:17.367 "nvme_io": false, 00:22:17.367 "nvme_io_md": false, 00:22:17.367 "write_zeroes": true, 00:22:17.367 "zcopy": true, 00:22:17.367 "get_zone_info": false, 00:22:17.367 "zone_management": false, 00:22:17.367 "zone_append": false, 00:22:17.367 "compare": false, 00:22:17.367 "compare_and_write": false, 00:22:17.367 "abort": true, 00:22:17.367 "seek_hole": false, 00:22:17.367 "seek_data": false, 00:22:17.367 "copy": true, 00:22:17.367 "nvme_iov_md": false 00:22:17.367 }, 00:22:17.367 "memory_domains": [ 00:22:17.367 { 00:22:17.367 "dma_device_id": "system", 00:22:17.367 "dma_device_type": 1 00:22:17.367 }, 00:22:17.368 { 00:22:17.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:17.368 "dma_device_type": 2 00:22:17.368 } 00:22:17.368 ], 00:22:17.368 "driver_specific": {} 00:22:17.368 } 00:22:17.368 ] 00:22:17.368 04:17:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:17.368 04:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:17.368 04:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:17.368 04:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:17.368 04:17:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:17.368 04:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:17.368 04:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:17.368 04:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:17.368 04:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:17.368 04:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:17.368 04:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:17.368 04:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:17.368 04:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:17.627 04:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:17.627 "name": "Existed_Raid", 00:22:17.627 "uuid": "62f0ad2b-3c13-43e6-a224-e25cab731578", 00:22:17.627 "strip_size_kb": 64, 00:22:17.627 "state": "configuring", 00:22:17.627 "raid_level": "raid0", 00:22:17.627 "superblock": true, 00:22:17.627 "num_base_bdevs": 4, 00:22:17.627 "num_base_bdevs_discovered": 1, 00:22:17.627 "num_base_bdevs_operational": 4, 00:22:17.627 "base_bdevs_list": [ 00:22:17.627 { 00:22:17.627 "name": "BaseBdev1", 00:22:17.627 "uuid": "d9df8d9e-eefe-41d6-bc66-a7e5ccbb1d8e", 00:22:17.627 "is_configured": true, 00:22:17.627 "data_offset": 2048, 00:22:17.627 "data_size": 63488 00:22:17.627 }, 00:22:17.627 { 00:22:17.627 "name": "BaseBdev2", 00:22:17.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.627 "is_configured": false, 
00:22:17.627 "data_offset": 0, 00:22:17.627 "data_size": 0 00:22:17.627 }, 00:22:17.627 { 00:22:17.627 "name": "BaseBdev3", 00:22:17.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.627 "is_configured": false, 00:22:17.627 "data_offset": 0, 00:22:17.627 "data_size": 0 00:22:17.627 }, 00:22:17.627 { 00:22:17.627 "name": "BaseBdev4", 00:22:17.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.627 "is_configured": false, 00:22:17.627 "data_offset": 0, 00:22:17.627 "data_size": 0 00:22:17.627 } 00:22:17.627 ] 00:22:17.627 }' 00:22:17.627 04:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:17.627 04:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:18.195 04:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:18.195 [2024-07-23 04:17:26.969346] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:18.195 [2024-07-23 04:17:26.969398] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:22:18.454 04:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:18.454 [2024-07-23 04:17:27.198065] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:18.454 [2024-07-23 04:17:27.200403] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:18.454 [2024-07-23 04:17:27.200445] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:18.454 [2024-07-23 04:17:27.200459] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:22:18.454 [2024-07-23 04:17:27.200476] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:18.454 [2024-07-23 04:17:27.200488] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:18.454 [2024-07-23 04:17:27.200506] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:18.454 04:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:22:18.454 04:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:18.454 04:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:18.454 04:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:18.454 04:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:18.454 04:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:18.454 04:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:18.454 04:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:18.454 04:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:18.454 04:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:18.454 04:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:18.454 04:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:18.454 04:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:18.454 04:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:18.714 04:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:18.714 "name": "Existed_Raid", 00:22:18.714 "uuid": "1ce2f745-5d02-4ee1-a2b0-59d1c6dfe2a4", 00:22:18.714 "strip_size_kb": 64, 00:22:18.714 "state": "configuring", 00:22:18.714 "raid_level": "raid0", 00:22:18.714 "superblock": true, 00:22:18.714 "num_base_bdevs": 4, 00:22:18.714 "num_base_bdevs_discovered": 1, 00:22:18.714 "num_base_bdevs_operational": 4, 00:22:18.714 "base_bdevs_list": [ 00:22:18.714 { 00:22:18.714 "name": "BaseBdev1", 00:22:18.714 "uuid": "d9df8d9e-eefe-41d6-bc66-a7e5ccbb1d8e", 00:22:18.714 "is_configured": true, 00:22:18.714 "data_offset": 2048, 00:22:18.714 "data_size": 63488 00:22:18.714 }, 00:22:18.714 { 00:22:18.714 "name": "BaseBdev2", 00:22:18.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:18.714 "is_configured": false, 00:22:18.714 "data_offset": 0, 00:22:18.714 "data_size": 0 00:22:18.714 }, 00:22:18.714 { 00:22:18.714 "name": "BaseBdev3", 00:22:18.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:18.714 "is_configured": false, 00:22:18.714 "data_offset": 0, 00:22:18.714 "data_size": 0 00:22:18.714 }, 00:22:18.714 { 00:22:18.714 "name": "BaseBdev4", 00:22:18.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:18.714 "is_configured": false, 00:22:18.714 "data_offset": 0, 00:22:18.714 "data_size": 0 00:22:18.714 } 00:22:18.714 ] 00:22:18.714 }' 00:22:18.714 04:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:18.714 04:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:19.651 04:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:19.910 [2024-07-23 04:17:28.556056] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:19.910 BaseBdev2 00:22:19.910 04:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:22:19.910 04:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:22:19.910 04:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:19.910 04:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:19.910 04:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:19.910 04:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:19.910 04:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:20.169 04:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:20.428 [ 00:22:20.428 { 00:22:20.428 "name": "BaseBdev2", 00:22:20.428 "aliases": [ 00:22:20.428 "ee15302c-55f4-43ad-b520-8a88fc57114b" 00:22:20.428 ], 00:22:20.428 "product_name": "Malloc disk", 00:22:20.428 "block_size": 512, 00:22:20.428 "num_blocks": 65536, 00:22:20.428 "uuid": "ee15302c-55f4-43ad-b520-8a88fc57114b", 00:22:20.428 "assigned_rate_limits": { 00:22:20.428 "rw_ios_per_sec": 0, 00:22:20.428 "rw_mbytes_per_sec": 0, 00:22:20.428 "r_mbytes_per_sec": 0, 00:22:20.428 "w_mbytes_per_sec": 0 00:22:20.428 }, 00:22:20.428 "claimed": true, 00:22:20.428 "claim_type": "exclusive_write", 00:22:20.428 "zoned": false, 00:22:20.428 "supported_io_types": { 
00:22:20.428 "read": true, 00:22:20.428 "write": true, 00:22:20.428 "unmap": true, 00:22:20.428 "flush": true, 00:22:20.428 "reset": true, 00:22:20.428 "nvme_admin": false, 00:22:20.428 "nvme_io": false, 00:22:20.428 "nvme_io_md": false, 00:22:20.428 "write_zeroes": true, 00:22:20.428 "zcopy": true, 00:22:20.428 "get_zone_info": false, 00:22:20.428 "zone_management": false, 00:22:20.428 "zone_append": false, 00:22:20.428 "compare": false, 00:22:20.428 "compare_and_write": false, 00:22:20.428 "abort": true, 00:22:20.428 "seek_hole": false, 00:22:20.428 "seek_data": false, 00:22:20.428 "copy": true, 00:22:20.428 "nvme_iov_md": false 00:22:20.428 }, 00:22:20.428 "memory_domains": [ 00:22:20.428 { 00:22:20.428 "dma_device_id": "system", 00:22:20.428 "dma_device_type": 1 00:22:20.428 }, 00:22:20.428 { 00:22:20.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:20.428 "dma_device_type": 2 00:22:20.428 } 00:22:20.428 ], 00:22:20.428 "driver_specific": {} 00:22:20.428 } 00:22:20.428 ] 00:22:20.428 04:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:20.428 04:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:20.428 04:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:20.428 04:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:20.428 04:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:20.428 04:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:20.428 04:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:20.428 04:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:20.428 04:17:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:20.428 04:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:20.428 04:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:20.428 04:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:20.428 04:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:20.428 04:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:20.428 04:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:20.700 04:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:20.700 "name": "Existed_Raid", 00:22:20.700 "uuid": "1ce2f745-5d02-4ee1-a2b0-59d1c6dfe2a4", 00:22:20.700 "strip_size_kb": 64, 00:22:20.700 "state": "configuring", 00:22:20.700 "raid_level": "raid0", 00:22:20.700 "superblock": true, 00:22:20.700 "num_base_bdevs": 4, 00:22:20.700 "num_base_bdevs_discovered": 2, 00:22:20.700 "num_base_bdevs_operational": 4, 00:22:20.700 "base_bdevs_list": [ 00:22:20.700 { 00:22:20.700 "name": "BaseBdev1", 00:22:20.700 "uuid": "d9df8d9e-eefe-41d6-bc66-a7e5ccbb1d8e", 00:22:20.700 "is_configured": true, 00:22:20.700 "data_offset": 2048, 00:22:20.700 "data_size": 63488 00:22:20.700 }, 00:22:20.700 { 00:22:20.700 "name": "BaseBdev2", 00:22:20.700 "uuid": "ee15302c-55f4-43ad-b520-8a88fc57114b", 00:22:20.700 "is_configured": true, 00:22:20.700 "data_offset": 2048, 00:22:20.700 "data_size": 63488 00:22:20.700 }, 00:22:20.700 { 00:22:20.700 "name": "BaseBdev3", 00:22:20.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:20.700 "is_configured": false, 00:22:20.700 "data_offset": 0, 00:22:20.700 
"data_size": 0 00:22:20.700 }, 00:22:20.700 { 00:22:20.700 "name": "BaseBdev4", 00:22:20.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:20.700 "is_configured": false, 00:22:20.700 "data_offset": 0, 00:22:20.700 "data_size": 0 00:22:20.700 } 00:22:20.700 ] 00:22:20.700 }' 00:22:20.700 04:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:20.700 04:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:21.286 04:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:21.286 [2024-07-23 04:17:30.013862] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:21.286 BaseBdev3 00:22:21.286 04:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:22:21.286 04:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:22:21.286 04:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:21.286 04:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:21.286 04:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:21.286 04:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:21.286 04:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:21.545 04:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:21.804 [ 
00:22:21.804 { 00:22:21.804 "name": "BaseBdev3", 00:22:21.804 "aliases": [ 00:22:21.804 "f51978be-5f57-432e-8668-802e5a93435e" 00:22:21.804 ], 00:22:21.804 "product_name": "Malloc disk", 00:22:21.804 "block_size": 512, 00:22:21.804 "num_blocks": 65536, 00:22:21.804 "uuid": "f51978be-5f57-432e-8668-802e5a93435e", 00:22:21.804 "assigned_rate_limits": { 00:22:21.804 "rw_ios_per_sec": 0, 00:22:21.804 "rw_mbytes_per_sec": 0, 00:22:21.804 "r_mbytes_per_sec": 0, 00:22:21.804 "w_mbytes_per_sec": 0 00:22:21.804 }, 00:22:21.804 "claimed": true, 00:22:21.804 "claim_type": "exclusive_write", 00:22:21.804 "zoned": false, 00:22:21.804 "supported_io_types": { 00:22:21.804 "read": true, 00:22:21.804 "write": true, 00:22:21.804 "unmap": true, 00:22:21.804 "flush": true, 00:22:21.804 "reset": true, 00:22:21.804 "nvme_admin": false, 00:22:21.804 "nvme_io": false, 00:22:21.804 "nvme_io_md": false, 00:22:21.804 "write_zeroes": true, 00:22:21.804 "zcopy": true, 00:22:21.804 "get_zone_info": false, 00:22:21.804 "zone_management": false, 00:22:21.804 "zone_append": false, 00:22:21.804 "compare": false, 00:22:21.804 "compare_and_write": false, 00:22:21.804 "abort": true, 00:22:21.804 "seek_hole": false, 00:22:21.804 "seek_data": false, 00:22:21.804 "copy": true, 00:22:21.804 "nvme_iov_md": false 00:22:21.804 }, 00:22:21.804 "memory_domains": [ 00:22:21.804 { 00:22:21.804 "dma_device_id": "system", 00:22:21.804 "dma_device_type": 1 00:22:21.804 }, 00:22:21.804 { 00:22:21.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:21.804 "dma_device_type": 2 00:22:21.804 } 00:22:21.804 ], 00:22:21.804 "driver_specific": {} 00:22:21.804 } 00:22:21.804 ] 00:22:21.804 04:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:21.804 04:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:21.804 04:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:21.804 04:17:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:21.805 04:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:21.805 04:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:21.805 04:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:21.805 04:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:21.805 04:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:21.805 04:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:21.805 04:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:21.805 04:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:21.805 04:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:21.805 04:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:21.805 04:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:22.064 04:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:22.064 "name": "Existed_Raid", 00:22:22.064 "uuid": "1ce2f745-5d02-4ee1-a2b0-59d1c6dfe2a4", 00:22:22.064 "strip_size_kb": 64, 00:22:22.064 "state": "configuring", 00:22:22.064 "raid_level": "raid0", 00:22:22.064 "superblock": true, 00:22:22.064 "num_base_bdevs": 4, 00:22:22.064 "num_base_bdevs_discovered": 3, 00:22:22.064 "num_base_bdevs_operational": 4, 00:22:22.064 
"base_bdevs_list": [ 00:22:22.064 { 00:22:22.064 "name": "BaseBdev1", 00:22:22.064 "uuid": "d9df8d9e-eefe-41d6-bc66-a7e5ccbb1d8e", 00:22:22.064 "is_configured": true, 00:22:22.064 "data_offset": 2048, 00:22:22.064 "data_size": 63488 00:22:22.064 }, 00:22:22.064 { 00:22:22.064 "name": "BaseBdev2", 00:22:22.064 "uuid": "ee15302c-55f4-43ad-b520-8a88fc57114b", 00:22:22.064 "is_configured": true, 00:22:22.064 "data_offset": 2048, 00:22:22.064 "data_size": 63488 00:22:22.064 }, 00:22:22.064 { 00:22:22.064 "name": "BaseBdev3", 00:22:22.064 "uuid": "f51978be-5f57-432e-8668-802e5a93435e", 00:22:22.064 "is_configured": true, 00:22:22.064 "data_offset": 2048, 00:22:22.064 "data_size": 63488 00:22:22.064 }, 00:22:22.064 { 00:22:22.064 "name": "BaseBdev4", 00:22:22.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:22.064 "is_configured": false, 00:22:22.064 "data_offset": 0, 00:22:22.064 "data_size": 0 00:22:22.064 } 00:22:22.064 ] 00:22:22.064 }' 00:22:22.064 04:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:22.064 04:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:23.000 04:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:23.259 [2024-07-23 04:17:31.793227] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:23.259 [2024-07-23 04:17:31.793528] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:22:23.259 [2024-07-23 04:17:31.793548] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:23.259 [2024-07-23 04:17:31.793874] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:22:23.259 [2024-07-23 04:17:31.794106] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x61600003ff80 00:22:23.259 [2024-07-23 04:17:31.794125] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:22:23.259 [2024-07-23 04:17:31.794307] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:23.259 BaseBdev4 00:22:23.259 04:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:22:23.259 04:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:22:23.259 04:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:23.259 04:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:23.259 04:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:23.259 04:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:23.259 04:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:23.259 04:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:23.518 [ 00:22:23.518 { 00:22:23.518 "name": "BaseBdev4", 00:22:23.518 "aliases": [ 00:22:23.518 "9d03085e-94cb-4208-8342-6e4b6aa0a21a" 00:22:23.518 ], 00:22:23.518 "product_name": "Malloc disk", 00:22:23.518 "block_size": 512, 00:22:23.518 "num_blocks": 65536, 00:22:23.518 "uuid": "9d03085e-94cb-4208-8342-6e4b6aa0a21a", 00:22:23.518 "assigned_rate_limits": { 00:22:23.518 "rw_ios_per_sec": 0, 00:22:23.518 "rw_mbytes_per_sec": 0, 00:22:23.518 "r_mbytes_per_sec": 0, 00:22:23.518 "w_mbytes_per_sec": 0 00:22:23.518 }, 00:22:23.518 "claimed": true, 
00:22:23.518 "claim_type": "exclusive_write", 00:22:23.518 "zoned": false, 00:22:23.518 "supported_io_types": { 00:22:23.518 "read": true, 00:22:23.518 "write": true, 00:22:23.518 "unmap": true, 00:22:23.518 "flush": true, 00:22:23.518 "reset": true, 00:22:23.518 "nvme_admin": false, 00:22:23.518 "nvme_io": false, 00:22:23.518 "nvme_io_md": false, 00:22:23.518 "write_zeroes": true, 00:22:23.518 "zcopy": true, 00:22:23.518 "get_zone_info": false, 00:22:23.518 "zone_management": false, 00:22:23.518 "zone_append": false, 00:22:23.518 "compare": false, 00:22:23.518 "compare_and_write": false, 00:22:23.518 "abort": true, 00:22:23.518 "seek_hole": false, 00:22:23.518 "seek_data": false, 00:22:23.518 "copy": true, 00:22:23.518 "nvme_iov_md": false 00:22:23.518 }, 00:22:23.518 "memory_domains": [ 00:22:23.518 { 00:22:23.518 "dma_device_id": "system", 00:22:23.518 "dma_device_type": 1 00:22:23.518 }, 00:22:23.518 { 00:22:23.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:23.518 "dma_device_type": 2 00:22:23.518 } 00:22:23.518 ], 00:22:23.518 "driver_specific": {} 00:22:23.518 } 00:22:23.518 ] 00:22:23.518 04:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:23.518 04:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:23.518 04:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:23.518 04:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:22:23.518 04:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:23.518 04:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:23.518 04:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:23.518 04:17:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:23.518 04:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:23.518 04:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:23.518 04:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:23.518 04:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:23.518 04:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:23.518 04:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:23.518 04:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:23.777 04:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:23.777 "name": "Existed_Raid", 00:22:23.777 "uuid": "1ce2f745-5d02-4ee1-a2b0-59d1c6dfe2a4", 00:22:23.777 "strip_size_kb": 64, 00:22:23.777 "state": "online", 00:22:23.777 "raid_level": "raid0", 00:22:23.777 "superblock": true, 00:22:23.777 "num_base_bdevs": 4, 00:22:23.777 "num_base_bdevs_discovered": 4, 00:22:23.777 "num_base_bdevs_operational": 4, 00:22:23.777 "base_bdevs_list": [ 00:22:23.777 { 00:22:23.777 "name": "BaseBdev1", 00:22:23.777 "uuid": "d9df8d9e-eefe-41d6-bc66-a7e5ccbb1d8e", 00:22:23.777 "is_configured": true, 00:22:23.777 "data_offset": 2048, 00:22:23.777 "data_size": 63488 00:22:23.777 }, 00:22:23.777 { 00:22:23.777 "name": "BaseBdev2", 00:22:23.777 "uuid": "ee15302c-55f4-43ad-b520-8a88fc57114b", 00:22:23.777 "is_configured": true, 00:22:23.777 "data_offset": 2048, 00:22:23.777 "data_size": 63488 00:22:23.777 }, 00:22:23.777 { 00:22:23.777 "name": "BaseBdev3", 00:22:23.777 "uuid": 
"f51978be-5f57-432e-8668-802e5a93435e", 00:22:23.777 "is_configured": true, 00:22:23.777 "data_offset": 2048, 00:22:23.777 "data_size": 63488 00:22:23.777 }, 00:22:23.777 { 00:22:23.777 "name": "BaseBdev4", 00:22:23.777 "uuid": "9d03085e-94cb-4208-8342-6e4b6aa0a21a", 00:22:23.777 "is_configured": true, 00:22:23.777 "data_offset": 2048, 00:22:23.777 "data_size": 63488 00:22:23.777 } 00:22:23.777 ] 00:22:23.777 }' 00:22:23.777 04:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:23.777 04:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:24.345 04:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:22:24.345 04:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:24.345 04:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:24.345 04:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:24.345 04:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:24.345 04:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:22:24.345 04:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:24.345 04:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:24.604 [2024-07-23 04:17:33.181481] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:24.604 04:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:24.604 "name": "Existed_Raid", 00:22:24.604 "aliases": [ 00:22:24.604 "1ce2f745-5d02-4ee1-a2b0-59d1c6dfe2a4" 00:22:24.604 ], 00:22:24.604 
"product_name": "Raid Volume", 00:22:24.604 "block_size": 512, 00:22:24.604 "num_blocks": 253952, 00:22:24.604 "uuid": "1ce2f745-5d02-4ee1-a2b0-59d1c6dfe2a4", 00:22:24.604 "assigned_rate_limits": { 00:22:24.604 "rw_ios_per_sec": 0, 00:22:24.604 "rw_mbytes_per_sec": 0, 00:22:24.604 "r_mbytes_per_sec": 0, 00:22:24.604 "w_mbytes_per_sec": 0 00:22:24.604 }, 00:22:24.604 "claimed": false, 00:22:24.604 "zoned": false, 00:22:24.604 "supported_io_types": { 00:22:24.604 "read": true, 00:22:24.604 "write": true, 00:22:24.604 "unmap": true, 00:22:24.604 "flush": true, 00:22:24.604 "reset": true, 00:22:24.604 "nvme_admin": false, 00:22:24.604 "nvme_io": false, 00:22:24.604 "nvme_io_md": false, 00:22:24.604 "write_zeroes": true, 00:22:24.604 "zcopy": false, 00:22:24.604 "get_zone_info": false, 00:22:24.604 "zone_management": false, 00:22:24.604 "zone_append": false, 00:22:24.604 "compare": false, 00:22:24.604 "compare_and_write": false, 00:22:24.604 "abort": false, 00:22:24.604 "seek_hole": false, 00:22:24.604 "seek_data": false, 00:22:24.604 "copy": false, 00:22:24.604 "nvme_iov_md": false 00:22:24.604 }, 00:22:24.604 "memory_domains": [ 00:22:24.604 { 00:22:24.604 "dma_device_id": "system", 00:22:24.604 "dma_device_type": 1 00:22:24.604 }, 00:22:24.604 { 00:22:24.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:24.604 "dma_device_type": 2 00:22:24.604 }, 00:22:24.604 { 00:22:24.604 "dma_device_id": "system", 00:22:24.604 "dma_device_type": 1 00:22:24.604 }, 00:22:24.604 { 00:22:24.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:24.604 "dma_device_type": 2 00:22:24.604 }, 00:22:24.604 { 00:22:24.604 "dma_device_id": "system", 00:22:24.604 "dma_device_type": 1 00:22:24.604 }, 00:22:24.604 { 00:22:24.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:24.604 "dma_device_type": 2 00:22:24.604 }, 00:22:24.604 { 00:22:24.604 "dma_device_id": "system", 00:22:24.604 "dma_device_type": 1 00:22:24.604 }, 00:22:24.604 { 00:22:24.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:22:24.604 "dma_device_type": 2 00:22:24.604 } 00:22:24.604 ], 00:22:24.604 "driver_specific": { 00:22:24.604 "raid": { 00:22:24.604 "uuid": "1ce2f745-5d02-4ee1-a2b0-59d1c6dfe2a4", 00:22:24.604 "strip_size_kb": 64, 00:22:24.604 "state": "online", 00:22:24.604 "raid_level": "raid0", 00:22:24.605 "superblock": true, 00:22:24.605 "num_base_bdevs": 4, 00:22:24.605 "num_base_bdevs_discovered": 4, 00:22:24.605 "num_base_bdevs_operational": 4, 00:22:24.605 "base_bdevs_list": [ 00:22:24.605 { 00:22:24.605 "name": "BaseBdev1", 00:22:24.605 "uuid": "d9df8d9e-eefe-41d6-bc66-a7e5ccbb1d8e", 00:22:24.605 "is_configured": true, 00:22:24.605 "data_offset": 2048, 00:22:24.605 "data_size": 63488 00:22:24.605 }, 00:22:24.605 { 00:22:24.605 "name": "BaseBdev2", 00:22:24.605 "uuid": "ee15302c-55f4-43ad-b520-8a88fc57114b", 00:22:24.605 "is_configured": true, 00:22:24.605 "data_offset": 2048, 00:22:24.605 "data_size": 63488 00:22:24.605 }, 00:22:24.605 { 00:22:24.605 "name": "BaseBdev3", 00:22:24.605 "uuid": "f51978be-5f57-432e-8668-802e5a93435e", 00:22:24.605 "is_configured": true, 00:22:24.605 "data_offset": 2048, 00:22:24.605 "data_size": 63488 00:22:24.605 }, 00:22:24.605 { 00:22:24.605 "name": "BaseBdev4", 00:22:24.605 "uuid": "9d03085e-94cb-4208-8342-6e4b6aa0a21a", 00:22:24.605 "is_configured": true, 00:22:24.605 "data_offset": 2048, 00:22:24.605 "data_size": 63488 00:22:24.605 } 00:22:24.605 ] 00:22:24.605 } 00:22:24.605 } 00:22:24.605 }' 00:22:24.605 04:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:24.605 04:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:22:24.605 BaseBdev2 00:22:24.605 BaseBdev3 00:22:24.605 BaseBdev4' 00:22:24.605 04:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:24.605 04:17:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:22:24.605 04:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:24.863 04:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:24.863 "name": "BaseBdev1", 00:22:24.864 "aliases": [ 00:22:24.864 "d9df8d9e-eefe-41d6-bc66-a7e5ccbb1d8e" 00:22:24.864 ], 00:22:24.864 "product_name": "Malloc disk", 00:22:24.864 "block_size": 512, 00:22:24.864 "num_blocks": 65536, 00:22:24.864 "uuid": "d9df8d9e-eefe-41d6-bc66-a7e5ccbb1d8e", 00:22:24.864 "assigned_rate_limits": { 00:22:24.864 "rw_ios_per_sec": 0, 00:22:24.864 "rw_mbytes_per_sec": 0, 00:22:24.864 "r_mbytes_per_sec": 0, 00:22:24.864 "w_mbytes_per_sec": 0 00:22:24.864 }, 00:22:24.864 "claimed": true, 00:22:24.864 "claim_type": "exclusive_write", 00:22:24.864 "zoned": false, 00:22:24.864 "supported_io_types": { 00:22:24.864 "read": true, 00:22:24.864 "write": true, 00:22:24.864 "unmap": true, 00:22:24.864 "flush": true, 00:22:24.864 "reset": true, 00:22:24.864 "nvme_admin": false, 00:22:24.864 "nvme_io": false, 00:22:24.864 "nvme_io_md": false, 00:22:24.864 "write_zeroes": true, 00:22:24.864 "zcopy": true, 00:22:24.864 "get_zone_info": false, 00:22:24.864 "zone_management": false, 00:22:24.864 "zone_append": false, 00:22:24.864 "compare": false, 00:22:24.864 "compare_and_write": false, 00:22:24.864 "abort": true, 00:22:24.864 "seek_hole": false, 00:22:24.864 "seek_data": false, 00:22:24.864 "copy": true, 00:22:24.864 "nvme_iov_md": false 00:22:24.864 }, 00:22:24.864 "memory_domains": [ 00:22:24.864 { 00:22:24.864 "dma_device_id": "system", 00:22:24.864 "dma_device_type": 1 00:22:24.864 }, 00:22:24.864 { 00:22:24.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:24.864 "dma_device_type": 2 00:22:24.864 } 00:22:24.864 ], 00:22:24.864 "driver_specific": {} 00:22:24.864 }' 00:22:24.864 04:17:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:24.864 04:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:24.864 04:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:24.864 04:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:24.864 04:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:24.864 04:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:24.864 04:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:25.123 04:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:25.123 04:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:25.123 04:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:25.123 04:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:25.123 04:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:25.123 04:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:25.123 04:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:25.123 04:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:25.381 04:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:25.381 "name": "BaseBdev2", 00:22:25.381 "aliases": [ 00:22:25.381 "ee15302c-55f4-43ad-b520-8a88fc57114b" 00:22:25.381 ], 00:22:25.381 "product_name": "Malloc disk", 00:22:25.381 "block_size": 512, 00:22:25.381 
"num_blocks": 65536, 00:22:25.381 "uuid": "ee15302c-55f4-43ad-b520-8a88fc57114b", 00:22:25.381 "assigned_rate_limits": { 00:22:25.381 "rw_ios_per_sec": 0, 00:22:25.381 "rw_mbytes_per_sec": 0, 00:22:25.381 "r_mbytes_per_sec": 0, 00:22:25.381 "w_mbytes_per_sec": 0 00:22:25.381 }, 00:22:25.381 "claimed": true, 00:22:25.381 "claim_type": "exclusive_write", 00:22:25.381 "zoned": false, 00:22:25.381 "supported_io_types": { 00:22:25.381 "read": true, 00:22:25.381 "write": true, 00:22:25.381 "unmap": true, 00:22:25.381 "flush": true, 00:22:25.381 "reset": true, 00:22:25.381 "nvme_admin": false, 00:22:25.381 "nvme_io": false, 00:22:25.381 "nvme_io_md": false, 00:22:25.381 "write_zeroes": true, 00:22:25.381 "zcopy": true, 00:22:25.381 "get_zone_info": false, 00:22:25.381 "zone_management": false, 00:22:25.381 "zone_append": false, 00:22:25.381 "compare": false, 00:22:25.381 "compare_and_write": false, 00:22:25.381 "abort": true, 00:22:25.381 "seek_hole": false, 00:22:25.381 "seek_data": false, 00:22:25.381 "copy": true, 00:22:25.381 "nvme_iov_md": false 00:22:25.381 }, 00:22:25.381 "memory_domains": [ 00:22:25.381 { 00:22:25.381 "dma_device_id": "system", 00:22:25.381 "dma_device_type": 1 00:22:25.381 }, 00:22:25.381 { 00:22:25.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:25.381 "dma_device_type": 2 00:22:25.381 } 00:22:25.381 ], 00:22:25.381 "driver_specific": {} 00:22:25.381 }' 00:22:25.381 04:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:25.381 04:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:25.381 04:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:25.381 04:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:25.640 04:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:25.640 04:17:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:25.640 04:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:25.640 04:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:25.640 04:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:25.640 04:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:25.640 04:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:25.640 04:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:25.640 04:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:25.640 04:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:25.640 04:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:25.899 04:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:25.899 "name": "BaseBdev3", 00:22:25.899 "aliases": [ 00:22:25.899 "f51978be-5f57-432e-8668-802e5a93435e" 00:22:25.899 ], 00:22:25.899 "product_name": "Malloc disk", 00:22:25.899 "block_size": 512, 00:22:25.899 "num_blocks": 65536, 00:22:25.899 "uuid": "f51978be-5f57-432e-8668-802e5a93435e", 00:22:25.899 "assigned_rate_limits": { 00:22:25.899 "rw_ios_per_sec": 0, 00:22:25.899 "rw_mbytes_per_sec": 0, 00:22:25.899 "r_mbytes_per_sec": 0, 00:22:25.899 "w_mbytes_per_sec": 0 00:22:25.899 }, 00:22:25.899 "claimed": true, 00:22:25.899 "claim_type": "exclusive_write", 00:22:25.899 "zoned": false, 00:22:25.899 "supported_io_types": { 00:22:25.899 "read": true, 00:22:25.899 "write": true, 00:22:25.899 "unmap": true, 00:22:25.899 "flush": true, 00:22:25.899 "reset": true, 
00:22:25.899 "nvme_admin": false, 00:22:25.899 "nvme_io": false, 00:22:25.899 "nvme_io_md": false, 00:22:25.899 "write_zeroes": true, 00:22:25.899 "zcopy": true, 00:22:25.899 "get_zone_info": false, 00:22:25.899 "zone_management": false, 00:22:25.899 "zone_append": false, 00:22:25.899 "compare": false, 00:22:25.899 "compare_and_write": false, 00:22:25.899 "abort": true, 00:22:25.899 "seek_hole": false, 00:22:25.899 "seek_data": false, 00:22:25.899 "copy": true, 00:22:25.899 "nvme_iov_md": false 00:22:25.899 }, 00:22:25.899 "memory_domains": [ 00:22:25.899 { 00:22:25.899 "dma_device_id": "system", 00:22:25.899 "dma_device_type": 1 00:22:25.899 }, 00:22:25.899 { 00:22:25.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:25.899 "dma_device_type": 2 00:22:25.899 } 00:22:25.899 ], 00:22:25.899 "driver_specific": {} 00:22:25.899 }' 00:22:25.899 04:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:25.899 04:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:26.158 04:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:26.158 04:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:26.158 04:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:26.158 04:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:26.158 04:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:26.158 04:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:26.158 04:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:26.158 04:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:26.158 04:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:22:26.417 04:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:26.417 04:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:26.417 04:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:22:26.417 04:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:26.417 04:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:26.417 "name": "BaseBdev4", 00:22:26.417 "aliases": [ 00:22:26.417 "9d03085e-94cb-4208-8342-6e4b6aa0a21a" 00:22:26.417 ], 00:22:26.417 "product_name": "Malloc disk", 00:22:26.417 "block_size": 512, 00:22:26.417 "num_blocks": 65536, 00:22:26.417 "uuid": "9d03085e-94cb-4208-8342-6e4b6aa0a21a", 00:22:26.417 "assigned_rate_limits": { 00:22:26.417 "rw_ios_per_sec": 0, 00:22:26.417 "rw_mbytes_per_sec": 0, 00:22:26.417 "r_mbytes_per_sec": 0, 00:22:26.417 "w_mbytes_per_sec": 0 00:22:26.417 }, 00:22:26.417 "claimed": true, 00:22:26.417 "claim_type": "exclusive_write", 00:22:26.417 "zoned": false, 00:22:26.417 "supported_io_types": { 00:22:26.417 "read": true, 00:22:26.417 "write": true, 00:22:26.417 "unmap": true, 00:22:26.417 "flush": true, 00:22:26.417 "reset": true, 00:22:26.417 "nvme_admin": false, 00:22:26.417 "nvme_io": false, 00:22:26.417 "nvme_io_md": false, 00:22:26.417 "write_zeroes": true, 00:22:26.417 "zcopy": true, 00:22:26.417 "get_zone_info": false, 00:22:26.417 "zone_management": false, 00:22:26.417 "zone_append": false, 00:22:26.417 "compare": false, 00:22:26.417 "compare_and_write": false, 00:22:26.417 "abort": true, 00:22:26.417 "seek_hole": false, 00:22:26.417 "seek_data": false, 00:22:26.417 "copy": true, 00:22:26.417 "nvme_iov_md": false 00:22:26.417 }, 00:22:26.417 "memory_domains": [ 00:22:26.417 { 
00:22:26.417 "dma_device_id": "system", 00:22:26.417 "dma_device_type": 1 00:22:26.417 }, 00:22:26.417 { 00:22:26.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:26.417 "dma_device_type": 2 00:22:26.417 } 00:22:26.417 ], 00:22:26.417 "driver_specific": {} 00:22:26.417 }' 00:22:26.417 04:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:26.675 04:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:26.675 04:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:26.676 04:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:26.676 04:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:26.676 04:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:26.676 04:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:26.676 04:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:26.676 04:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:26.676 04:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:26.935 04:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:26.935 04:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:26.935 04:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:27.194 [2024-07-23 04:17:35.740037] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:27.194 [2024-07-23 04:17:35.740073] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to 
offline 00:22:27.194 [2024-07-23 04:17:35.740129] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:27.194 04:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:22:27.194 04:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:22:27.194 04:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:27.194 04:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:22:27.194 04:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:22:27.194 04:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:22:27.194 04:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:27.194 04:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:22:27.194 04:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:27.194 04:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:27.194 04:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:27.194 04:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:27.194 04:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:27.194 04:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:27.194 04:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:27.194 04:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:27.194 04:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:27.453 04:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:27.453 "name": "Existed_Raid", 00:22:27.453 "uuid": "1ce2f745-5d02-4ee1-a2b0-59d1c6dfe2a4", 00:22:27.453 "strip_size_kb": 64, 00:22:27.453 "state": "offline", 00:22:27.453 "raid_level": "raid0", 00:22:27.453 "superblock": true, 00:22:27.453 "num_base_bdevs": 4, 00:22:27.453 "num_base_bdevs_discovered": 3, 00:22:27.453 "num_base_bdevs_operational": 3, 00:22:27.453 "base_bdevs_list": [ 00:22:27.453 { 00:22:27.453 "name": null, 00:22:27.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:27.453 "is_configured": false, 00:22:27.453 "data_offset": 2048, 00:22:27.453 "data_size": 63488 00:22:27.453 }, 00:22:27.453 { 00:22:27.453 "name": "BaseBdev2", 00:22:27.453 "uuid": "ee15302c-55f4-43ad-b520-8a88fc57114b", 00:22:27.453 "is_configured": true, 00:22:27.453 "data_offset": 2048, 00:22:27.453 "data_size": 63488 00:22:27.453 }, 00:22:27.453 { 00:22:27.453 "name": "BaseBdev3", 00:22:27.453 "uuid": "f51978be-5f57-432e-8668-802e5a93435e", 00:22:27.453 "is_configured": true, 00:22:27.453 "data_offset": 2048, 00:22:27.453 "data_size": 63488 00:22:27.453 }, 00:22:27.453 { 00:22:27.453 "name": "BaseBdev4", 00:22:27.453 "uuid": "9d03085e-94cb-4208-8342-6e4b6aa0a21a", 00:22:27.453 "is_configured": true, 00:22:27.453 "data_offset": 2048, 00:22:27.453 "data_size": 63488 00:22:27.453 } 00:22:27.453 ] 00:22:27.453 }' 00:22:27.453 04:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:27.453 04:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:28.022 04:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:22:28.022 04:17:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:28.022 04:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:28.022 04:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:28.281 04:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:28.281 04:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:28.281 04:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:22:28.281 [2024-07-23 04:17:37.050789] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:28.541 04:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:28.541 04:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:28.541 04:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:28.541 04:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:28.799 04:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:28.799 04:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:28.799 04:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:22:29.058 [2024-07-23 04:17:37.643642] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: 
*DEBUG*: BaseBdev3 00:22:29.058 04:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:29.058 04:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:29.058 04:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:29.058 04:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:29.317 04:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:29.317 04:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:29.317 04:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:22:29.585 [2024-07-23 04:17:38.239115] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:22:29.585 [2024-07-23 04:17:38.239180] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:22:29.843 04:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:29.843 04:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:29.843 04:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:29.843 04:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:22:29.843 04:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:22:29.843 04:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' 
-n '' ']' 00:22:29.843 04:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:22:29.843 04:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:22:29.843 04:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:29.843 04:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:30.101 BaseBdev2 00:22:30.101 04:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:22:30.101 04:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:22:30.101 04:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:30.101 04:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:30.101 04:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:30.101 04:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:30.101 04:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:30.358 04:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:30.616 [ 00:22:30.617 { 00:22:30.617 "name": "BaseBdev2", 00:22:30.617 "aliases": [ 00:22:30.617 "15bcf6f1-41b8-413e-8139-650bd0193278" 00:22:30.617 ], 00:22:30.617 "product_name": "Malloc disk", 00:22:30.617 "block_size": 512, 00:22:30.617 "num_blocks": 65536, 00:22:30.617 "uuid": 
"15bcf6f1-41b8-413e-8139-650bd0193278", 00:22:30.617 "assigned_rate_limits": { 00:22:30.617 "rw_ios_per_sec": 0, 00:22:30.617 "rw_mbytes_per_sec": 0, 00:22:30.617 "r_mbytes_per_sec": 0, 00:22:30.617 "w_mbytes_per_sec": 0 00:22:30.617 }, 00:22:30.617 "claimed": false, 00:22:30.617 "zoned": false, 00:22:30.617 "supported_io_types": { 00:22:30.617 "read": true, 00:22:30.617 "write": true, 00:22:30.617 "unmap": true, 00:22:30.617 "flush": true, 00:22:30.617 "reset": true, 00:22:30.617 "nvme_admin": false, 00:22:30.617 "nvme_io": false, 00:22:30.617 "nvme_io_md": false, 00:22:30.617 "write_zeroes": true, 00:22:30.617 "zcopy": true, 00:22:30.617 "get_zone_info": false, 00:22:30.617 "zone_management": false, 00:22:30.617 "zone_append": false, 00:22:30.617 "compare": false, 00:22:30.617 "compare_and_write": false, 00:22:30.617 "abort": true, 00:22:30.617 "seek_hole": false, 00:22:30.617 "seek_data": false, 00:22:30.617 "copy": true, 00:22:30.617 "nvme_iov_md": false 00:22:30.617 }, 00:22:30.617 "memory_domains": [ 00:22:30.617 { 00:22:30.617 "dma_device_id": "system", 00:22:30.617 "dma_device_type": 1 00:22:30.617 }, 00:22:30.617 { 00:22:30.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:30.617 "dma_device_type": 2 00:22:30.617 } 00:22:30.617 ], 00:22:30.617 "driver_specific": {} 00:22:30.617 } 00:22:30.617 ] 00:22:30.617 04:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:30.617 04:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:30.617 04:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:30.617 04:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:30.875 BaseBdev3 00:22:30.875 04:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev 
BaseBdev3 00:22:30.875 04:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:22:30.875 04:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:30.875 04:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:30.875 04:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:30.875 04:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:30.875 04:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:31.134 04:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:31.392 [ 00:22:31.392 { 00:22:31.392 "name": "BaseBdev3", 00:22:31.392 "aliases": [ 00:22:31.392 "f36c7ed2-64eb-4f56-8c3c-9c8315459315" 00:22:31.392 ], 00:22:31.392 "product_name": "Malloc disk", 00:22:31.392 "block_size": 512, 00:22:31.392 "num_blocks": 65536, 00:22:31.392 "uuid": "f36c7ed2-64eb-4f56-8c3c-9c8315459315", 00:22:31.392 "assigned_rate_limits": { 00:22:31.392 "rw_ios_per_sec": 0, 00:22:31.392 "rw_mbytes_per_sec": 0, 00:22:31.392 "r_mbytes_per_sec": 0, 00:22:31.392 "w_mbytes_per_sec": 0 00:22:31.392 }, 00:22:31.392 "claimed": false, 00:22:31.392 "zoned": false, 00:22:31.392 "supported_io_types": { 00:22:31.392 "read": true, 00:22:31.392 "write": true, 00:22:31.392 "unmap": true, 00:22:31.392 "flush": true, 00:22:31.392 "reset": true, 00:22:31.392 "nvme_admin": false, 00:22:31.392 "nvme_io": false, 00:22:31.392 "nvme_io_md": false, 00:22:31.392 "write_zeroes": true, 00:22:31.392 "zcopy": true, 00:22:31.392 "get_zone_info": false, 00:22:31.392 
"zone_management": false, 00:22:31.392 "zone_append": false, 00:22:31.392 "compare": false, 00:22:31.392 "compare_and_write": false, 00:22:31.392 "abort": true, 00:22:31.392 "seek_hole": false, 00:22:31.392 "seek_data": false, 00:22:31.392 "copy": true, 00:22:31.392 "nvme_iov_md": false 00:22:31.392 }, 00:22:31.392 "memory_domains": [ 00:22:31.392 { 00:22:31.392 "dma_device_id": "system", 00:22:31.392 "dma_device_type": 1 00:22:31.392 }, 00:22:31.392 { 00:22:31.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:31.392 "dma_device_type": 2 00:22:31.392 } 00:22:31.392 ], 00:22:31.392 "driver_specific": {} 00:22:31.392 } 00:22:31.392 ] 00:22:31.392 04:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:31.392 04:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:31.392 04:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:31.392 04:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:31.651 BaseBdev4 00:22:31.651 04:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:22:31.651 04:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:22:31.651 04:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:31.651 04:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:31.651 04:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:31.651 04:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:31.651 04:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:31.909 04:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:32.168 [ 00:22:32.168 { 00:22:32.168 "name": "BaseBdev4", 00:22:32.168 "aliases": [ 00:22:32.168 "48c9c015-b669-4b38-9cc3-984a6add4ee0" 00:22:32.168 ], 00:22:32.168 "product_name": "Malloc disk", 00:22:32.168 "block_size": 512, 00:22:32.168 "num_blocks": 65536, 00:22:32.168 "uuid": "48c9c015-b669-4b38-9cc3-984a6add4ee0", 00:22:32.168 "assigned_rate_limits": { 00:22:32.168 "rw_ios_per_sec": 0, 00:22:32.168 "rw_mbytes_per_sec": 0, 00:22:32.168 "r_mbytes_per_sec": 0, 00:22:32.168 "w_mbytes_per_sec": 0 00:22:32.168 }, 00:22:32.168 "claimed": false, 00:22:32.168 "zoned": false, 00:22:32.168 "supported_io_types": { 00:22:32.168 "read": true, 00:22:32.168 "write": true, 00:22:32.168 "unmap": true, 00:22:32.168 "flush": true, 00:22:32.168 "reset": true, 00:22:32.168 "nvme_admin": false, 00:22:32.168 "nvme_io": false, 00:22:32.168 "nvme_io_md": false, 00:22:32.168 "write_zeroes": true, 00:22:32.168 "zcopy": true, 00:22:32.168 "get_zone_info": false, 00:22:32.168 "zone_management": false, 00:22:32.168 "zone_append": false, 00:22:32.168 "compare": false, 00:22:32.168 "compare_and_write": false, 00:22:32.168 "abort": true, 00:22:32.168 "seek_hole": false, 00:22:32.168 "seek_data": false, 00:22:32.168 "copy": true, 00:22:32.168 "nvme_iov_md": false 00:22:32.168 }, 00:22:32.168 "memory_domains": [ 00:22:32.168 { 00:22:32.168 "dma_device_id": "system", 00:22:32.168 "dma_device_type": 1 00:22:32.168 }, 00:22:32.168 { 00:22:32.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:32.168 "dma_device_type": 2 00:22:32.168 } 00:22:32.168 ], 00:22:32.168 "driver_specific": {} 00:22:32.168 } 00:22:32.168 ] 00:22:32.168 04:17:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:32.168 04:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:32.168 04:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:32.168 04:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:32.429 [2024-07-23 04:17:40.994483] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:32.429 [2024-07-23 04:17:40.994528] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:32.429 [2024-07-23 04:17:40.994561] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:32.429 [2024-07-23 04:17:40.996895] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:32.429 [2024-07-23 04:17:40.996954] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:32.429 04:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:32.429 04:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:32.429 04:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:32.429 04:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:32.429 04:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:32.429 04:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:32.429 04:17:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:32.429 04:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:32.429 04:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:32.429 04:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:32.429 04:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:32.429 04:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:32.693 04:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:32.693 "name": "Existed_Raid", 00:22:32.693 "uuid": "a1615495-ca30-435f-85a1-82c40b3362d2", 00:22:32.693 "strip_size_kb": 64, 00:22:32.693 "state": "configuring", 00:22:32.693 "raid_level": "raid0", 00:22:32.693 "superblock": true, 00:22:32.693 "num_base_bdevs": 4, 00:22:32.693 "num_base_bdevs_discovered": 3, 00:22:32.693 "num_base_bdevs_operational": 4, 00:22:32.693 "base_bdevs_list": [ 00:22:32.693 { 00:22:32.693 "name": "BaseBdev1", 00:22:32.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:32.693 "is_configured": false, 00:22:32.693 "data_offset": 0, 00:22:32.693 "data_size": 0 00:22:32.693 }, 00:22:32.693 { 00:22:32.693 "name": "BaseBdev2", 00:22:32.693 "uuid": "15bcf6f1-41b8-413e-8139-650bd0193278", 00:22:32.693 "is_configured": true, 00:22:32.693 "data_offset": 2048, 00:22:32.693 "data_size": 63488 00:22:32.693 }, 00:22:32.693 { 00:22:32.693 "name": "BaseBdev3", 00:22:32.693 "uuid": "f36c7ed2-64eb-4f56-8c3c-9c8315459315", 00:22:32.693 "is_configured": true, 00:22:32.693 "data_offset": 2048, 00:22:32.693 "data_size": 63488 00:22:32.693 }, 00:22:32.693 { 00:22:32.693 "name": "BaseBdev4", 
00:22:32.693 "uuid": "48c9c015-b669-4b38-9cc3-984a6add4ee0", 00:22:32.693 "is_configured": true, 00:22:32.693 "data_offset": 2048, 00:22:32.693 "data_size": 63488 00:22:32.693 } 00:22:32.693 ] 00:22:32.693 }' 00:22:32.693 04:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:32.693 04:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:33.261 04:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:33.261 [2024-07-23 04:17:42.009165] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:33.261 04:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:33.261 04:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:33.261 04:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:33.261 04:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:33.261 04:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:33.261 04:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:33.261 04:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:33.261 04:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:33.261 04:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:33.261 04:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:33.261 04:17:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:33.261 04:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:33.520 04:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:33.520 "name": "Existed_Raid", 00:22:33.520 "uuid": "a1615495-ca30-435f-85a1-82c40b3362d2", 00:22:33.520 "strip_size_kb": 64, 00:22:33.520 "state": "configuring", 00:22:33.520 "raid_level": "raid0", 00:22:33.520 "superblock": true, 00:22:33.520 "num_base_bdevs": 4, 00:22:33.520 "num_base_bdevs_discovered": 2, 00:22:33.520 "num_base_bdevs_operational": 4, 00:22:33.520 "base_bdevs_list": [ 00:22:33.520 { 00:22:33.520 "name": "BaseBdev1", 00:22:33.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:33.520 "is_configured": false, 00:22:33.520 "data_offset": 0, 00:22:33.520 "data_size": 0 00:22:33.520 }, 00:22:33.520 { 00:22:33.520 "name": null, 00:22:33.520 "uuid": "15bcf6f1-41b8-413e-8139-650bd0193278", 00:22:33.520 "is_configured": false, 00:22:33.520 "data_offset": 2048, 00:22:33.520 "data_size": 63488 00:22:33.520 }, 00:22:33.520 { 00:22:33.520 "name": "BaseBdev3", 00:22:33.520 "uuid": "f36c7ed2-64eb-4f56-8c3c-9c8315459315", 00:22:33.520 "is_configured": true, 00:22:33.520 "data_offset": 2048, 00:22:33.520 "data_size": 63488 00:22:33.520 }, 00:22:33.520 { 00:22:33.520 "name": "BaseBdev4", 00:22:33.520 "uuid": "48c9c015-b669-4b38-9cc3-984a6add4ee0", 00:22:33.520 "is_configured": true, 00:22:33.520 "data_offset": 2048, 00:22:33.520 "data_size": 63488 00:22:33.520 } 00:22:33.520 ] 00:22:33.520 }' 00:22:33.520 04:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:33.520 04:17:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:34.088 04:17:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:34.088 04:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:34.346 04:17:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:22:34.346 04:17:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:34.605 [2024-07-23 04:17:43.313743] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:34.606 BaseBdev1 00:22:34.606 04:17:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:22:34.606 04:17:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:22:34.606 04:17:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:34.606 04:17:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:34.606 04:17:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:34.606 04:17:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:34.606 04:17:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:34.865 04:17:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:35.124 [ 00:22:35.124 { 00:22:35.124 "name": "BaseBdev1", 00:22:35.124 "aliases": [ 00:22:35.124 
"725a2481-c472-4352-9736-586fc66c163d" 00:22:35.124 ], 00:22:35.124 "product_name": "Malloc disk", 00:22:35.124 "block_size": 512, 00:22:35.124 "num_blocks": 65536, 00:22:35.124 "uuid": "725a2481-c472-4352-9736-586fc66c163d", 00:22:35.124 "assigned_rate_limits": { 00:22:35.124 "rw_ios_per_sec": 0, 00:22:35.124 "rw_mbytes_per_sec": 0, 00:22:35.124 "r_mbytes_per_sec": 0, 00:22:35.124 "w_mbytes_per_sec": 0 00:22:35.124 }, 00:22:35.124 "claimed": true, 00:22:35.124 "claim_type": "exclusive_write", 00:22:35.124 "zoned": false, 00:22:35.124 "supported_io_types": { 00:22:35.124 "read": true, 00:22:35.124 "write": true, 00:22:35.124 "unmap": true, 00:22:35.124 "flush": true, 00:22:35.124 "reset": true, 00:22:35.124 "nvme_admin": false, 00:22:35.124 "nvme_io": false, 00:22:35.124 "nvme_io_md": false, 00:22:35.124 "write_zeroes": true, 00:22:35.124 "zcopy": true, 00:22:35.124 "get_zone_info": false, 00:22:35.124 "zone_management": false, 00:22:35.124 "zone_append": false, 00:22:35.124 "compare": false, 00:22:35.124 "compare_and_write": false, 00:22:35.124 "abort": true, 00:22:35.124 "seek_hole": false, 00:22:35.124 "seek_data": false, 00:22:35.124 "copy": true, 00:22:35.124 "nvme_iov_md": false 00:22:35.124 }, 00:22:35.124 "memory_domains": [ 00:22:35.124 { 00:22:35.124 "dma_device_id": "system", 00:22:35.124 "dma_device_type": 1 00:22:35.124 }, 00:22:35.124 { 00:22:35.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:35.124 "dma_device_type": 2 00:22:35.124 } 00:22:35.124 ], 00:22:35.124 "driver_specific": {} 00:22:35.124 } 00:22:35.124 ] 00:22:35.124 04:17:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:35.124 04:17:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:35.124 04:17:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:35.124 04:17:43 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:35.124 04:17:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:35.124 04:17:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:35.124 04:17:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:35.124 04:17:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:35.124 04:17:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:35.124 04:17:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:35.124 04:17:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:35.124 04:17:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:35.124 04:17:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:35.383 04:17:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:35.383 "name": "Existed_Raid", 00:22:35.383 "uuid": "a1615495-ca30-435f-85a1-82c40b3362d2", 00:22:35.383 "strip_size_kb": 64, 00:22:35.383 "state": "configuring", 00:22:35.383 "raid_level": "raid0", 00:22:35.383 "superblock": true, 00:22:35.383 "num_base_bdevs": 4, 00:22:35.383 "num_base_bdevs_discovered": 3, 00:22:35.383 "num_base_bdevs_operational": 4, 00:22:35.383 "base_bdevs_list": [ 00:22:35.383 { 00:22:35.383 "name": "BaseBdev1", 00:22:35.383 "uuid": "725a2481-c472-4352-9736-586fc66c163d", 00:22:35.383 "is_configured": true, 00:22:35.383 "data_offset": 2048, 00:22:35.383 "data_size": 63488 00:22:35.383 }, 00:22:35.383 { 00:22:35.383 "name": null, 00:22:35.383 "uuid": 
"15bcf6f1-41b8-413e-8139-650bd0193278", 00:22:35.383 "is_configured": false, 00:22:35.383 "data_offset": 2048, 00:22:35.383 "data_size": 63488 00:22:35.383 }, 00:22:35.383 { 00:22:35.383 "name": "BaseBdev3", 00:22:35.383 "uuid": "f36c7ed2-64eb-4f56-8c3c-9c8315459315", 00:22:35.383 "is_configured": true, 00:22:35.383 "data_offset": 2048, 00:22:35.383 "data_size": 63488 00:22:35.383 }, 00:22:35.383 { 00:22:35.383 "name": "BaseBdev4", 00:22:35.383 "uuid": "48c9c015-b669-4b38-9cc3-984a6add4ee0", 00:22:35.383 "is_configured": true, 00:22:35.383 "data_offset": 2048, 00:22:35.383 "data_size": 63488 00:22:35.383 } 00:22:35.383 ] 00:22:35.383 }' 00:22:35.383 04:17:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:35.383 04:17:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:35.950 04:17:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:35.950 04:17:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:36.209 04:17:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:22:36.209 04:17:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:22:36.468 [2024-07-23 04:17:45.002452] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:36.468 04:17:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:36.468 04:17:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:36.468 04:17:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:22:36.468 04:17:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:36.468 04:17:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:36.468 04:17:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:36.468 04:17:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:36.468 04:17:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:36.468 04:17:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:36.468 04:17:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:36.468 04:17:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:36.468 04:17:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:36.468 04:17:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:36.468 "name": "Existed_Raid", 00:22:36.468 "uuid": "a1615495-ca30-435f-85a1-82c40b3362d2", 00:22:36.468 "strip_size_kb": 64, 00:22:36.468 "state": "configuring", 00:22:36.468 "raid_level": "raid0", 00:22:36.468 "superblock": true, 00:22:36.468 "num_base_bdevs": 4, 00:22:36.468 "num_base_bdevs_discovered": 2, 00:22:36.468 "num_base_bdevs_operational": 4, 00:22:36.468 "base_bdevs_list": [ 00:22:36.468 { 00:22:36.468 "name": "BaseBdev1", 00:22:36.468 "uuid": "725a2481-c472-4352-9736-586fc66c163d", 00:22:36.468 "is_configured": true, 00:22:36.468 "data_offset": 2048, 00:22:36.468 "data_size": 63488 00:22:36.468 }, 00:22:36.468 { 00:22:36.468 "name": null, 00:22:36.468 "uuid": "15bcf6f1-41b8-413e-8139-650bd0193278", 
00:22:36.468 "is_configured": false, 00:22:36.468 "data_offset": 2048, 00:22:36.468 "data_size": 63488 00:22:36.468 }, 00:22:36.468 { 00:22:36.468 "name": null, 00:22:36.468 "uuid": "f36c7ed2-64eb-4f56-8c3c-9c8315459315", 00:22:36.468 "is_configured": false, 00:22:36.468 "data_offset": 2048, 00:22:36.468 "data_size": 63488 00:22:36.468 }, 00:22:36.468 { 00:22:36.468 "name": "BaseBdev4", 00:22:36.468 "uuid": "48c9c015-b669-4b38-9cc3-984a6add4ee0", 00:22:36.468 "is_configured": true, 00:22:36.468 "data_offset": 2048, 00:22:36.468 "data_size": 63488 00:22:36.468 } 00:22:36.468 ] 00:22:36.468 }' 00:22:36.468 04:17:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:36.468 04:17:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:37.035 04:17:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:37.035 04:17:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:37.294 04:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:22:37.294 04:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:37.553 [2024-07-23 04:17:46.237789] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:37.553 04:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:37.553 04:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:37.553 04:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:22:37.553 04:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:37.553 04:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:37.553 04:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:37.553 04:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:37.553 04:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:37.553 04:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:37.553 04:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:37.553 04:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:37.553 04:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:37.812 04:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:37.812 "name": "Existed_Raid", 00:22:37.812 "uuid": "a1615495-ca30-435f-85a1-82c40b3362d2", 00:22:37.812 "strip_size_kb": 64, 00:22:37.812 "state": "configuring", 00:22:37.812 "raid_level": "raid0", 00:22:37.812 "superblock": true, 00:22:37.812 "num_base_bdevs": 4, 00:22:37.812 "num_base_bdevs_discovered": 3, 00:22:37.812 "num_base_bdevs_operational": 4, 00:22:37.812 "base_bdevs_list": [ 00:22:37.812 { 00:22:37.812 "name": "BaseBdev1", 00:22:37.812 "uuid": "725a2481-c472-4352-9736-586fc66c163d", 00:22:37.812 "is_configured": true, 00:22:37.812 "data_offset": 2048, 00:22:37.812 "data_size": 63488 00:22:37.812 }, 00:22:37.812 { 00:22:37.812 "name": null, 00:22:37.812 "uuid": "15bcf6f1-41b8-413e-8139-650bd0193278", 
00:22:37.812 "is_configured": false, 00:22:37.812 "data_offset": 2048, 00:22:37.812 "data_size": 63488 00:22:37.812 }, 00:22:37.812 { 00:22:37.812 "name": "BaseBdev3", 00:22:37.812 "uuid": "f36c7ed2-64eb-4f56-8c3c-9c8315459315", 00:22:37.812 "is_configured": true, 00:22:37.812 "data_offset": 2048, 00:22:37.812 "data_size": 63488 00:22:37.812 }, 00:22:37.812 { 00:22:37.812 "name": "BaseBdev4", 00:22:37.812 "uuid": "48c9c015-b669-4b38-9cc3-984a6add4ee0", 00:22:37.812 "is_configured": true, 00:22:37.812 "data_offset": 2048, 00:22:37.812 "data_size": 63488 00:22:37.812 } 00:22:37.812 ] 00:22:37.812 }' 00:22:37.812 04:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:37.812 04:17:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:38.379 04:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:38.379 04:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:38.638 04:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:22:38.638 04:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:38.897 [2024-07-23 04:17:47.513289] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:38.897 04:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:38.897 04:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:38.897 04:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:38.897 04:17:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:38.897 04:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:38.897 04:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:38.897 04:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:38.897 04:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:38.897 04:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:38.897 04:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:38.897 04:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:38.897 04:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:39.156 04:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:39.156 "name": "Existed_Raid", 00:22:39.156 "uuid": "a1615495-ca30-435f-85a1-82c40b3362d2", 00:22:39.156 "strip_size_kb": 64, 00:22:39.156 "state": "configuring", 00:22:39.156 "raid_level": "raid0", 00:22:39.156 "superblock": true, 00:22:39.156 "num_base_bdevs": 4, 00:22:39.156 "num_base_bdevs_discovered": 2, 00:22:39.156 "num_base_bdevs_operational": 4, 00:22:39.156 "base_bdevs_list": [ 00:22:39.156 { 00:22:39.156 "name": null, 00:22:39.156 "uuid": "725a2481-c472-4352-9736-586fc66c163d", 00:22:39.156 "is_configured": false, 00:22:39.156 "data_offset": 2048, 00:22:39.156 "data_size": 63488 00:22:39.156 }, 00:22:39.156 { 00:22:39.156 "name": null, 00:22:39.156 "uuid": "15bcf6f1-41b8-413e-8139-650bd0193278", 00:22:39.156 "is_configured": false, 00:22:39.156 
"data_offset": 2048, 00:22:39.156 "data_size": 63488 00:22:39.156 }, 00:22:39.156 { 00:22:39.156 "name": "BaseBdev3", 00:22:39.156 "uuid": "f36c7ed2-64eb-4f56-8c3c-9c8315459315", 00:22:39.156 "is_configured": true, 00:22:39.156 "data_offset": 2048, 00:22:39.156 "data_size": 63488 00:22:39.156 }, 00:22:39.156 { 00:22:39.156 "name": "BaseBdev4", 00:22:39.156 "uuid": "48c9c015-b669-4b38-9cc3-984a6add4ee0", 00:22:39.156 "is_configured": true, 00:22:39.156 "data_offset": 2048, 00:22:39.156 "data_size": 63488 00:22:39.156 } 00:22:39.156 ] 00:22:39.156 }' 00:22:39.156 04:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:39.156 04:17:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:39.723 04:17:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:39.723 04:17:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:39.981 04:17:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:22:39.981 04:17:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:40.240 [2024-07-23 04:17:48.886800] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:40.240 04:17:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:40.240 04:17:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:40.240 04:17:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:40.240 04:17:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:40.240 04:17:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:40.240 04:17:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:40.240 04:17:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:40.240 04:17:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:40.240 04:17:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:40.240 04:17:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:40.240 04:17:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:40.240 04:17:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:40.498 04:17:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:40.498 "name": "Existed_Raid", 00:22:40.498 "uuid": "a1615495-ca30-435f-85a1-82c40b3362d2", 00:22:40.498 "strip_size_kb": 64, 00:22:40.498 "state": "configuring", 00:22:40.498 "raid_level": "raid0", 00:22:40.498 "superblock": true, 00:22:40.498 "num_base_bdevs": 4, 00:22:40.498 "num_base_bdevs_discovered": 3, 00:22:40.498 "num_base_bdevs_operational": 4, 00:22:40.498 "base_bdevs_list": [ 00:22:40.498 { 00:22:40.498 "name": null, 00:22:40.498 "uuid": "725a2481-c472-4352-9736-586fc66c163d", 00:22:40.498 "is_configured": false, 00:22:40.498 "data_offset": 2048, 00:22:40.498 "data_size": 63488 00:22:40.498 }, 00:22:40.498 { 00:22:40.498 "name": "BaseBdev2", 00:22:40.498 "uuid": "15bcf6f1-41b8-413e-8139-650bd0193278", 00:22:40.498 "is_configured": true, 00:22:40.498 
"data_offset": 2048, 00:22:40.498 "data_size": 63488 00:22:40.498 }, 00:22:40.498 { 00:22:40.498 "name": "BaseBdev3", 00:22:40.498 "uuid": "f36c7ed2-64eb-4f56-8c3c-9c8315459315", 00:22:40.498 "is_configured": true, 00:22:40.498 "data_offset": 2048, 00:22:40.498 "data_size": 63488 00:22:40.498 }, 00:22:40.498 { 00:22:40.498 "name": "BaseBdev4", 00:22:40.498 "uuid": "48c9c015-b669-4b38-9cc3-984a6add4ee0", 00:22:40.498 "is_configured": true, 00:22:40.498 "data_offset": 2048, 00:22:40.498 "data_size": 63488 00:22:40.498 } 00:22:40.498 ] 00:22:40.498 }' 00:22:40.498 04:17:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:40.498 04:17:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:41.066 04:17:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:41.066 04:17:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:41.324 04:17:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:22:41.324 04:17:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:41.324 04:17:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:41.583 04:17:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 725a2481-c472-4352-9736-586fc66c163d 00:22:41.842 [2024-07-23 04:17:50.399124] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:41.842 [2024-07-23 04:17:50.399397] 
bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042080 00:22:41.842 [2024-07-23 04:17:50.399416] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:41.842 [2024-07-23 04:17:50.399739] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010b20 00:22:41.842 [2024-07-23 04:17:50.399956] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042080 00:22:41.842 [2024-07-23 04:17:50.399974] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000042080 00:22:41.842 NewBaseBdev 00:22:41.842 [2024-07-23 04:17:50.400168] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:41.842 04:17:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:22:41.842 04:17:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:22:41.842 04:17:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:41.842 04:17:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:41.843 04:17:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:41.843 04:17:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:41.843 04:17:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:42.101 04:17:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:42.101 [ 00:22:42.101 { 00:22:42.101 "name": "NewBaseBdev", 00:22:42.101 "aliases": [ 00:22:42.101 
"725a2481-c472-4352-9736-586fc66c163d" 00:22:42.101 ], 00:22:42.101 "product_name": "Malloc disk", 00:22:42.101 "block_size": 512, 00:22:42.101 "num_blocks": 65536, 00:22:42.101 "uuid": "725a2481-c472-4352-9736-586fc66c163d", 00:22:42.101 "assigned_rate_limits": { 00:22:42.101 "rw_ios_per_sec": 0, 00:22:42.101 "rw_mbytes_per_sec": 0, 00:22:42.101 "r_mbytes_per_sec": 0, 00:22:42.101 "w_mbytes_per_sec": 0 00:22:42.101 }, 00:22:42.101 "claimed": true, 00:22:42.101 "claim_type": "exclusive_write", 00:22:42.101 "zoned": false, 00:22:42.101 "supported_io_types": { 00:22:42.101 "read": true, 00:22:42.101 "write": true, 00:22:42.101 "unmap": true, 00:22:42.101 "flush": true, 00:22:42.101 "reset": true, 00:22:42.101 "nvme_admin": false, 00:22:42.101 "nvme_io": false, 00:22:42.101 "nvme_io_md": false, 00:22:42.101 "write_zeroes": true, 00:22:42.101 "zcopy": true, 00:22:42.101 "get_zone_info": false, 00:22:42.101 "zone_management": false, 00:22:42.101 "zone_append": false, 00:22:42.101 "compare": false, 00:22:42.101 "compare_and_write": false, 00:22:42.101 "abort": true, 00:22:42.101 "seek_hole": false, 00:22:42.101 "seek_data": false, 00:22:42.101 "copy": true, 00:22:42.101 "nvme_iov_md": false 00:22:42.101 }, 00:22:42.101 "memory_domains": [ 00:22:42.101 { 00:22:42.101 "dma_device_id": "system", 00:22:42.101 "dma_device_type": 1 00:22:42.101 }, 00:22:42.101 { 00:22:42.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:42.102 "dma_device_type": 2 00:22:42.102 } 00:22:42.102 ], 00:22:42.102 "driver_specific": {} 00:22:42.102 } 00:22:42.102 ] 00:22:42.102 04:17:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:42.102 04:17:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:22:42.102 04:17:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:42.102 04:17:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:42.102 04:17:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:42.102 04:17:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:42.102 04:17:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:42.102 04:17:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:42.102 04:17:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:42.102 04:17:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:42.102 04:17:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:42.102 04:17:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:42.102 04:17:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:42.361 04:17:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:42.361 "name": "Existed_Raid", 00:22:42.361 "uuid": "a1615495-ca30-435f-85a1-82c40b3362d2", 00:22:42.361 "strip_size_kb": 64, 00:22:42.361 "state": "online", 00:22:42.361 "raid_level": "raid0", 00:22:42.361 "superblock": true, 00:22:42.361 "num_base_bdevs": 4, 00:22:42.361 "num_base_bdevs_discovered": 4, 00:22:42.361 "num_base_bdevs_operational": 4, 00:22:42.361 "base_bdevs_list": [ 00:22:42.361 { 00:22:42.361 "name": "NewBaseBdev", 00:22:42.361 "uuid": "725a2481-c472-4352-9736-586fc66c163d", 00:22:42.361 "is_configured": true, 00:22:42.361 "data_offset": 2048, 00:22:42.361 "data_size": 63488 00:22:42.361 }, 00:22:42.361 { 00:22:42.361 "name": "BaseBdev2", 00:22:42.361 "uuid": 
"15bcf6f1-41b8-413e-8139-650bd0193278", 00:22:42.361 "is_configured": true, 00:22:42.361 "data_offset": 2048, 00:22:42.361 "data_size": 63488 00:22:42.361 }, 00:22:42.361 { 00:22:42.361 "name": "BaseBdev3", 00:22:42.361 "uuid": "f36c7ed2-64eb-4f56-8c3c-9c8315459315", 00:22:42.361 "is_configured": true, 00:22:42.361 "data_offset": 2048, 00:22:42.361 "data_size": 63488 00:22:42.361 }, 00:22:42.361 { 00:22:42.361 "name": "BaseBdev4", 00:22:42.361 "uuid": "48c9c015-b669-4b38-9cc3-984a6add4ee0", 00:22:42.361 "is_configured": true, 00:22:42.361 "data_offset": 2048, 00:22:42.361 "data_size": 63488 00:22:42.361 } 00:22:42.361 ] 00:22:42.361 }' 00:22:42.361 04:17:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:42.361 04:17:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:42.928 04:17:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:22:42.928 04:17:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:42.928 04:17:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:42.928 04:17:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:42.928 04:17:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:42.928 04:17:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:22:42.928 04:17:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:42.928 04:17:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:43.188 [2024-07-23 04:17:51.875909] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:43.188 04:17:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:43.188 "name": "Existed_Raid", 00:22:43.188 "aliases": [ 00:22:43.188 "a1615495-ca30-435f-85a1-82c40b3362d2" 00:22:43.188 ], 00:22:43.188 "product_name": "Raid Volume", 00:22:43.188 "block_size": 512, 00:22:43.188 "num_blocks": 253952, 00:22:43.188 "uuid": "a1615495-ca30-435f-85a1-82c40b3362d2", 00:22:43.188 "assigned_rate_limits": { 00:22:43.188 "rw_ios_per_sec": 0, 00:22:43.188 "rw_mbytes_per_sec": 0, 00:22:43.188 "r_mbytes_per_sec": 0, 00:22:43.188 "w_mbytes_per_sec": 0 00:22:43.188 }, 00:22:43.188 "claimed": false, 00:22:43.188 "zoned": false, 00:22:43.188 "supported_io_types": { 00:22:43.188 "read": true, 00:22:43.188 "write": true, 00:22:43.188 "unmap": true, 00:22:43.188 "flush": true, 00:22:43.188 "reset": true, 00:22:43.188 "nvme_admin": false, 00:22:43.188 "nvme_io": false, 00:22:43.188 "nvme_io_md": false, 00:22:43.188 "write_zeroes": true, 00:22:43.188 "zcopy": false, 00:22:43.188 "get_zone_info": false, 00:22:43.188 "zone_management": false, 00:22:43.188 "zone_append": false, 00:22:43.188 "compare": false, 00:22:43.188 "compare_and_write": false, 00:22:43.188 "abort": false, 00:22:43.188 "seek_hole": false, 00:22:43.188 "seek_data": false, 00:22:43.188 "copy": false, 00:22:43.188 "nvme_iov_md": false 00:22:43.188 }, 00:22:43.188 "memory_domains": [ 00:22:43.188 { 00:22:43.188 "dma_device_id": "system", 00:22:43.188 "dma_device_type": 1 00:22:43.188 }, 00:22:43.188 { 00:22:43.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:43.188 "dma_device_type": 2 00:22:43.188 }, 00:22:43.188 { 00:22:43.188 "dma_device_id": "system", 00:22:43.188 "dma_device_type": 1 00:22:43.188 }, 00:22:43.188 { 00:22:43.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:43.188 "dma_device_type": 2 00:22:43.188 }, 00:22:43.188 { 00:22:43.188 "dma_device_id": "system", 00:22:43.188 "dma_device_type": 1 00:22:43.188 }, 00:22:43.188 { 00:22:43.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:22:43.188 "dma_device_type": 2 00:22:43.188 }, 00:22:43.188 { 00:22:43.188 "dma_device_id": "system", 00:22:43.188 "dma_device_type": 1 00:22:43.188 }, 00:22:43.188 { 00:22:43.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:43.188 "dma_device_type": 2 00:22:43.188 } 00:22:43.188 ], 00:22:43.188 "driver_specific": { 00:22:43.188 "raid": { 00:22:43.188 "uuid": "a1615495-ca30-435f-85a1-82c40b3362d2", 00:22:43.188 "strip_size_kb": 64, 00:22:43.188 "state": "online", 00:22:43.188 "raid_level": "raid0", 00:22:43.188 "superblock": true, 00:22:43.188 "num_base_bdevs": 4, 00:22:43.188 "num_base_bdevs_discovered": 4, 00:22:43.188 "num_base_bdevs_operational": 4, 00:22:43.188 "base_bdevs_list": [ 00:22:43.188 { 00:22:43.188 "name": "NewBaseBdev", 00:22:43.188 "uuid": "725a2481-c472-4352-9736-586fc66c163d", 00:22:43.188 "is_configured": true, 00:22:43.188 "data_offset": 2048, 00:22:43.188 "data_size": 63488 00:22:43.188 }, 00:22:43.188 { 00:22:43.188 "name": "BaseBdev2", 00:22:43.188 "uuid": "15bcf6f1-41b8-413e-8139-650bd0193278", 00:22:43.188 "is_configured": true, 00:22:43.188 "data_offset": 2048, 00:22:43.188 "data_size": 63488 00:22:43.188 }, 00:22:43.188 { 00:22:43.188 "name": "BaseBdev3", 00:22:43.188 "uuid": "f36c7ed2-64eb-4f56-8c3c-9c8315459315", 00:22:43.188 "is_configured": true, 00:22:43.188 "data_offset": 2048, 00:22:43.188 "data_size": 63488 00:22:43.188 }, 00:22:43.188 { 00:22:43.188 "name": "BaseBdev4", 00:22:43.188 "uuid": "48c9c015-b669-4b38-9cc3-984a6add4ee0", 00:22:43.188 "is_configured": true, 00:22:43.188 "data_offset": 2048, 00:22:43.188 "data_size": 63488 00:22:43.188 } 00:22:43.188 ] 00:22:43.188 } 00:22:43.188 } 00:22:43.188 }' 00:22:43.188 04:17:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:43.188 04:17:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:22:43.188 BaseBdev2 
00:22:43.188 BaseBdev3 00:22:43.188 BaseBdev4' 00:22:43.188 04:17:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:43.188 04:17:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:22:43.188 04:17:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:43.447 04:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:43.447 "name": "NewBaseBdev", 00:22:43.447 "aliases": [ 00:22:43.447 "725a2481-c472-4352-9736-586fc66c163d" 00:22:43.447 ], 00:22:43.447 "product_name": "Malloc disk", 00:22:43.447 "block_size": 512, 00:22:43.447 "num_blocks": 65536, 00:22:43.447 "uuid": "725a2481-c472-4352-9736-586fc66c163d", 00:22:43.447 "assigned_rate_limits": { 00:22:43.447 "rw_ios_per_sec": 0, 00:22:43.447 "rw_mbytes_per_sec": 0, 00:22:43.447 "r_mbytes_per_sec": 0, 00:22:43.447 "w_mbytes_per_sec": 0 00:22:43.447 }, 00:22:43.447 "claimed": true, 00:22:43.447 "claim_type": "exclusive_write", 00:22:43.447 "zoned": false, 00:22:43.447 "supported_io_types": { 00:22:43.447 "read": true, 00:22:43.447 "write": true, 00:22:43.447 "unmap": true, 00:22:43.447 "flush": true, 00:22:43.447 "reset": true, 00:22:43.447 "nvme_admin": false, 00:22:43.447 "nvme_io": false, 00:22:43.447 "nvme_io_md": false, 00:22:43.447 "write_zeroes": true, 00:22:43.447 "zcopy": true, 00:22:43.447 "get_zone_info": false, 00:22:43.447 "zone_management": false, 00:22:43.447 "zone_append": false, 00:22:43.448 "compare": false, 00:22:43.448 "compare_and_write": false, 00:22:43.448 "abort": true, 00:22:43.448 "seek_hole": false, 00:22:43.448 "seek_data": false, 00:22:43.448 "copy": true, 00:22:43.448 "nvme_iov_md": false 00:22:43.448 }, 00:22:43.448 "memory_domains": [ 00:22:43.448 { 00:22:43.448 "dma_device_id": "system", 00:22:43.448 "dma_device_type": 1 
00:22:43.448 }, 00:22:43.448 { 00:22:43.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:43.448 "dma_device_type": 2 00:22:43.448 } 00:22:43.448 ], 00:22:43.448 "driver_specific": {} 00:22:43.448 }' 00:22:43.448 04:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:43.448 04:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:43.706 04:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:43.706 04:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:43.706 04:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:43.706 04:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:43.706 04:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:43.706 04:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:43.706 04:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:43.706 04:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:43.706 04:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:43.965 04:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:43.965 04:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:43.965 04:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:43.965 04:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:44.232 04:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # 
base_bdev_info='{ 00:22:44.232 "name": "BaseBdev2", 00:22:44.232 "aliases": [ 00:22:44.232 "15bcf6f1-41b8-413e-8139-650bd0193278" 00:22:44.232 ], 00:22:44.232 "product_name": "Malloc disk", 00:22:44.232 "block_size": 512, 00:22:44.232 "num_blocks": 65536, 00:22:44.232 "uuid": "15bcf6f1-41b8-413e-8139-650bd0193278", 00:22:44.232 "assigned_rate_limits": { 00:22:44.232 "rw_ios_per_sec": 0, 00:22:44.232 "rw_mbytes_per_sec": 0, 00:22:44.232 "r_mbytes_per_sec": 0, 00:22:44.232 "w_mbytes_per_sec": 0 00:22:44.232 }, 00:22:44.232 "claimed": true, 00:22:44.232 "claim_type": "exclusive_write", 00:22:44.232 "zoned": false, 00:22:44.232 "supported_io_types": { 00:22:44.232 "read": true, 00:22:44.232 "write": true, 00:22:44.232 "unmap": true, 00:22:44.232 "flush": true, 00:22:44.232 "reset": true, 00:22:44.232 "nvme_admin": false, 00:22:44.232 "nvme_io": false, 00:22:44.232 "nvme_io_md": false, 00:22:44.232 "write_zeroes": true, 00:22:44.232 "zcopy": true, 00:22:44.232 "get_zone_info": false, 00:22:44.232 "zone_management": false, 00:22:44.232 "zone_append": false, 00:22:44.232 "compare": false, 00:22:44.232 "compare_and_write": false, 00:22:44.232 "abort": true, 00:22:44.232 "seek_hole": false, 00:22:44.232 "seek_data": false, 00:22:44.232 "copy": true, 00:22:44.232 "nvme_iov_md": false 00:22:44.232 }, 00:22:44.232 "memory_domains": [ 00:22:44.232 { 00:22:44.232 "dma_device_id": "system", 00:22:44.232 "dma_device_type": 1 00:22:44.232 }, 00:22:44.232 { 00:22:44.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:44.232 "dma_device_type": 2 00:22:44.232 } 00:22:44.232 ], 00:22:44.232 "driver_specific": {} 00:22:44.232 }' 00:22:44.232 04:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:44.232 04:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:44.232 04:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:44.232 04:17:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:44.232 04:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:44.232 04:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:44.232 04:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:44.232 04:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:44.232 04:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:44.232 04:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:44.495 04:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:44.495 04:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:44.495 04:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:44.495 04:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:44.495 04:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:44.752 04:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:44.753 "name": "BaseBdev3", 00:22:44.753 "aliases": [ 00:22:44.753 "f36c7ed2-64eb-4f56-8c3c-9c8315459315" 00:22:44.753 ], 00:22:44.753 "product_name": "Malloc disk", 00:22:44.753 "block_size": 512, 00:22:44.753 "num_blocks": 65536, 00:22:44.753 "uuid": "f36c7ed2-64eb-4f56-8c3c-9c8315459315", 00:22:44.753 "assigned_rate_limits": { 00:22:44.753 "rw_ios_per_sec": 0, 00:22:44.753 "rw_mbytes_per_sec": 0, 00:22:44.753 "r_mbytes_per_sec": 0, 00:22:44.753 "w_mbytes_per_sec": 0 00:22:44.753 }, 00:22:44.753 "claimed": true, 
00:22:44.753 "claim_type": "exclusive_write", 00:22:44.753 "zoned": false, 00:22:44.753 "supported_io_types": { 00:22:44.753 "read": true, 00:22:44.753 "write": true, 00:22:44.753 "unmap": true, 00:22:44.753 "flush": true, 00:22:44.753 "reset": true, 00:22:44.753 "nvme_admin": false, 00:22:44.753 "nvme_io": false, 00:22:44.753 "nvme_io_md": false, 00:22:44.753 "write_zeroes": true, 00:22:44.753 "zcopy": true, 00:22:44.753 "get_zone_info": false, 00:22:44.753 "zone_management": false, 00:22:44.753 "zone_append": false, 00:22:44.753 "compare": false, 00:22:44.753 "compare_and_write": false, 00:22:44.753 "abort": true, 00:22:44.753 "seek_hole": false, 00:22:44.753 "seek_data": false, 00:22:44.753 "copy": true, 00:22:44.753 "nvme_iov_md": false 00:22:44.753 }, 00:22:44.753 "memory_domains": [ 00:22:44.753 { 00:22:44.753 "dma_device_id": "system", 00:22:44.753 "dma_device_type": 1 00:22:44.753 }, 00:22:44.753 { 00:22:44.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:44.753 "dma_device_type": 2 00:22:44.753 } 00:22:44.753 ], 00:22:44.753 "driver_specific": {} 00:22:44.753 }' 00:22:44.753 04:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:44.753 04:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:44.753 04:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:44.753 04:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:44.753 04:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:44.753 04:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:44.753 04:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:44.753 04:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:45.011 04:17:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:45.011 04:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:45.011 04:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:45.011 04:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:45.011 04:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:45.011 04:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:22:45.011 04:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:45.269 04:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:45.269 "name": "BaseBdev4", 00:22:45.269 "aliases": [ 00:22:45.269 "48c9c015-b669-4b38-9cc3-984a6add4ee0" 00:22:45.269 ], 00:22:45.269 "product_name": "Malloc disk", 00:22:45.269 "block_size": 512, 00:22:45.269 "num_blocks": 65536, 00:22:45.269 "uuid": "48c9c015-b669-4b38-9cc3-984a6add4ee0", 00:22:45.269 "assigned_rate_limits": { 00:22:45.269 "rw_ios_per_sec": 0, 00:22:45.269 "rw_mbytes_per_sec": 0, 00:22:45.269 "r_mbytes_per_sec": 0, 00:22:45.269 "w_mbytes_per_sec": 0 00:22:45.269 }, 00:22:45.269 "claimed": true, 00:22:45.269 "claim_type": "exclusive_write", 00:22:45.269 "zoned": false, 00:22:45.269 "supported_io_types": { 00:22:45.269 "read": true, 00:22:45.269 "write": true, 00:22:45.269 "unmap": true, 00:22:45.269 "flush": true, 00:22:45.269 "reset": true, 00:22:45.269 "nvme_admin": false, 00:22:45.269 "nvme_io": false, 00:22:45.269 "nvme_io_md": false, 00:22:45.269 "write_zeroes": true, 00:22:45.269 "zcopy": true, 00:22:45.269 "get_zone_info": false, 00:22:45.269 "zone_management": false, 00:22:45.269 "zone_append": false, 00:22:45.269 "compare": false, 00:22:45.269 
"compare_and_write": false, 00:22:45.269 "abort": true, 00:22:45.269 "seek_hole": false, 00:22:45.269 "seek_data": false, 00:22:45.269 "copy": true, 00:22:45.269 "nvme_iov_md": false 00:22:45.269 }, 00:22:45.269 "memory_domains": [ 00:22:45.269 { 00:22:45.269 "dma_device_id": "system", 00:22:45.269 "dma_device_type": 1 00:22:45.269 }, 00:22:45.269 { 00:22:45.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:45.269 "dma_device_type": 2 00:22:45.269 } 00:22:45.269 ], 00:22:45.269 "driver_specific": {} 00:22:45.269 }' 00:22:45.269 04:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:45.269 04:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:45.269 04:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:45.269 04:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:45.269 04:17:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:45.269 04:17:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:45.269 04:17:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:45.528 04:17:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:45.528 04:17:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:45.528 04:17:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:45.528 04:17:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:45.528 04:17:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:45.528 04:17:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete 
Existed_Raid 00:22:45.787 [2024-07-23 04:17:54.410327] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:45.787 [2024-07-23 04:17:54.410362] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:45.787 [2024-07-23 04:17:54.410444] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:45.787 [2024-07-23 04:17:54.410526] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:45.787 [2024-07-23 04:17:54.410544] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042080 name Existed_Raid, state offline 00:22:45.787 04:17:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 2704747 00:22:45.787 04:17:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 2704747 ']' 00:22:45.787 04:17:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 2704747 00:22:45.787 04:17:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:22:45.787 04:17:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:45.787 04:17:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2704747 00:22:45.787 04:17:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:45.787 04:17:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:45.787 04:17:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2704747' 00:22:45.787 killing process with pid 2704747 00:22:45.787 04:17:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 2704747 00:22:45.787 [2024-07-23 04:17:54.489402] 
bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:45.787 04:17:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 2704747 00:22:46.356 [2024-07-23 04:17:54.936534] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:48.260 04:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:22:48.261 00:22:48.261 real 0m34.139s 00:22:48.261 user 1m0.061s 00:22:48.261 sys 0m5.684s 00:22:48.261 04:17:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:48.261 04:17:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.261 ************************************ 00:22:48.261 END TEST raid_state_function_test_sb 00:22:48.261 ************************************ 00:22:48.261 04:17:56 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:22:48.261 04:17:56 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:22:48.261 04:17:56 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:22:48.261 04:17:56 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:48.261 04:17:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:48.261 ************************************ 00:22:48.261 START TEST raid_superblock_test 00:22:48.261 ************************************ 00:22:48.261 04:17:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 4 00:22:48.261 04:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:22:48.261 04:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:22:48.261 04:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:22:48.261 04:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:22:48.261 
04:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:22:48.261 04:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:22:48.261 04:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:22:48.261 04:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:22:48.261 04:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:22:48.261 04:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:22:48.261 04:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:22:48.261 04:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:22:48.261 04:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:22:48.261 04:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:22:48.261 04:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:22:48.261 04:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:22:48.261 04:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=2711212 00:22:48.261 04:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 2711212 /var/tmp/spdk-raid.sock 00:22:48.261 04:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:22:48.261 04:17:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 2711212 ']' 00:22:48.261 04:17:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:48.261 04:17:56 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:22:48.261 04:17:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:48.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:48.261 04:17:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:48.261 04:17:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:48.261 [2024-07-23 04:17:56.799382] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:22:48.261 [2024-07-23 04:17:56.799499] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2711212 ] 00:22:48.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.261 EAL: Requested device 0000:3d:01.0 cannot be used 00:22:48.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.261 EAL: Requested device 0000:3d:01.1 cannot be used 00:22:48.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.261 EAL: Requested device 0000:3d:01.2 cannot be used 00:22:48.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.261 EAL: Requested device 0000:3d:01.3 cannot be used 00:22:48.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.261 EAL: Requested device 0000:3d:01.4 cannot be used 00:22:48.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.261 EAL: Requested device 0000:3d:01.5 cannot be used 00:22:48.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.261 EAL: Requested device 0000:3d:01.6 cannot be used 00:22:48.261 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:22:48.261 EAL: Requested device 0000:3d:01.7 cannot be used 00:22:48.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.261 EAL: Requested device 0000:3d:02.0 cannot be used 00:22:48.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.261 EAL: Requested device 0000:3d:02.1 cannot be used 00:22:48.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.261 EAL: Requested device 0000:3d:02.2 cannot be used 00:22:48.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.261 EAL: Requested device 0000:3d:02.3 cannot be used 00:22:48.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.261 EAL: Requested device 0000:3d:02.4 cannot be used 00:22:48.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.261 EAL: Requested device 0000:3d:02.5 cannot be used 00:22:48.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.261 EAL: Requested device 0000:3d:02.6 cannot be used 00:22:48.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.261 EAL: Requested device 0000:3d:02.7 cannot be used 00:22:48.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.261 EAL: Requested device 0000:3f:01.0 cannot be used 00:22:48.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.261 EAL: Requested device 0000:3f:01.1 cannot be used 00:22:48.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.261 EAL: Requested device 0000:3f:01.2 cannot be used 00:22:48.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.261 EAL: Requested device 0000:3f:01.3 cannot be used 00:22:48.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.261 EAL: Requested device 0000:3f:01.4 cannot be used 00:22:48.261 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:22:48.261 EAL: Requested device 0000:3f:01.5 cannot be used 00:22:48.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.261 EAL: Requested device 0000:3f:01.6 cannot be used 00:22:48.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.261 EAL: Requested device 0000:3f:01.7 cannot be used 00:22:48.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.261 EAL: Requested device 0000:3f:02.0 cannot be used 00:22:48.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.261 EAL: Requested device 0000:3f:02.1 cannot be used 00:22:48.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.261 EAL: Requested device 0000:3f:02.2 cannot be used 00:22:48.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.261 EAL: Requested device 0000:3f:02.3 cannot be used 00:22:48.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.261 EAL: Requested device 0000:3f:02.4 cannot be used 00:22:48.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.261 EAL: Requested device 0000:3f:02.5 cannot be used 00:22:48.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.261 EAL: Requested device 0000:3f:02.6 cannot be used 00:22:48.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:48.261 EAL: Requested device 0000:3f:02.7 cannot be used 00:22:48.261 [2024-07-23 04:17:57.025374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.829 [2024-07-23 04:17:57.316568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:49.088 [2024-07-23 04:17:57.660394] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:49.088 [2024-07-23 04:17:57.660423] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:49.088 04:17:57 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:49.088 04:17:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:22:49.088 04:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:22:49.088 04:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:49.088 04:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:22:49.088 04:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:22:49.088 04:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:49.088 04:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:49.088 04:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:22:49.088 04:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:49.088 04:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:22:49.347 malloc1 00:22:49.606 04:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:49.606 [2024-07-23 04:17:58.351889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:49.606 [2024-07-23 04:17:58.351949] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:49.606 [2024-07-23 04:17:58.351980] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:22:49.606 [2024-07-23 04:17:58.351996] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:22:49.606 [2024-07-23 04:17:58.354747] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:49.606 [2024-07-23 04:17:58.354782] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:49.606 pt1 00:22:49.606 04:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:22:49.606 04:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:49.606 04:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:22:49.606 04:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:22:49.606 04:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:49.606 04:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:49.606 04:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:22:49.606 04:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:49.606 04:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:22:49.866 malloc2 00:22:49.866 04:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:50.125 [2024-07-23 04:17:58.802271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:50.125 [2024-07-23 04:17:58.802323] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:50.125 [2024-07-23 04:17:58.802350] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000040280 00:22:50.125 [2024-07-23 04:17:58.802365] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:50.125 [2024-07-23 04:17:58.805063] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:50.125 [2024-07-23 04:17:58.805100] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:50.125 pt2 00:22:50.125 04:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:22:50.125 04:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:50.125 04:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:22:50.125 04:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:22:50.125 04:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:22:50.125 04:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:50.125 04:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:22:50.125 04:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:50.125 04:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:22:50.384 malloc3 00:22:50.384 04:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:50.644 [2024-07-23 04:17:59.315296] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:50.644 [2024-07-23 04:17:59.315358] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:22:50.644 [2024-07-23 04:17:59.315387] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040e80 00:22:50.644 [2024-07-23 04:17:59.315403] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:50.644 [2024-07-23 04:17:59.318172] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:50.644 [2024-07-23 04:17:59.318205] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:50.644 pt3 00:22:50.644 04:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:22:50.644 04:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:50.644 04:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:22:50.644 04:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:22:50.644 04:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:22:50.644 04:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:50.644 04:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:22:50.644 04:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:50.644 04:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:22:50.903 malloc4 00:22:50.903 04:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:51.162 [2024-07-23 04:17:59.820858] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match 
on malloc4 00:22:51.162 [2024-07-23 04:17:59.820925] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:51.162 [2024-07-23 04:17:59.820954] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041a80 00:22:51.162 [2024-07-23 04:17:59.820969] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:51.162 [2024-07-23 04:17:59.823723] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:51.162 [2024-07-23 04:17:59.823756] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:51.162 pt4 00:22:51.162 04:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:22:51.162 04:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:51.162 04:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:22:51.420 [2024-07-23 04:18:00.049580] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:51.420 [2024-07-23 04:18:00.051934] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:51.420 [2024-07-23 04:18:00.052020] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:51.420 [2024-07-23 04:18:00.052077] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:51.420 [2024-07-23 04:18:00.052344] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042080 00:22:51.420 [2024-07-23 04:18:00.052362] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:51.420 [2024-07-23 04:18:00.052722] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:22:51.420 [2024-07-23 04:18:00.052958] 
bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042080 00:22:51.420 [2024-07-23 04:18:00.052976] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000042080 00:22:51.420 [2024-07-23 04:18:00.053175] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:51.420 04:18:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:22:51.420 04:18:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:51.420 04:18:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:51.420 04:18:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:51.420 04:18:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:51.420 04:18:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:51.420 04:18:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:51.421 04:18:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:51.421 04:18:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:51.421 04:18:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:51.421 04:18:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:51.421 04:18:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:51.679 04:18:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:51.679 "name": "raid_bdev1", 00:22:51.679 "uuid": "878535d3-e75a-4d5f-8f58-dd07618973af", 00:22:51.679 
"strip_size_kb": 64, 00:22:51.679 "state": "online", 00:22:51.679 "raid_level": "raid0", 00:22:51.679 "superblock": true, 00:22:51.679 "num_base_bdevs": 4, 00:22:51.679 "num_base_bdevs_discovered": 4, 00:22:51.679 "num_base_bdevs_operational": 4, 00:22:51.679 "base_bdevs_list": [ 00:22:51.679 { 00:22:51.679 "name": "pt1", 00:22:51.679 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:51.679 "is_configured": true, 00:22:51.679 "data_offset": 2048, 00:22:51.679 "data_size": 63488 00:22:51.679 }, 00:22:51.679 { 00:22:51.679 "name": "pt2", 00:22:51.679 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:51.679 "is_configured": true, 00:22:51.679 "data_offset": 2048, 00:22:51.679 "data_size": 63488 00:22:51.679 }, 00:22:51.680 { 00:22:51.680 "name": "pt3", 00:22:51.680 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:51.680 "is_configured": true, 00:22:51.680 "data_offset": 2048, 00:22:51.680 "data_size": 63488 00:22:51.680 }, 00:22:51.680 { 00:22:51.680 "name": "pt4", 00:22:51.680 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:51.680 "is_configured": true, 00:22:51.680 "data_offset": 2048, 00:22:51.680 "data_size": 63488 00:22:51.680 } 00:22:51.680 ] 00:22:51.680 }' 00:22:51.680 04:18:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:51.680 04:18:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:52.246 04:18:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:22:52.246 04:18:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:22:52.246 04:18:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:52.246 04:18:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:52.246 04:18:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:52.246 04:18:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@198 -- # local name 00:22:52.246 04:18:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:52.246 04:18:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:52.505 [2024-07-23 04:18:01.080702] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:52.505 04:18:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:52.505 "name": "raid_bdev1", 00:22:52.505 "aliases": [ 00:22:52.505 "878535d3-e75a-4d5f-8f58-dd07618973af" 00:22:52.505 ], 00:22:52.505 "product_name": "Raid Volume", 00:22:52.505 "block_size": 512, 00:22:52.505 "num_blocks": 253952, 00:22:52.505 "uuid": "878535d3-e75a-4d5f-8f58-dd07618973af", 00:22:52.505 "assigned_rate_limits": { 00:22:52.505 "rw_ios_per_sec": 0, 00:22:52.505 "rw_mbytes_per_sec": 0, 00:22:52.505 "r_mbytes_per_sec": 0, 00:22:52.505 "w_mbytes_per_sec": 0 00:22:52.505 }, 00:22:52.505 "claimed": false, 00:22:52.505 "zoned": false, 00:22:52.505 "supported_io_types": { 00:22:52.505 "read": true, 00:22:52.505 "write": true, 00:22:52.505 "unmap": true, 00:22:52.505 "flush": true, 00:22:52.505 "reset": true, 00:22:52.505 "nvme_admin": false, 00:22:52.505 "nvme_io": false, 00:22:52.505 "nvme_io_md": false, 00:22:52.505 "write_zeroes": true, 00:22:52.505 "zcopy": false, 00:22:52.505 "get_zone_info": false, 00:22:52.505 "zone_management": false, 00:22:52.505 "zone_append": false, 00:22:52.505 "compare": false, 00:22:52.505 "compare_and_write": false, 00:22:52.505 "abort": false, 00:22:52.505 "seek_hole": false, 00:22:52.505 "seek_data": false, 00:22:52.505 "copy": false, 00:22:52.505 "nvme_iov_md": false 00:22:52.505 }, 00:22:52.505 "memory_domains": [ 00:22:52.505 { 00:22:52.505 "dma_device_id": "system", 00:22:52.505 "dma_device_type": 1 00:22:52.505 }, 00:22:52.505 { 00:22:52.505 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:22:52.505 "dma_device_type": 2 00:22:52.505 }, 00:22:52.505 { 00:22:52.505 "dma_device_id": "system", 00:22:52.505 "dma_device_type": 1 00:22:52.505 }, 00:22:52.505 { 00:22:52.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:52.505 "dma_device_type": 2 00:22:52.505 }, 00:22:52.505 { 00:22:52.505 "dma_device_id": "system", 00:22:52.505 "dma_device_type": 1 00:22:52.505 }, 00:22:52.505 { 00:22:52.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:52.505 "dma_device_type": 2 00:22:52.505 }, 00:22:52.505 { 00:22:52.505 "dma_device_id": "system", 00:22:52.505 "dma_device_type": 1 00:22:52.505 }, 00:22:52.505 { 00:22:52.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:52.505 "dma_device_type": 2 00:22:52.505 } 00:22:52.505 ], 00:22:52.505 "driver_specific": { 00:22:52.505 "raid": { 00:22:52.505 "uuid": "878535d3-e75a-4d5f-8f58-dd07618973af", 00:22:52.505 "strip_size_kb": 64, 00:22:52.505 "state": "online", 00:22:52.505 "raid_level": "raid0", 00:22:52.505 "superblock": true, 00:22:52.505 "num_base_bdevs": 4, 00:22:52.505 "num_base_bdevs_discovered": 4, 00:22:52.505 "num_base_bdevs_operational": 4, 00:22:52.505 "base_bdevs_list": [ 00:22:52.505 { 00:22:52.505 "name": "pt1", 00:22:52.505 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:52.505 "is_configured": true, 00:22:52.505 "data_offset": 2048, 00:22:52.505 "data_size": 63488 00:22:52.505 }, 00:22:52.505 { 00:22:52.505 "name": "pt2", 00:22:52.505 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:52.505 "is_configured": true, 00:22:52.505 "data_offset": 2048, 00:22:52.505 "data_size": 63488 00:22:52.505 }, 00:22:52.505 { 00:22:52.505 "name": "pt3", 00:22:52.505 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:52.505 "is_configured": true, 00:22:52.505 "data_offset": 2048, 00:22:52.505 "data_size": 63488 00:22:52.505 }, 00:22:52.505 { 00:22:52.505 "name": "pt4", 00:22:52.506 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:52.506 "is_configured": true, 00:22:52.506 
"data_offset": 2048, 00:22:52.506 "data_size": 63488 00:22:52.506 } 00:22:52.506 ] 00:22:52.506 } 00:22:52.506 } 00:22:52.506 }' 00:22:52.506 04:18:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:52.506 04:18:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:22:52.506 pt2 00:22:52.506 pt3 00:22:52.506 pt4' 00:22:52.506 04:18:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:52.506 04:18:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:22:52.506 04:18:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:52.764 04:18:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:52.764 "name": "pt1", 00:22:52.764 "aliases": [ 00:22:52.764 "00000000-0000-0000-0000-000000000001" 00:22:52.764 ], 00:22:52.764 "product_name": "passthru", 00:22:52.764 "block_size": 512, 00:22:52.764 "num_blocks": 65536, 00:22:52.764 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:52.764 "assigned_rate_limits": { 00:22:52.764 "rw_ios_per_sec": 0, 00:22:52.764 "rw_mbytes_per_sec": 0, 00:22:52.764 "r_mbytes_per_sec": 0, 00:22:52.764 "w_mbytes_per_sec": 0 00:22:52.764 }, 00:22:52.764 "claimed": true, 00:22:52.764 "claim_type": "exclusive_write", 00:22:52.764 "zoned": false, 00:22:52.764 "supported_io_types": { 00:22:52.764 "read": true, 00:22:52.764 "write": true, 00:22:52.764 "unmap": true, 00:22:52.764 "flush": true, 00:22:52.764 "reset": true, 00:22:52.764 "nvme_admin": false, 00:22:52.764 "nvme_io": false, 00:22:52.764 "nvme_io_md": false, 00:22:52.764 "write_zeroes": true, 00:22:52.764 "zcopy": true, 00:22:52.764 "get_zone_info": false, 00:22:52.764 "zone_management": false, 00:22:52.764 "zone_append": false, 
00:22:52.764 "compare": false, 00:22:52.764 "compare_and_write": false, 00:22:52.764 "abort": true, 00:22:52.764 "seek_hole": false, 00:22:52.764 "seek_data": false, 00:22:52.764 "copy": true, 00:22:52.764 "nvme_iov_md": false 00:22:52.764 }, 00:22:52.764 "memory_domains": [ 00:22:52.764 { 00:22:52.764 "dma_device_id": "system", 00:22:52.764 "dma_device_type": 1 00:22:52.764 }, 00:22:52.764 { 00:22:52.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:52.764 "dma_device_type": 2 00:22:52.764 } 00:22:52.764 ], 00:22:52.764 "driver_specific": { 00:22:52.764 "passthru": { 00:22:52.764 "name": "pt1", 00:22:52.764 "base_bdev_name": "malloc1" 00:22:52.764 } 00:22:52.764 } 00:22:52.764 }' 00:22:52.765 04:18:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:52.765 04:18:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:52.765 04:18:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:52.765 04:18:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:52.765 04:18:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:52.765 04:18:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:53.022 04:18:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:53.022 04:18:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:53.022 04:18:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:53.022 04:18:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:53.022 04:18:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:53.022 04:18:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:53.022 04:18:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 
00:22:53.022 04:18:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:22:53.022 04:18:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:53.281 04:18:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:53.281 "name": "pt2", 00:22:53.281 "aliases": [ 00:22:53.281 "00000000-0000-0000-0000-000000000002" 00:22:53.281 ], 00:22:53.281 "product_name": "passthru", 00:22:53.281 "block_size": 512, 00:22:53.281 "num_blocks": 65536, 00:22:53.281 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:53.281 "assigned_rate_limits": { 00:22:53.281 "rw_ios_per_sec": 0, 00:22:53.281 "rw_mbytes_per_sec": 0, 00:22:53.281 "r_mbytes_per_sec": 0, 00:22:53.281 "w_mbytes_per_sec": 0 00:22:53.281 }, 00:22:53.281 "claimed": true, 00:22:53.281 "claim_type": "exclusive_write", 00:22:53.281 "zoned": false, 00:22:53.281 "supported_io_types": { 00:22:53.281 "read": true, 00:22:53.281 "write": true, 00:22:53.281 "unmap": true, 00:22:53.281 "flush": true, 00:22:53.281 "reset": true, 00:22:53.281 "nvme_admin": false, 00:22:53.281 "nvme_io": false, 00:22:53.281 "nvme_io_md": false, 00:22:53.281 "write_zeroes": true, 00:22:53.281 "zcopy": true, 00:22:53.281 "get_zone_info": false, 00:22:53.281 "zone_management": false, 00:22:53.281 "zone_append": false, 00:22:53.281 "compare": false, 00:22:53.281 "compare_and_write": false, 00:22:53.281 "abort": true, 00:22:53.281 "seek_hole": false, 00:22:53.281 "seek_data": false, 00:22:53.281 "copy": true, 00:22:53.281 "nvme_iov_md": false 00:22:53.281 }, 00:22:53.281 "memory_domains": [ 00:22:53.281 { 00:22:53.281 "dma_device_id": "system", 00:22:53.281 "dma_device_type": 1 00:22:53.281 }, 00:22:53.281 { 00:22:53.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:53.281 "dma_device_type": 2 00:22:53.281 } 00:22:53.281 ], 00:22:53.281 "driver_specific": { 00:22:53.281 
"passthru": { 00:22:53.281 "name": "pt2", 00:22:53.281 "base_bdev_name": "malloc2" 00:22:53.281 } 00:22:53.281 } 00:22:53.281 }' 00:22:53.281 04:18:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:53.281 04:18:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:53.281 04:18:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:53.281 04:18:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:53.540 04:18:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:53.540 04:18:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:53.540 04:18:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:53.540 04:18:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:53.540 04:18:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:53.540 04:18:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:53.540 04:18:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:53.540 04:18:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:53.540 04:18:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:53.540 04:18:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:22:53.540 04:18:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:53.798 04:18:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:53.798 "name": "pt3", 00:22:53.798 "aliases": [ 00:22:53.799 "00000000-0000-0000-0000-000000000003" 00:22:53.799 ], 00:22:53.799 "product_name": "passthru", 00:22:53.799 
"block_size": 512, 00:22:53.799 "num_blocks": 65536, 00:22:53.799 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:53.799 "assigned_rate_limits": { 00:22:53.799 "rw_ios_per_sec": 0, 00:22:53.799 "rw_mbytes_per_sec": 0, 00:22:53.799 "r_mbytes_per_sec": 0, 00:22:53.799 "w_mbytes_per_sec": 0 00:22:53.799 }, 00:22:53.799 "claimed": true, 00:22:53.799 "claim_type": "exclusive_write", 00:22:53.799 "zoned": false, 00:22:53.799 "supported_io_types": { 00:22:53.799 "read": true, 00:22:53.799 "write": true, 00:22:53.799 "unmap": true, 00:22:53.799 "flush": true, 00:22:53.799 "reset": true, 00:22:53.799 "nvme_admin": false, 00:22:53.799 "nvme_io": false, 00:22:53.799 "nvme_io_md": false, 00:22:53.799 "write_zeroes": true, 00:22:53.799 "zcopy": true, 00:22:53.799 "get_zone_info": false, 00:22:53.799 "zone_management": false, 00:22:53.799 "zone_append": false, 00:22:53.799 "compare": false, 00:22:53.799 "compare_and_write": false, 00:22:53.799 "abort": true, 00:22:53.799 "seek_hole": false, 00:22:53.799 "seek_data": false, 00:22:53.799 "copy": true, 00:22:53.799 "nvme_iov_md": false 00:22:53.799 }, 00:22:53.799 "memory_domains": [ 00:22:53.799 { 00:22:53.799 "dma_device_id": "system", 00:22:53.799 "dma_device_type": 1 00:22:53.799 }, 00:22:53.799 { 00:22:53.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:53.799 "dma_device_type": 2 00:22:53.799 } 00:22:53.799 ], 00:22:53.799 "driver_specific": { 00:22:53.799 "passthru": { 00:22:53.799 "name": "pt3", 00:22:53.799 "base_bdev_name": "malloc3" 00:22:53.799 } 00:22:53.799 } 00:22:53.799 }' 00:22:53.799 04:18:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:53.799 04:18:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:54.057 04:18:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:54.057 04:18:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:54.057 04:18:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:54.057 04:18:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:54.057 04:18:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:54.057 04:18:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:54.057 04:18:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:54.057 04:18:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:54.057 04:18:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:54.316 04:18:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:54.316 04:18:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:54.316 04:18:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:22:54.316 04:18:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:54.316 04:18:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:54.316 "name": "pt4", 00:22:54.316 "aliases": [ 00:22:54.316 "00000000-0000-0000-0000-000000000004" 00:22:54.316 ], 00:22:54.316 "product_name": "passthru", 00:22:54.316 "block_size": 512, 00:22:54.316 "num_blocks": 65536, 00:22:54.316 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:54.316 "assigned_rate_limits": { 00:22:54.316 "rw_ios_per_sec": 0, 00:22:54.316 "rw_mbytes_per_sec": 0, 00:22:54.316 "r_mbytes_per_sec": 0, 00:22:54.316 "w_mbytes_per_sec": 0 00:22:54.316 }, 00:22:54.316 "claimed": true, 00:22:54.316 "claim_type": "exclusive_write", 00:22:54.316 "zoned": false, 00:22:54.316 "supported_io_types": { 00:22:54.316 "read": true, 00:22:54.316 "write": true, 00:22:54.316 "unmap": true, 00:22:54.316 
"flush": true, 00:22:54.316 "reset": true, 00:22:54.316 "nvme_admin": false, 00:22:54.316 "nvme_io": false, 00:22:54.316 "nvme_io_md": false, 00:22:54.316 "write_zeroes": true, 00:22:54.316 "zcopy": true, 00:22:54.316 "get_zone_info": false, 00:22:54.316 "zone_management": false, 00:22:54.316 "zone_append": false, 00:22:54.316 "compare": false, 00:22:54.316 "compare_and_write": false, 00:22:54.316 "abort": true, 00:22:54.316 "seek_hole": false, 00:22:54.316 "seek_data": false, 00:22:54.316 "copy": true, 00:22:54.316 "nvme_iov_md": false 00:22:54.316 }, 00:22:54.316 "memory_domains": [ 00:22:54.316 { 00:22:54.316 "dma_device_id": "system", 00:22:54.316 "dma_device_type": 1 00:22:54.316 }, 00:22:54.316 { 00:22:54.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:54.316 "dma_device_type": 2 00:22:54.316 } 00:22:54.316 ], 00:22:54.316 "driver_specific": { 00:22:54.316 "passthru": { 00:22:54.316 "name": "pt4", 00:22:54.316 "base_bdev_name": "malloc4" 00:22:54.316 } 00:22:54.316 } 00:22:54.316 }' 00:22:54.316 04:18:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:54.574 04:18:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:54.574 04:18:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:54.574 04:18:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:54.574 04:18:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:54.574 04:18:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:54.574 04:18:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:54.574 04:18:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:54.574 04:18:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:54.574 04:18:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:22:54.574 04:18:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:54.832 04:18:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:54.832 04:18:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:54.832 04:18:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:22:54.832 [2024-07-23 04:18:03.563408] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:54.832 04:18:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=878535d3-e75a-4d5f-8f58-dd07618973af 00:22:54.832 04:18:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 878535d3-e75a-4d5f-8f58-dd07618973af ']' 00:22:54.832 04:18:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:55.091 [2024-07-23 04:18:03.791595] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:55.091 [2024-07-23 04:18:03.791626] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:55.091 [2024-07-23 04:18:03.791718] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:55.091 [2024-07-23 04:18:03.791799] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:55.091 [2024-07-23 04:18:03.791819] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042080 name raid_bdev1, state offline 00:22:55.091 04:18:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:55.091 04:18:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:22:55.350 04:18:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:22:55.350 04:18:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:22:55.350 04:18:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:22:55.350 04:18:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:22:55.609 04:18:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:22:55.609 04:18:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:55.868 04:18:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:22:55.868 04:18:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:56.125 04:18:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:22:56.125 04:18:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:22:56.384 04:18:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:56.384 04:18:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:22:56.664 04:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:22:56.664 04:18:05 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:22:56.664 04:18:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:22:56.664 04:18:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:22:56.664 04:18:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:56.664 04:18:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:56.665 04:18:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:56.665 04:18:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:56.665 04:18:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:56.665 04:18:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:56.665 04:18:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:56.665 04:18:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:22:56.665 04:18:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 
malloc4' -n raid_bdev1 00:22:56.665 [2024-07-23 04:18:05.391838] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:56.665 [2024-07-23 04:18:05.394170] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:56.665 [2024-07-23 04:18:05.394225] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:22:56.665 [2024-07-23 04:18:05.394270] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:22:56.665 [2024-07-23 04:18:05.394326] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:22:56.665 [2024-07-23 04:18:05.394379] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:22:56.665 [2024-07-23 04:18:05.394407] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:22:56.665 [2024-07-23 04:18:05.394437] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:22:56.665 [2024-07-23 04:18:05.394459] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:56.665 [2024-07-23 04:18:05.394479] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042680 name raid_bdev1, state configuring 00:22:56.665 request: 00:22:56.665 { 00:22:56.665 "name": "raid_bdev1", 00:22:56.665 "raid_level": "raid0", 00:22:56.665 "base_bdevs": [ 00:22:56.665 "malloc1", 00:22:56.665 "malloc2", 00:22:56.665 "malloc3", 00:22:56.665 "malloc4" 00:22:56.665 ], 00:22:56.665 "strip_size_kb": 64, 00:22:56.665 "superblock": false, 00:22:56.665 "method": "bdev_raid_create", 00:22:56.665 "req_id": 1 00:22:56.665 } 00:22:56.665 Got JSON-RPC error response 00:22:56.665 response: 00:22:56.665 { 00:22:56.665 "code": -17, 00:22:56.665 "message": "Failed 
to create RAID bdev raid_bdev1: File exists" 00:22:56.665 } 00:22:56.665 04:18:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:22:56.665 04:18:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:56.665 04:18:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:56.665 04:18:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:56.665 04:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:56.665 04:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:22:56.923 04:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:22:56.923 04:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:22:56.923 04:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:57.182 [2024-07-23 04:18:05.853002] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:57.182 [2024-07-23 04:18:05.853071] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:57.182 [2024-07-23 04:18:05.853094] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042c80 00:22:57.182 [2024-07-23 04:18:05.853112] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:57.182 [2024-07-23 04:18:05.855873] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:57.182 [2024-07-23 04:18:05.855910] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:57.182 [2024-07-23 04:18:05.856002] 
bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:57.182 [2024-07-23 04:18:05.856063] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:57.182 pt1 00:22:57.182 04:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:22:57.182 04:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:57.182 04:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:57.182 04:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:57.182 04:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:57.182 04:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:57.182 04:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:57.182 04:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:57.182 04:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:57.182 04:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:57.182 04:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:57.182 04:18:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:57.441 04:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:57.441 "name": "raid_bdev1", 00:22:57.441 "uuid": "878535d3-e75a-4d5f-8f58-dd07618973af", 00:22:57.441 "strip_size_kb": 64, 00:22:57.441 "state": "configuring", 00:22:57.441 "raid_level": "raid0", 00:22:57.441 "superblock": true, 
00:22:57.441 "num_base_bdevs": 4, 00:22:57.441 "num_base_bdevs_discovered": 1, 00:22:57.441 "num_base_bdevs_operational": 4, 00:22:57.441 "base_bdevs_list": [ 00:22:57.441 { 00:22:57.441 "name": "pt1", 00:22:57.441 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:57.441 "is_configured": true, 00:22:57.441 "data_offset": 2048, 00:22:57.441 "data_size": 63488 00:22:57.441 }, 00:22:57.441 { 00:22:57.441 "name": null, 00:22:57.441 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:57.441 "is_configured": false, 00:22:57.441 "data_offset": 2048, 00:22:57.441 "data_size": 63488 00:22:57.441 }, 00:22:57.441 { 00:22:57.441 "name": null, 00:22:57.441 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:57.441 "is_configured": false, 00:22:57.441 "data_offset": 2048, 00:22:57.441 "data_size": 63488 00:22:57.441 }, 00:22:57.441 { 00:22:57.441 "name": null, 00:22:57.441 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:57.441 "is_configured": false, 00:22:57.441 "data_offset": 2048, 00:22:57.441 "data_size": 63488 00:22:57.441 } 00:22:57.441 ] 00:22:57.441 }' 00:22:57.441 04:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:57.441 04:18:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:58.009 04:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:22:58.009 04:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:58.269 [2024-07-23 04:18:06.847730] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:58.269 [2024-07-23 04:18:06.847799] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:58.269 [2024-07-23 04:18:06.847825] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043580 
00:22:58.269 [2024-07-23 04:18:06.847843] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:58.269 [2024-07-23 04:18:06.848387] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:58.269 [2024-07-23 04:18:06.848415] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:58.269 [2024-07-23 04:18:06.848508] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:58.269 [2024-07-23 04:18:06.848540] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:58.269 pt2 00:22:58.269 04:18:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:58.528 [2024-07-23 04:18:07.072372] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:22:58.528 04:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:22:58.528 04:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:58.528 04:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:58.528 04:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:58.528 04:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:58.528 04:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:58.528 04:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:58.528 04:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:58.528 04:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:58.528 04:18:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:22:58.528 04:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:58.528 04:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:58.788 04:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:58.788 "name": "raid_bdev1", 00:22:58.788 "uuid": "878535d3-e75a-4d5f-8f58-dd07618973af", 00:22:58.788 "strip_size_kb": 64, 00:22:58.788 "state": "configuring", 00:22:58.788 "raid_level": "raid0", 00:22:58.788 "superblock": true, 00:22:58.788 "num_base_bdevs": 4, 00:22:58.788 "num_base_bdevs_discovered": 1, 00:22:58.788 "num_base_bdevs_operational": 4, 00:22:58.788 "base_bdevs_list": [ 00:22:58.788 { 00:22:58.788 "name": "pt1", 00:22:58.788 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:58.788 "is_configured": true, 00:22:58.788 "data_offset": 2048, 00:22:58.788 "data_size": 63488 00:22:58.788 }, 00:22:58.788 { 00:22:58.788 "name": null, 00:22:58.788 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:58.788 "is_configured": false, 00:22:58.788 "data_offset": 2048, 00:22:58.788 "data_size": 63488 00:22:58.788 }, 00:22:58.788 { 00:22:58.788 "name": null, 00:22:58.788 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:58.788 "is_configured": false, 00:22:58.788 "data_offset": 2048, 00:22:58.788 "data_size": 63488 00:22:58.788 }, 00:22:58.788 { 00:22:58.788 "name": null, 00:22:58.788 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:58.788 "is_configured": false, 00:22:58.788 "data_offset": 2048, 00:22:58.788 "data_size": 63488 00:22:58.788 } 00:22:58.788 ] 00:22:58.788 }' 00:22:58.788 04:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:58.788 04:18:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:59.356 04:18:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:22:59.356 04:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:22:59.356 04:18:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:59.356 [2024-07-23 04:18:08.058966] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:59.356 [2024-07-23 04:18:08.059029] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:59.356 [2024-07-23 04:18:08.059056] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043880 00:22:59.356 [2024-07-23 04:18:08.059072] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:59.356 [2024-07-23 04:18:08.059664] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:59.356 [2024-07-23 04:18:08.059691] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:59.356 [2024-07-23 04:18:08.059788] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:59.356 [2024-07-23 04:18:08.059816] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:59.356 pt2 00:22:59.356 04:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:22:59.356 04:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:22:59.356 04:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:59.615 [2024-07-23 04:18:08.283588] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:22:59.615 [2024-07-23 04:18:08.283636] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:59.615 [2024-07-23 04:18:08.283666] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043b80 00:22:59.615 [2024-07-23 04:18:08.283682] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:59.615 [2024-07-23 04:18:08.284233] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:59.615 [2024-07-23 04:18:08.284258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:59.615 [2024-07-23 04:18:08.284342] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:59.615 [2024-07-23 04:18:08.284367] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:59.615 pt3 00:22:59.615 04:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:22:59.615 04:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:22:59.615 04:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:59.874 [2024-07-23 04:18:08.508220] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:59.874 [2024-07-23 04:18:08.508268] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:59.874 [2024-07-23 04:18:08.508292] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043e80 00:22:59.875 [2024-07-23 04:18:08.508306] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:59.875 [2024-07-23 04:18:08.508792] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:59.875 [2024-07-23 04:18:08.508815] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:59.875 [2024-07-23 04:18:08.508901] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:22:59.875 [2024-07-23 04:18:08.508925] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:59.875 [2024-07-23 04:18:08.509102] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000043280 00:22:59.875 [2024-07-23 04:18:08.509117] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:59.875 [2024-07-23 04:18:08.509432] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:22:59.875 [2024-07-23 04:18:08.509649] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000043280 00:22:59.875 [2024-07-23 04:18:08.509668] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000043280 00:22:59.875 [2024-07-23 04:18:08.509841] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:59.875 pt4 00:22:59.875 04:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:22:59.875 04:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:22:59.875 04:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:22:59.875 04:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:59.875 04:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:59.875 04:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:59.875 04:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:59.875 04:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 
00:22:59.875 04:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:59.875 04:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:59.875 04:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:59.875 04:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:59.875 04:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:59.875 04:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:00.134 04:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:00.134 "name": "raid_bdev1", 00:23:00.134 "uuid": "878535d3-e75a-4d5f-8f58-dd07618973af", 00:23:00.134 "strip_size_kb": 64, 00:23:00.134 "state": "online", 00:23:00.134 "raid_level": "raid0", 00:23:00.134 "superblock": true, 00:23:00.134 "num_base_bdevs": 4, 00:23:00.134 "num_base_bdevs_discovered": 4, 00:23:00.134 "num_base_bdevs_operational": 4, 00:23:00.134 "base_bdevs_list": [ 00:23:00.134 { 00:23:00.134 "name": "pt1", 00:23:00.134 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:00.134 "is_configured": true, 00:23:00.134 "data_offset": 2048, 00:23:00.134 "data_size": 63488 00:23:00.134 }, 00:23:00.134 { 00:23:00.134 "name": "pt2", 00:23:00.134 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:00.134 "is_configured": true, 00:23:00.134 "data_offset": 2048, 00:23:00.134 "data_size": 63488 00:23:00.134 }, 00:23:00.134 { 00:23:00.134 "name": "pt3", 00:23:00.134 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:00.134 "is_configured": true, 00:23:00.134 "data_offset": 2048, 00:23:00.134 "data_size": 63488 00:23:00.134 }, 00:23:00.134 { 00:23:00.134 "name": "pt4", 00:23:00.134 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:23:00.134 "is_configured": true, 00:23:00.134 "data_offset": 2048, 00:23:00.134 "data_size": 63488 00:23:00.134 } 00:23:00.134 ] 00:23:00.134 }' 00:23:00.134 04:18:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:00.134 04:18:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:00.702 04:18:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:23:00.702 04:18:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:23:00.702 04:18:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:00.702 04:18:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:00.702 04:18:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:00.702 04:18:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:23:00.702 04:18:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:00.702 04:18:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:00.961 [2024-07-23 04:18:09.559472] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:00.961 04:18:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:00.961 "name": "raid_bdev1", 00:23:00.961 "aliases": [ 00:23:00.961 "878535d3-e75a-4d5f-8f58-dd07618973af" 00:23:00.961 ], 00:23:00.961 "product_name": "Raid Volume", 00:23:00.961 "block_size": 512, 00:23:00.961 "num_blocks": 253952, 00:23:00.961 "uuid": "878535d3-e75a-4d5f-8f58-dd07618973af", 00:23:00.961 "assigned_rate_limits": { 00:23:00.961 "rw_ios_per_sec": 0, 00:23:00.961 "rw_mbytes_per_sec": 0, 00:23:00.961 "r_mbytes_per_sec": 0, 
00:23:00.961 "w_mbytes_per_sec": 0 00:23:00.961 }, 00:23:00.961 "claimed": false, 00:23:00.961 "zoned": false, 00:23:00.961 "supported_io_types": { 00:23:00.961 "read": true, 00:23:00.961 "write": true, 00:23:00.961 "unmap": true, 00:23:00.961 "flush": true, 00:23:00.961 "reset": true, 00:23:00.961 "nvme_admin": false, 00:23:00.961 "nvme_io": false, 00:23:00.961 "nvme_io_md": false, 00:23:00.961 "write_zeroes": true, 00:23:00.961 "zcopy": false, 00:23:00.961 "get_zone_info": false, 00:23:00.961 "zone_management": false, 00:23:00.961 "zone_append": false, 00:23:00.961 "compare": false, 00:23:00.961 "compare_and_write": false, 00:23:00.961 "abort": false, 00:23:00.961 "seek_hole": false, 00:23:00.961 "seek_data": false, 00:23:00.961 "copy": false, 00:23:00.961 "nvme_iov_md": false 00:23:00.961 }, 00:23:00.961 "memory_domains": [ 00:23:00.961 { 00:23:00.961 "dma_device_id": "system", 00:23:00.961 "dma_device_type": 1 00:23:00.961 }, 00:23:00.961 { 00:23:00.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:00.961 "dma_device_type": 2 00:23:00.961 }, 00:23:00.961 { 00:23:00.961 "dma_device_id": "system", 00:23:00.961 "dma_device_type": 1 00:23:00.961 }, 00:23:00.961 { 00:23:00.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:00.962 "dma_device_type": 2 00:23:00.962 }, 00:23:00.962 { 00:23:00.962 "dma_device_id": "system", 00:23:00.962 "dma_device_type": 1 00:23:00.962 }, 00:23:00.962 { 00:23:00.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:00.962 "dma_device_type": 2 00:23:00.962 }, 00:23:00.962 { 00:23:00.962 "dma_device_id": "system", 00:23:00.962 "dma_device_type": 1 00:23:00.962 }, 00:23:00.962 { 00:23:00.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:00.962 "dma_device_type": 2 00:23:00.962 } 00:23:00.962 ], 00:23:00.962 "driver_specific": { 00:23:00.962 "raid": { 00:23:00.962 "uuid": "878535d3-e75a-4d5f-8f58-dd07618973af", 00:23:00.962 "strip_size_kb": 64, 00:23:00.962 "state": "online", 00:23:00.962 "raid_level": "raid0", 00:23:00.962 
"superblock": true, 00:23:00.962 "num_base_bdevs": 4, 00:23:00.962 "num_base_bdevs_discovered": 4, 00:23:00.962 "num_base_bdevs_operational": 4, 00:23:00.962 "base_bdevs_list": [ 00:23:00.962 { 00:23:00.962 "name": "pt1", 00:23:00.962 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:00.962 "is_configured": true, 00:23:00.962 "data_offset": 2048, 00:23:00.962 "data_size": 63488 00:23:00.962 }, 00:23:00.962 { 00:23:00.962 "name": "pt2", 00:23:00.962 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:00.962 "is_configured": true, 00:23:00.962 "data_offset": 2048, 00:23:00.962 "data_size": 63488 00:23:00.962 }, 00:23:00.962 { 00:23:00.962 "name": "pt3", 00:23:00.962 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:00.962 "is_configured": true, 00:23:00.962 "data_offset": 2048, 00:23:00.962 "data_size": 63488 00:23:00.962 }, 00:23:00.962 { 00:23:00.962 "name": "pt4", 00:23:00.962 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:00.962 "is_configured": true, 00:23:00.962 "data_offset": 2048, 00:23:00.962 "data_size": 63488 00:23:00.962 } 00:23:00.962 ] 00:23:00.962 } 00:23:00.962 } 00:23:00.962 }' 00:23:00.962 04:18:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:00.962 04:18:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:23:00.962 pt2 00:23:00.962 pt3 00:23:00.962 pt4' 00:23:00.962 04:18:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:00.962 04:18:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:23:00.962 04:18:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:01.221 04:18:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:01.221 "name": "pt1", 00:23:01.221 
"aliases": [ 00:23:01.221 "00000000-0000-0000-0000-000000000001" 00:23:01.221 ], 00:23:01.221 "product_name": "passthru", 00:23:01.221 "block_size": 512, 00:23:01.221 "num_blocks": 65536, 00:23:01.221 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:01.221 "assigned_rate_limits": { 00:23:01.221 "rw_ios_per_sec": 0, 00:23:01.221 "rw_mbytes_per_sec": 0, 00:23:01.221 "r_mbytes_per_sec": 0, 00:23:01.221 "w_mbytes_per_sec": 0 00:23:01.221 }, 00:23:01.221 "claimed": true, 00:23:01.221 "claim_type": "exclusive_write", 00:23:01.221 "zoned": false, 00:23:01.221 "supported_io_types": { 00:23:01.221 "read": true, 00:23:01.221 "write": true, 00:23:01.221 "unmap": true, 00:23:01.221 "flush": true, 00:23:01.221 "reset": true, 00:23:01.221 "nvme_admin": false, 00:23:01.221 "nvme_io": false, 00:23:01.221 "nvme_io_md": false, 00:23:01.221 "write_zeroes": true, 00:23:01.221 "zcopy": true, 00:23:01.221 "get_zone_info": false, 00:23:01.221 "zone_management": false, 00:23:01.221 "zone_append": false, 00:23:01.221 "compare": false, 00:23:01.221 "compare_and_write": false, 00:23:01.221 "abort": true, 00:23:01.221 "seek_hole": false, 00:23:01.221 "seek_data": false, 00:23:01.221 "copy": true, 00:23:01.221 "nvme_iov_md": false 00:23:01.221 }, 00:23:01.221 "memory_domains": [ 00:23:01.221 { 00:23:01.221 "dma_device_id": "system", 00:23:01.221 "dma_device_type": 1 00:23:01.221 }, 00:23:01.221 { 00:23:01.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:01.221 "dma_device_type": 2 00:23:01.221 } 00:23:01.221 ], 00:23:01.221 "driver_specific": { 00:23:01.221 "passthru": { 00:23:01.221 "name": "pt1", 00:23:01.221 "base_bdev_name": "malloc1" 00:23:01.221 } 00:23:01.221 } 00:23:01.221 }' 00:23:01.221 04:18:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:01.221 04:18:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:01.222 04:18:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 
00:23:01.222 04:18:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:01.222 04:18:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:01.479 04:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:01.479 04:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:01.479 04:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:01.479 04:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:01.479 04:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:01.479 04:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:01.479 04:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:01.479 04:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:01.479 04:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:23:01.479 04:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:01.738 04:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:01.738 "name": "pt2", 00:23:01.738 "aliases": [ 00:23:01.738 "00000000-0000-0000-0000-000000000002" 00:23:01.738 ], 00:23:01.738 "product_name": "passthru", 00:23:01.738 "block_size": 512, 00:23:01.738 "num_blocks": 65536, 00:23:01.738 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:01.738 "assigned_rate_limits": { 00:23:01.738 "rw_ios_per_sec": 0, 00:23:01.738 "rw_mbytes_per_sec": 0, 00:23:01.738 "r_mbytes_per_sec": 0, 00:23:01.738 "w_mbytes_per_sec": 0 00:23:01.738 }, 00:23:01.738 "claimed": true, 00:23:01.738 "claim_type": "exclusive_write", 00:23:01.738 "zoned": false, 00:23:01.738 
"supported_io_types": { 00:23:01.738 "read": true, 00:23:01.738 "write": true, 00:23:01.738 "unmap": true, 00:23:01.738 "flush": true, 00:23:01.738 "reset": true, 00:23:01.738 "nvme_admin": false, 00:23:01.738 "nvme_io": false, 00:23:01.738 "nvme_io_md": false, 00:23:01.738 "write_zeroes": true, 00:23:01.738 "zcopy": true, 00:23:01.738 "get_zone_info": false, 00:23:01.738 "zone_management": false, 00:23:01.738 "zone_append": false, 00:23:01.738 "compare": false, 00:23:01.738 "compare_and_write": false, 00:23:01.738 "abort": true, 00:23:01.738 "seek_hole": false, 00:23:01.738 "seek_data": false, 00:23:01.738 "copy": true, 00:23:01.738 "nvme_iov_md": false 00:23:01.738 }, 00:23:01.738 "memory_domains": [ 00:23:01.738 { 00:23:01.738 "dma_device_id": "system", 00:23:01.738 "dma_device_type": 1 00:23:01.738 }, 00:23:01.738 { 00:23:01.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:01.738 "dma_device_type": 2 00:23:01.738 } 00:23:01.738 ], 00:23:01.738 "driver_specific": { 00:23:01.738 "passthru": { 00:23:01.738 "name": "pt2", 00:23:01.738 "base_bdev_name": "malloc2" 00:23:01.738 } 00:23:01.738 } 00:23:01.738 }' 00:23:01.738 04:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:01.738 04:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:01.738 04:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:01.738 04:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:01.997 04:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:01.997 04:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:01.997 04:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:01.997 04:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:01.997 04:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- 
# [[ null == null ]] 00:23:01.997 04:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:01.997 04:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:01.997 04:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:01.997 04:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:01.997 04:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:23:01.997 04:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:02.256 04:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:02.256 "name": "pt3", 00:23:02.256 "aliases": [ 00:23:02.256 "00000000-0000-0000-0000-000000000003" 00:23:02.256 ], 00:23:02.256 "product_name": "passthru", 00:23:02.256 "block_size": 512, 00:23:02.256 "num_blocks": 65536, 00:23:02.256 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:02.256 "assigned_rate_limits": { 00:23:02.256 "rw_ios_per_sec": 0, 00:23:02.256 "rw_mbytes_per_sec": 0, 00:23:02.256 "r_mbytes_per_sec": 0, 00:23:02.256 "w_mbytes_per_sec": 0 00:23:02.256 }, 00:23:02.256 "claimed": true, 00:23:02.256 "claim_type": "exclusive_write", 00:23:02.256 "zoned": false, 00:23:02.256 "supported_io_types": { 00:23:02.256 "read": true, 00:23:02.256 "write": true, 00:23:02.256 "unmap": true, 00:23:02.256 "flush": true, 00:23:02.256 "reset": true, 00:23:02.256 "nvme_admin": false, 00:23:02.256 "nvme_io": false, 00:23:02.256 "nvme_io_md": false, 00:23:02.256 "write_zeroes": true, 00:23:02.256 "zcopy": true, 00:23:02.256 "get_zone_info": false, 00:23:02.256 "zone_management": false, 00:23:02.256 "zone_append": false, 00:23:02.256 "compare": false, 00:23:02.256 "compare_and_write": false, 00:23:02.256 "abort": true, 00:23:02.256 "seek_hole": false, 
00:23:02.256 "seek_data": false, 00:23:02.256 "copy": true, 00:23:02.256 "nvme_iov_md": false 00:23:02.256 }, 00:23:02.256 "memory_domains": [ 00:23:02.256 { 00:23:02.256 "dma_device_id": "system", 00:23:02.256 "dma_device_type": 1 00:23:02.256 }, 00:23:02.256 { 00:23:02.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:02.256 "dma_device_type": 2 00:23:02.256 } 00:23:02.256 ], 00:23:02.256 "driver_specific": { 00:23:02.256 "passthru": { 00:23:02.256 "name": "pt3", 00:23:02.256 "base_bdev_name": "malloc3" 00:23:02.256 } 00:23:02.256 } 00:23:02.256 }' 00:23:02.256 04:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:02.256 04:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:02.516 04:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:02.516 04:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:02.516 04:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:02.516 04:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:02.516 04:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:02.516 04:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:02.516 04:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:02.516 04:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:02.516 04:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:02.775 04:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:02.775 04:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:02.775 04:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:23:02.775 04:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:02.775 04:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:02.775 "name": "pt4", 00:23:02.775 "aliases": [ 00:23:02.775 "00000000-0000-0000-0000-000000000004" 00:23:02.775 ], 00:23:02.775 "product_name": "passthru", 00:23:02.775 "block_size": 512, 00:23:02.775 "num_blocks": 65536, 00:23:02.775 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:02.775 "assigned_rate_limits": { 00:23:02.775 "rw_ios_per_sec": 0, 00:23:02.775 "rw_mbytes_per_sec": 0, 00:23:02.775 "r_mbytes_per_sec": 0, 00:23:02.775 "w_mbytes_per_sec": 0 00:23:02.775 }, 00:23:02.775 "claimed": true, 00:23:02.775 "claim_type": "exclusive_write", 00:23:02.775 "zoned": false, 00:23:02.775 "supported_io_types": { 00:23:02.775 "read": true, 00:23:02.775 "write": true, 00:23:02.775 "unmap": true, 00:23:02.775 "flush": true, 00:23:02.775 "reset": true, 00:23:02.775 "nvme_admin": false, 00:23:02.775 "nvme_io": false, 00:23:02.775 "nvme_io_md": false, 00:23:02.775 "write_zeroes": true, 00:23:02.775 "zcopy": true, 00:23:02.775 "get_zone_info": false, 00:23:02.775 "zone_management": false, 00:23:02.775 "zone_append": false, 00:23:02.775 "compare": false, 00:23:02.775 "compare_and_write": false, 00:23:02.775 "abort": true, 00:23:02.775 "seek_hole": false, 00:23:02.775 "seek_data": false, 00:23:02.775 "copy": true, 00:23:02.775 "nvme_iov_md": false 00:23:02.775 }, 00:23:02.775 "memory_domains": [ 00:23:02.775 { 00:23:02.775 "dma_device_id": "system", 00:23:02.775 "dma_device_type": 1 00:23:02.775 }, 00:23:02.775 { 00:23:02.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:02.775 "dma_device_type": 2 00:23:02.775 } 00:23:02.775 ], 00:23:02.775 "driver_specific": { 00:23:02.775 "passthru": { 00:23:02.775 "name": "pt4", 00:23:02.775 "base_bdev_name": "malloc4" 
00:23:02.775 } 00:23:02.775 } 00:23:02.775 }' 00:23:02.775 04:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:03.034 04:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:03.034 04:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:03.034 04:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:03.034 04:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:03.034 04:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:03.034 04:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:03.034 04:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:03.034 04:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:03.034 04:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:03.293 04:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:03.293 04:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:03.293 04:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:03.293 04:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:23:03.553 [2024-07-23 04:18:12.086351] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:03.553 04:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 878535d3-e75a-4d5f-8f58-dd07618973af '!=' 878535d3-e75a-4d5f-8f58-dd07618973af ']' 00:23:03.553 04:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:23:03.553 04:18:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@213 -- # case $1 in 00:23:03.553 04:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:23:03.553 04:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2711212 00:23:03.553 04:18:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 2711212 ']' 00:23:03.553 04:18:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 2711212 00:23:03.553 04:18:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:23:03.553 04:18:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:03.553 04:18:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2711212 00:23:03.553 04:18:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:03.553 04:18:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:03.553 04:18:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2711212' 00:23:03.553 killing process with pid 2711212 00:23:03.553 04:18:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 2711212 00:23:03.553 [2024-07-23 04:18:12.165582] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:03.553 [2024-07-23 04:18:12.165675] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:03.553 04:18:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 2711212 00:23:03.553 [2024-07-23 04:18:12.165761] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:03.553 [2024-07-23 04:18:12.165777] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000043280 name raid_bdev1, state offline 00:23:03.812 [2024-07-23 04:18:12.592094] 
bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:05.717 04:18:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:23:05.717 00:23:05.717 real 0m17.554s 00:23:05.717 user 0m29.680s 00:23:05.717 sys 0m2.998s 00:23:05.717 04:18:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:05.717 04:18:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:05.717 ************************************ 00:23:05.717 END TEST raid_superblock_test 00:23:05.717 ************************************ 00:23:05.717 04:18:14 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:23:05.717 04:18:14 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:23:05.717 04:18:14 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:23:05.717 04:18:14 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:05.717 04:18:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:05.717 ************************************ 00:23:05.717 START TEST raid_read_error_test 00:23:05.717 ************************************ 00:23:05.717 04:18:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 4 read 00:23:05.717 04:18:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:23:05.717 04:18:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:23:05.717 04:18:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:23:05.717 04:18:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:23:05.717 04:18:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:05.717 04:18:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:23:05.717 04:18:14 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:05.717 04:18:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:05.717 04:18:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:23:05.717 04:18:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:05.717 04:18:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:05.717 04:18:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:23:05.717 04:18:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:05.717 04:18:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:05.717 04:18:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:23:05.717 04:18:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:05.717 04:18:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:05.717 04:18:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:05.717 04:18:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:23:05.717 04:18:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:23:05.717 04:18:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:23:05.717 04:18:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:23:05.717 04:18:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:23:05.717 04:18:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:23:05.717 04:18:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:23:05.717 04:18:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # 
strip_size=64 00:23:05.717 04:18:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:23:05.717 04:18:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:23:05.717 04:18:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.8NxfsnurOA 00:23:05.717 04:18:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2714980 00:23:05.717 04:18:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2714980 /var/tmp/spdk-raid.sock 00:23:05.717 04:18:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:23:05.717 04:18:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 2714980 ']' 00:23:05.717 04:18:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:05.717 04:18:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:05.717 04:18:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:05.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:05.717 04:18:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:05.717 04:18:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:05.717 [2024-07-23 04:18:14.454336] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:23:05.717 [2024-07-23 04:18:14.454460] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2714980 ] 00:23:05.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:05.977 EAL: Requested device 0000:3d:01.0 cannot be used
[... the same qat_pci_device_allocate()/EAL warning pair repeats for each remaining QAT virtual function, 0000:3d:01.1 through 0000:3f:02.7 ...]
00:23:05.978 [2024-07-23 04:18:14.680977] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.237 [2024-07-23 04:18:14.962929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:06.806 [2024-07-23 04:18:15.313266] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:06.806 [2024-07-23 04:18:15.313302] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:06.806 04:18:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:06.806 04:18:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:23:06.806 04:18:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:06.806 04:18:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:07.065 BaseBdev1_malloc 00:23:07.065 04:18:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:23:07.324 true 00:23:07.324 04:18:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:23:07.584 [2024-07-23 04:18:16.212776] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:23:07.584 [2024-07-23 04:18:16.212836] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:07.584 [2024-07-23 04:18:16.212863] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:23:07.584 [2024-07-23 04:18:16.212885] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:07.584 [2024-07-23 04:18:16.215675] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:07.584 [2024-07-23 04:18:16.215713] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:07.584 BaseBdev1 00:23:07.584 04:18:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:07.584 04:18:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:07.843 BaseBdev2_malloc 00:23:07.843 04:18:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:23:08.102 true 00:23:08.102 04:18:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:23:08.361 [2024-07-23 04:18:16.925176] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:23:08.361 [2024-07-23 04:18:16.925235] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:08.361 [2024-07-23 04:18:16.925261] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:23:08.362 [2024-07-23 04:18:16.925282] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:08.362 [2024-07-23 04:18:16.928027] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:08.362 [2024-07-23 04:18:16.928065] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:08.362 BaseBdev2 00:23:08.362 04:18:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:08.362 04:18:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:08.621 BaseBdev3_malloc 00:23:08.621 04:18:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:23:08.933 true 00:23:08.933 04:18:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:23:08.933 [2024-07-23 04:18:17.657277] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:23:08.933 [2024-07-23 04:18:17.657333] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:08.933 [2024-07-23 04:18:17.657360] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041780 00:23:08.933 [2024-07-23 04:18:17.657377] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:08.933 [2024-07-23 
04:18:17.660186] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:08.933 [2024-07-23 04:18:17.660224] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:08.933 BaseBdev3 00:23:08.933 04:18:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:08.933 04:18:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:23:09.191 BaseBdev4_malloc 00:23:09.192 04:18:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:23:09.759 true 00:23:09.759 04:18:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:23:10.018 [2024-07-23 04:18:18.690255] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:23:10.018 [2024-07-23 04:18:18.690320] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:10.018 [2024-07-23 04:18:18.690348] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042680 00:23:10.018 [2024-07-23 04:18:18.690367] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:10.018 [2024-07-23 04:18:18.693161] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:10.018 [2024-07-23 04:18:18.693199] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:10.018 BaseBdev4 00:23:10.018 04:18:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:23:10.587 [2024-07-23 04:18:19.187613] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:10.587 [2024-07-23 04:18:19.189967] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:10.587 [2024-07-23 04:18:19.190066] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:10.587 [2024-07-23 04:18:19.190157] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:10.587 [2024-07-23 04:18:19.190464] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042c80 00:23:10.587 [2024-07-23 04:18:19.190484] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:23:10.587 [2024-07-23 04:18:19.190819] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:23:10.587 [2024-07-23 04:18:19.191094] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042c80 00:23:10.587 [2024-07-23 04:18:19.191111] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000042c80 00:23:10.587 [2024-07-23 04:18:19.191325] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:10.587 04:18:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:23:10.587 04:18:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:10.587 04:18:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:10.587 04:18:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:10.587 04:18:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:10.587 04:18:19 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:10.587 04:18:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:10.587 04:18:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:10.587 04:18:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:10.587 04:18:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:10.587 04:18:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:10.587 04:18:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:10.846 04:18:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:10.846 "name": "raid_bdev1", 00:23:10.846 "uuid": "574c3455-7940-43be-9a39-07c54f5e53d4", 00:23:10.846 "strip_size_kb": 64, 00:23:10.846 "state": "online", 00:23:10.846 "raid_level": "raid0", 00:23:10.846 "superblock": true, 00:23:10.846 "num_base_bdevs": 4, 00:23:10.846 "num_base_bdevs_discovered": 4, 00:23:10.846 "num_base_bdevs_operational": 4, 00:23:10.846 "base_bdevs_list": [ 00:23:10.846 { 00:23:10.846 "name": "BaseBdev1", 00:23:10.846 "uuid": "5970dbd0-5171-53ad-b0c5-b9f9759a5f11", 00:23:10.846 "is_configured": true, 00:23:10.846 "data_offset": 2048, 00:23:10.846 "data_size": 63488 00:23:10.846 }, 00:23:10.846 { 00:23:10.846 "name": "BaseBdev2", 00:23:10.846 "uuid": "dc17375d-7e7a-5e84-acd9-8dce469bf976", 00:23:10.846 "is_configured": true, 00:23:10.846 "data_offset": 2048, 00:23:10.846 "data_size": 63488 00:23:10.846 }, 00:23:10.846 { 00:23:10.846 "name": "BaseBdev3", 00:23:10.846 "uuid": "946259a2-f57c-540c-bc66-237b65c060e4", 00:23:10.846 "is_configured": true, 00:23:10.846 "data_offset": 2048, 00:23:10.846 "data_size": 63488 
00:23:10.846 }, 00:23:10.846 { 00:23:10.846 "name": "BaseBdev4", 00:23:10.846 "uuid": "613ca791-0b1e-5c5e-8bcd-20ac30f7bfa6", 00:23:10.846 "is_configured": true, 00:23:10.846 "data_offset": 2048, 00:23:10.846 "data_size": 63488 00:23:10.846 } 00:23:10.846 ] 00:23:10.846 }' 00:23:10.846 04:18:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:10.846 04:18:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:11.413 04:18:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:23:11.413 04:18:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:23:11.414 [2024-07-23 04:18:20.092090] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000108b0 00:23:12.347 04:18:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:23:12.913 04:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:23:12.913 04:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:23:12.913 04:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:23:12.913 04:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:23:12.913 04:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:12.913 04:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:12.913 04:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:12.913 04:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- 
# local strip_size=64 00:23:12.913 04:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:12.913 04:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:12.913 04:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:12.913 04:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:12.913 04:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:12.913 04:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:12.913 04:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:13.171 04:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:13.171 "name": "raid_bdev1", 00:23:13.171 "uuid": "574c3455-7940-43be-9a39-07c54f5e53d4", 00:23:13.171 "strip_size_kb": 64, 00:23:13.171 "state": "online", 00:23:13.171 "raid_level": "raid0", 00:23:13.171 "superblock": true, 00:23:13.171 "num_base_bdevs": 4, 00:23:13.171 "num_base_bdevs_discovered": 4, 00:23:13.171 "num_base_bdevs_operational": 4, 00:23:13.171 "base_bdevs_list": [ 00:23:13.171 { 00:23:13.171 "name": "BaseBdev1", 00:23:13.171 "uuid": "5970dbd0-5171-53ad-b0c5-b9f9759a5f11", 00:23:13.171 "is_configured": true, 00:23:13.171 "data_offset": 2048, 00:23:13.171 "data_size": 63488 00:23:13.171 }, 00:23:13.171 { 00:23:13.171 "name": "BaseBdev2", 00:23:13.171 "uuid": "dc17375d-7e7a-5e84-acd9-8dce469bf976", 00:23:13.171 "is_configured": true, 00:23:13.171 "data_offset": 2048, 00:23:13.171 "data_size": 63488 00:23:13.171 }, 00:23:13.171 { 00:23:13.171 "name": "BaseBdev3", 00:23:13.171 "uuid": "946259a2-f57c-540c-bc66-237b65c060e4", 00:23:13.171 "is_configured": true, 00:23:13.171 
"data_offset": 2048, 00:23:13.171 "data_size": 63488 00:23:13.171 }, 00:23:13.171 { 00:23:13.171 "name": "BaseBdev4", 00:23:13.171 "uuid": "613ca791-0b1e-5c5e-8bcd-20ac30f7bfa6", 00:23:13.171 "is_configured": true, 00:23:13.171 "data_offset": 2048, 00:23:13.171 "data_size": 63488 00:23:13.171 } 00:23:13.171 ] 00:23:13.171 }' 00:23:13.171 04:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:13.172 04:18:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:13.739 04:18:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:13.739 [2024-07-23 04:18:22.511605] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:13.739 [2024-07-23 04:18:22.511647] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:13.739 [2024-07-23 04:18:22.514883] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:13.739 [2024-07-23 04:18:22.514944] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:13.739 [2024-07-23 04:18:22.514999] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:13.739 [2024-07-23 04:18:22.515024] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042c80 name raid_bdev1, state offline 00:23:13.739 0 00:23:13.997 04:18:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2714980 00:23:13.997 04:18:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 2714980 ']' 00:23:13.997 04:18:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 2714980 00:23:13.997 04:18:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:23:13.997 04:18:22 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:13.997 04:18:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2714980 00:23:13.997 04:18:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:13.997 04:18:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:13.997 04:18:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2714980' 00:23:13.997 killing process with pid 2714980 00:23:13.997 04:18:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 2714980 00:23:13.997 [2024-07-23 04:18:22.581326] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:13.997 04:18:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 2714980 00:23:14.256 [2024-07-23 04:18:22.920084] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:16.158 04:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.8NxfsnurOA 00:23:16.158 04:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:23:16.158 04:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:23:16.158 04:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.41 00:23:16.158 04:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:23:16.158 04:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:16.158 04:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:23:16.158 04:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.41 != \0\.\0\0 ]] 00:23:16.158 00:23:16.158 real 0m10.284s 00:23:16.158 user 0m15.150s 00:23:16.158 sys 0m1.510s 00:23:16.158 04:18:24 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:23:16.158 04:18:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:16.158 ************************************ 00:23:16.158 END TEST raid_read_error_test 00:23:16.159 ************************************ 00:23:16.159 04:18:24 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:23:16.159 04:18:24 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:23:16.159 04:18:24 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:23:16.159 04:18:24 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:16.159 04:18:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:16.159 ************************************ 00:23:16.159 START TEST raid_write_error_test 00:23:16.159 ************************************ 00:23:16.159 04:18:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 4 write 00:23:16.159 04:18:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:23:16.159 04:18:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:23:16.159 04:18:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:23:16.159 04:18:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:23:16.159 04:18:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:16.159 04:18:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:23:16.159 04:18:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:16.159 04:18:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:16.159 04:18:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:23:16.159 04:18:24 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:16.159 04:18:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:16.159 04:18:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:23:16.159 04:18:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:16.159 04:18:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:16.159 04:18:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:23:16.159 04:18:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:16.159 04:18:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:16.159 04:18:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:16.159 04:18:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:23:16.159 04:18:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:23:16.159 04:18:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:23:16.159 04:18:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:23:16.159 04:18:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:23:16.159 04:18:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:23:16.159 04:18:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:23:16.159 04:18:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:23:16.159 04:18:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:23:16.159 04:18:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:23:16.159 04:18:24 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.WMPrKm1baH 00:23:16.159 04:18:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2716836 00:23:16.159 04:18:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2716836 /var/tmp/spdk-raid.sock 00:23:16.159 04:18:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:23:16.159 04:18:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 2716836 ']' 00:23:16.159 04:18:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:16.159 04:18:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:16.159 04:18:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:16.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:16.159 04:18:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:16.159 04:18:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:16.159 [2024-07-23 04:18:24.824100] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:23:16.159 [2024-07-23 04:18:24.824229] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2716836 ] 00:23:16.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.418 EAL: Requested device 0000:3d:01.0 cannot be used 00:23:16.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.418 EAL: Requested device 0000:3d:01.1 cannot be used 00:23:16.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.418 EAL: Requested device 0000:3d:01.2 cannot be used 00:23:16.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.418 EAL: Requested device 0000:3d:01.3 cannot be used 00:23:16.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.418 EAL: Requested device 0000:3d:01.4 cannot be used 00:23:16.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.418 EAL: Requested device 0000:3d:01.5 cannot be used 00:23:16.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.418 EAL: Requested device 0000:3d:01.6 cannot be used 00:23:16.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.418 EAL: Requested device 0000:3d:01.7 cannot be used 00:23:16.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.418 EAL: Requested device 0000:3d:02.0 cannot be used 00:23:16.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.418 EAL: Requested device 0000:3d:02.1 cannot be used 00:23:16.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.418 EAL: Requested device 0000:3d:02.2 cannot be used 00:23:16.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.418 EAL: Requested device 0000:3d:02.3 cannot be used 
00:23:16.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.418 EAL: Requested device 0000:3d:02.4 cannot be used 00:23:16.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.418 EAL: Requested device 0000:3d:02.5 cannot be used 00:23:16.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.418 EAL: Requested device 0000:3d:02.6 cannot be used 00:23:16.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.418 EAL: Requested device 0000:3d:02.7 cannot be used 00:23:16.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.418 EAL: Requested device 0000:3f:01.0 cannot be used 00:23:16.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.418 EAL: Requested device 0000:3f:01.1 cannot be used 00:23:16.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.418 EAL: Requested device 0000:3f:01.2 cannot be used 00:23:16.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.418 EAL: Requested device 0000:3f:01.3 cannot be used 00:23:16.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.418 EAL: Requested device 0000:3f:01.4 cannot be used 00:23:16.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.418 EAL: Requested device 0000:3f:01.5 cannot be used 00:23:16.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.418 EAL: Requested device 0000:3f:01.6 cannot be used 00:23:16.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.418 EAL: Requested device 0000:3f:01.7 cannot be used 00:23:16.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.418 EAL: Requested device 0000:3f:02.0 cannot be used 00:23:16.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.418 EAL: Requested device 0000:3f:02.1 cannot be used 00:23:16.418 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.418 EAL: Requested device 0000:3f:02.2 cannot be used 00:23:16.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.418 EAL: Requested device 0000:3f:02.3 cannot be used 00:23:16.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.418 EAL: Requested device 0000:3f:02.4 cannot be used 00:23:16.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.418 EAL: Requested device 0000:3f:02.5 cannot be used 00:23:16.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.418 EAL: Requested device 0000:3f:02.6 cannot be used 00:23:16.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.418 EAL: Requested device 0000:3f:02.7 cannot be used 00:23:16.418 [2024-07-23 04:18:25.050394] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.677 [2024-07-23 04:18:25.313824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:16.936 [2024-07-23 04:18:25.633870] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:16.936 [2024-07-23 04:18:25.633906] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:17.195 04:18:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:17.195 04:18:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:23:17.195 04:18:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:17.195 04:18:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:17.454 BaseBdev1_malloc 00:23:17.454 04:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:23:17.712 true 00:23:17.712 04:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:23:17.970 [2024-07-23 04:18:26.521691] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:23:17.970 [2024-07-23 04:18:26.521752] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:17.970 [2024-07-23 04:18:26.521779] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:23:17.970 [2024-07-23 04:18:26.521801] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:17.970 [2024-07-23 04:18:26.524601] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:17.970 [2024-07-23 04:18:26.524640] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:17.970 BaseBdev1 00:23:17.970 04:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:17.970 04:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:18.229 BaseBdev2_malloc 00:23:18.229 04:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:23:18.487 true 00:23:18.487 04:18:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:23:18.487 [2024-07-23 04:18:27.254837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on EE_BaseBdev2_malloc 00:23:18.487 [2024-07-23 04:18:27.254897] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:18.487 [2024-07-23 04:18:27.254922] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:23:18.487 [2024-07-23 04:18:27.254943] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:18.487 [2024-07-23 04:18:27.257696] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:18.487 [2024-07-23 04:18:27.257735] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:18.487 BaseBdev2 00:23:18.745 04:18:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:18.745 04:18:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:19.313 BaseBdev3_malloc 00:23:19.313 04:18:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:23:19.313 true 00:23:19.313 04:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:23:19.880 [2024-07-23 04:18:28.533088] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:23:19.880 [2024-07-23 04:18:28.533156] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:19.880 [2024-07-23 04:18:28.533183] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041780 00:23:19.880 [2024-07-23 04:18:28.533201] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:19.880 
[2024-07-23 04:18:28.536002] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:19.880 [2024-07-23 04:18:28.536041] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:19.880 BaseBdev3 00:23:19.880 04:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:19.880 04:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:23:20.138 BaseBdev4_malloc 00:23:20.138 04:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:23:20.703 true 00:23:20.703 04:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:23:21.271 [2024-07-23 04:18:29.820040] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:23:21.271 [2024-07-23 04:18:29.820103] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:21.271 [2024-07-23 04:18:29.820134] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042680 00:23:21.271 [2024-07-23 04:18:29.820161] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:21.271 [2024-07-23 04:18:29.822914] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:21.271 [2024-07-23 04:18:29.822951] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:21.271 BaseBdev4 00:23:21.271 04:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:23:21.530 [2024-07-23 04:18:30.056734] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:21.530 [2024-07-23 04:18:30.059105] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:21.530 [2024-07-23 04:18:30.059211] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:21.530 [2024-07-23 04:18:30.059292] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:21.530 [2024-07-23 04:18:30.059594] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042c80 00:23:21.530 [2024-07-23 04:18:30.059615] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:23:21.530 [2024-07-23 04:18:30.059970] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:23:21.530 [2024-07-23 04:18:30.060236] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042c80 00:23:21.530 [2024-07-23 04:18:30.060255] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000042c80 00:23:21.530 [2024-07-23 04:18:30.060469] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:21.530 04:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:23:21.530 04:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:21.530 04:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:21.530 04:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:21.530 04:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:21.530 04:18:30 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:21.530 04:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:21.530 04:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:21.530 04:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:21.530 04:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:21.530 04:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:21.530 04:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:21.530 04:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:21.530 "name": "raid_bdev1", 00:23:21.530 "uuid": "42ab97ce-1e35-483e-a5b4-fce76670621e", 00:23:21.530 "strip_size_kb": 64, 00:23:21.530 "state": "online", 00:23:21.530 "raid_level": "raid0", 00:23:21.530 "superblock": true, 00:23:21.530 "num_base_bdevs": 4, 00:23:21.530 "num_base_bdevs_discovered": 4, 00:23:21.530 "num_base_bdevs_operational": 4, 00:23:21.530 "base_bdevs_list": [ 00:23:21.530 { 00:23:21.530 "name": "BaseBdev1", 00:23:21.530 "uuid": "3ab6108c-cdd5-543a-a335-ec1255df6dcf", 00:23:21.530 "is_configured": true, 00:23:21.530 "data_offset": 2048, 00:23:21.530 "data_size": 63488 00:23:21.530 }, 00:23:21.530 { 00:23:21.530 "name": "BaseBdev2", 00:23:21.530 "uuid": "3f957647-af58-5e45-b329-0c3be0931b79", 00:23:21.530 "is_configured": true, 00:23:21.530 "data_offset": 2048, 00:23:21.530 "data_size": 63488 00:23:21.530 }, 00:23:21.530 { 00:23:21.530 "name": "BaseBdev3", 00:23:21.530 "uuid": "434bb951-078b-587b-8889-a1fbbd681745", 00:23:21.530 "is_configured": true, 00:23:21.530 "data_offset": 2048, 00:23:21.530 "data_size": 
63488 00:23:21.530 }, 00:23:21.530 { 00:23:21.530 "name": "BaseBdev4", 00:23:21.530 "uuid": "51ec10d9-abb6-5f16-8e0d-1f53f74e74ec", 00:23:21.530 "is_configured": true, 00:23:21.530 "data_offset": 2048, 00:23:21.530 "data_size": 63488 00:23:21.530 } 00:23:21.530 ] 00:23:21.530 }' 00:23:21.530 04:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:21.530 04:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:22.134 04:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:23:22.134 04:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:23:22.393 [2024-07-23 04:18:30.977321] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000108b0 00:23:23.330 04:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:23:23.330 04:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:23:23.330 04:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:23:23.330 04:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:23:23.330 04:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:23:23.330 04:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:23.330 04:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:23.330 04:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:23.330 04:18:32 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:23.330 04:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:23.330 04:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:23.330 04:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:23.330 04:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:23.330 04:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:23.592 04:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:23.592 04:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:23.592 04:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:23.592 "name": "raid_bdev1", 00:23:23.592 "uuid": "42ab97ce-1e35-483e-a5b4-fce76670621e", 00:23:23.592 "strip_size_kb": 64, 00:23:23.592 "state": "online", 00:23:23.592 "raid_level": "raid0", 00:23:23.592 "superblock": true, 00:23:23.592 "num_base_bdevs": 4, 00:23:23.592 "num_base_bdevs_discovered": 4, 00:23:23.592 "num_base_bdevs_operational": 4, 00:23:23.592 "base_bdevs_list": [ 00:23:23.592 { 00:23:23.592 "name": "BaseBdev1", 00:23:23.592 "uuid": "3ab6108c-cdd5-543a-a335-ec1255df6dcf", 00:23:23.592 "is_configured": true, 00:23:23.592 "data_offset": 2048, 00:23:23.592 "data_size": 63488 00:23:23.592 }, 00:23:23.592 { 00:23:23.592 "name": "BaseBdev2", 00:23:23.592 "uuid": "3f957647-af58-5e45-b329-0c3be0931b79", 00:23:23.592 "is_configured": true, 00:23:23.592 "data_offset": 2048, 00:23:23.592 "data_size": 63488 00:23:23.592 }, 00:23:23.592 { 00:23:23.592 "name": "BaseBdev3", 00:23:23.592 "uuid": "434bb951-078b-587b-8889-a1fbbd681745", 00:23:23.592 "is_configured": 
true, 00:23:23.592 "data_offset": 2048, 00:23:23.592 "data_size": 63488 00:23:23.592 }, 00:23:23.592 { 00:23:23.592 "name": "BaseBdev4", 00:23:23.592 "uuid": "51ec10d9-abb6-5f16-8e0d-1f53f74e74ec", 00:23:23.592 "is_configured": true, 00:23:23.592 "data_offset": 2048, 00:23:23.592 "data_size": 63488 00:23:23.592 } 00:23:23.592 ] 00:23:23.592 }' 00:23:23.592 04:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:23.592 04:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:24.160 04:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:24.419 [2024-07-23 04:18:33.137420] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:24.419 [2024-07-23 04:18:33.137469] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:24.419 [2024-07-23 04:18:33.140747] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:24.419 [2024-07-23 04:18:33.140809] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:24.419 [2024-07-23 04:18:33.140862] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:24.419 [2024-07-23 04:18:33.140891] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042c80 name raid_bdev1, state offline 00:23:24.419 0 00:23:24.419 04:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2716836 00:23:24.419 04:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 2716836 ']' 00:23:24.419 04:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 2716836 00:23:24.419 04:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:23:24.419 04:18:33 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:24.419 04:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2716836 00:23:24.677 04:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:24.677 04:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:24.677 04:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2716836' 00:23:24.677 killing process with pid 2716836 00:23:24.677 04:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 2716836 00:23:24.677 [2024-07-23 04:18:33.213147] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:24.677 04:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 2716836 00:23:24.936 [2024-07-23 04:18:33.569213] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:26.837 04:18:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.WMPrKm1baH 00:23:26.837 04:18:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:23:26.837 04:18:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:23:26.837 04:18:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.46 00:23:26.837 04:18:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:23:26.837 04:18:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:26.837 04:18:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:23:26.837 04:18:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.46 != \0\.\0\0 ]] 00:23:26.837 00:23:26.837 real 0m10.628s 00:23:26.837 user 0m15.724s 00:23:26.837 sys 0m1.610s 00:23:26.837 04:18:35 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:26.837 04:18:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:26.837 ************************************ 00:23:26.837 END TEST raid_write_error_test 00:23:26.837 ************************************ 00:23:26.837 04:18:35 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:23:26.837 04:18:35 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:23:26.837 04:18:35 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:23:26.837 04:18:35 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:23:26.837 04:18:35 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:26.837 04:18:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:26.837 ************************************ 00:23:26.837 START TEST raid_state_function_test 00:23:26.837 ************************************ 00:23:26.837 04:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 4 false 00:23:26.837 04:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:23:26.837 04:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:23:26.837 04:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:23:26.837 04:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:23:26.837 04:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:23:26.837 04:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:26.837 04:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:23:26.837 04:18:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:26.837 04:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:26.837 04:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:23:26.838 04:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:26.838 04:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:26.838 04:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:23:26.838 04:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:26.838 04:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:26.838 04:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:23:26.838 04:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:26.838 04:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:26.838 04:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:26.838 04:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:23:26.838 04:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:23:26.838 04:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:23:26.838 04:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:23:26.838 04:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:23:26.838 04:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:23:26.838 04:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # 
strip_size=64 00:23:26.838 04:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:23:26.838 04:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:23:26.838 04:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:23:26.838 04:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2718613 00:23:26.838 04:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2718613' 00:23:26.838 Process raid pid: 2718613 00:23:26.838 04:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2718613 /var/tmp/spdk-raid.sock 00:23:26.838 04:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 2718613 ']' 00:23:26.838 04:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:26.838 04:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:26.838 04:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:26.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:26.838 04:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:26.838 04:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:26.838 04:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:23:26.838 [2024-07-23 04:18:35.515189] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:23:26.838 [2024-07-23 04:18:35.515303] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:27.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:27.097 EAL: Requested device 0000:3d:01.0 cannot be used 00:23:27.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:27.097 EAL: Requested device 0000:3d:01.1 cannot be used 00:23:27.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:27.097 EAL: Requested device 0000:3d:01.2 cannot be used 00:23:27.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:27.097 EAL: Requested device 0000:3d:01.3 cannot be used 00:23:27.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:27.097 EAL: Requested device 0000:3d:01.4 cannot be used 00:23:27.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:27.097 EAL: Requested device 0000:3d:01.5 cannot be used 00:23:27.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:27.097 EAL: Requested device 0000:3d:01.6 cannot be used 00:23:27.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:27.097 EAL: Requested device 0000:3d:01.7 cannot be used 00:23:27.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:27.097 EAL: Requested device 0000:3d:02.0 cannot be used 00:23:27.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:27.097 EAL: Requested device 0000:3d:02.1 cannot be used 00:23:27.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:27.097 EAL: Requested device 0000:3d:02.2 cannot be used 00:23:27.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:27.097 EAL: Requested device 0000:3d:02.3 cannot be used 00:23:27.097 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:27.097 EAL: Requested device 0000:3d:02.4 cannot be used 00:23:27.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:27.097 EAL: Requested device 0000:3d:02.5 cannot be used 00:23:27.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:27.097 EAL: Requested device 0000:3d:02.6 cannot be used 00:23:27.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:27.097 EAL: Requested device 0000:3d:02.7 cannot be used 00:23:27.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:27.097 EAL: Requested device 0000:3f:01.0 cannot be used 00:23:27.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:27.097 EAL: Requested device 0000:3f:01.1 cannot be used 00:23:27.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:27.097 EAL: Requested device 0000:3f:01.2 cannot be used 00:23:27.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:27.097 EAL: Requested device 0000:3f:01.3 cannot be used 00:23:27.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:27.097 EAL: Requested device 0000:3f:01.4 cannot be used 00:23:27.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:27.097 EAL: Requested device 0000:3f:01.5 cannot be used 00:23:27.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:27.098 EAL: Requested device 0000:3f:01.6 cannot be used 00:23:27.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:27.098 EAL: Requested device 0000:3f:01.7 cannot be used 00:23:27.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:27.098 EAL: Requested device 0000:3f:02.0 cannot be used 00:23:27.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:27.098 EAL: Requested device 0000:3f:02.1 cannot be used 00:23:27.098 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:27.098 EAL: Requested device 0000:3f:02.2 cannot be used 00:23:27.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:27.098 EAL: Requested device 0000:3f:02.3 cannot be used 00:23:27.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:27.098 EAL: Requested device 0000:3f:02.4 cannot be used 00:23:27.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:27.098 EAL: Requested device 0000:3f:02.5 cannot be used 00:23:27.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:27.098 EAL: Requested device 0000:3f:02.6 cannot be used 00:23:27.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:27.098 EAL: Requested device 0000:3f:02.7 cannot be used 00:23:27.098 [2024-07-23 04:18:35.741961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.356 [2024-07-23 04:18:36.009259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:27.615 [2024-07-23 04:18:36.348124] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:27.615 [2024-07-23 04:18:36.348165] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:27.873 04:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:27.873 04:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:23:27.873 04:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:28.132 [2024-07-23 04:18:36.727960] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:28.132 [2024-07-23 04:18:36.728015] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 
doesn't exist now 00:23:28.132 [2024-07-23 04:18:36.728030] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:28.132 [2024-07-23 04:18:36.728047] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:28.132 [2024-07-23 04:18:36.728059] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:28.132 [2024-07-23 04:18:36.728075] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:28.132 [2024-07-23 04:18:36.728086] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:28.132 [2024-07-23 04:18:36.728103] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:28.132 04:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:28.132 04:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:28.132 04:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:28.132 04:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:28.132 04:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:28.132 04:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:28.132 04:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:28.132 04:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:28.132 04:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:28.132 04:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:28.132 04:18:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:28.132 04:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:28.390 04:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:28.390 "name": "Existed_Raid", 00:23:28.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:28.390 "strip_size_kb": 64, 00:23:28.390 "state": "configuring", 00:23:28.390 "raid_level": "concat", 00:23:28.390 "superblock": false, 00:23:28.390 "num_base_bdevs": 4, 00:23:28.390 "num_base_bdevs_discovered": 0, 00:23:28.390 "num_base_bdevs_operational": 4, 00:23:28.390 "base_bdevs_list": [ 00:23:28.390 { 00:23:28.390 "name": "BaseBdev1", 00:23:28.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:28.390 "is_configured": false, 00:23:28.390 "data_offset": 0, 00:23:28.390 "data_size": 0 00:23:28.390 }, 00:23:28.390 { 00:23:28.390 "name": "BaseBdev2", 00:23:28.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:28.390 "is_configured": false, 00:23:28.390 "data_offset": 0, 00:23:28.390 "data_size": 0 00:23:28.390 }, 00:23:28.390 { 00:23:28.390 "name": "BaseBdev3", 00:23:28.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:28.390 "is_configured": false, 00:23:28.390 "data_offset": 0, 00:23:28.390 "data_size": 0 00:23:28.390 }, 00:23:28.390 { 00:23:28.390 "name": "BaseBdev4", 00:23:28.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:28.390 "is_configured": false, 00:23:28.390 "data_offset": 0, 00:23:28.390 "data_size": 0 00:23:28.390 } 00:23:28.390 ] 00:23:28.390 }' 00:23:28.390 04:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:28.390 04:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:28.957 04:18:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:29.215 [2024-07-23 04:18:37.758577] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:29.215 [2024-07-23 04:18:37.758618] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:23:29.215 04:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:29.215 [2024-07-23 04:18:37.971213] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:29.215 [2024-07-23 04:18:37.971259] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:29.215 [2024-07-23 04:18:37.971273] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:29.215 [2024-07-23 04:18:37.971297] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:29.215 [2024-07-23 04:18:37.971309] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:29.215 [2024-07-23 04:18:37.971324] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:29.215 [2024-07-23 04:18:37.971336] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:29.215 [2024-07-23 04:18:37.971351] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:29.215 04:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:29.781 [2024-07-23 04:18:38.524212] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:29.781 BaseBdev1 00:23:29.781 04:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:23:29.781 04:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:23:29.781 04:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:29.781 04:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:23:29.781 04:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:29.781 04:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:29.781 04:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:30.039 04:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:30.297 [ 00:23:30.297 { 00:23:30.297 "name": "BaseBdev1", 00:23:30.297 "aliases": [ 00:23:30.297 "334f85bb-9c81-438c-bb45-e01fb0724bb6" 00:23:30.297 ], 00:23:30.297 "product_name": "Malloc disk", 00:23:30.297 "block_size": 512, 00:23:30.297 "num_blocks": 65536, 00:23:30.297 "uuid": "334f85bb-9c81-438c-bb45-e01fb0724bb6", 00:23:30.297 "assigned_rate_limits": { 00:23:30.297 "rw_ios_per_sec": 0, 00:23:30.297 "rw_mbytes_per_sec": 0, 00:23:30.297 "r_mbytes_per_sec": 0, 00:23:30.297 "w_mbytes_per_sec": 0 00:23:30.297 }, 00:23:30.297 "claimed": true, 00:23:30.297 "claim_type": "exclusive_write", 00:23:30.297 "zoned": false, 00:23:30.297 "supported_io_types": { 00:23:30.297 "read": true, 00:23:30.297 "write": true, 00:23:30.297 "unmap": true, 00:23:30.297 "flush": true, 00:23:30.297 
"reset": true, 00:23:30.297 "nvme_admin": false, 00:23:30.297 "nvme_io": false, 00:23:30.297 "nvme_io_md": false, 00:23:30.297 "write_zeroes": true, 00:23:30.297 "zcopy": true, 00:23:30.297 "get_zone_info": false, 00:23:30.297 "zone_management": false, 00:23:30.297 "zone_append": false, 00:23:30.297 "compare": false, 00:23:30.297 "compare_and_write": false, 00:23:30.297 "abort": true, 00:23:30.297 "seek_hole": false, 00:23:30.297 "seek_data": false, 00:23:30.297 "copy": true, 00:23:30.297 "nvme_iov_md": false 00:23:30.297 }, 00:23:30.297 "memory_domains": [ 00:23:30.297 { 00:23:30.297 "dma_device_id": "system", 00:23:30.297 "dma_device_type": 1 00:23:30.297 }, 00:23:30.297 { 00:23:30.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:30.297 "dma_device_type": 2 00:23:30.297 } 00:23:30.297 ], 00:23:30.297 "driver_specific": {} 00:23:30.297 } 00:23:30.297 ] 00:23:30.297 04:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:23:30.297 04:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:30.297 04:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:30.297 04:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:30.297 04:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:30.297 04:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:30.297 04:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:30.297 04:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:30.297 04:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:30.297 04:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 
-- # local num_base_bdevs_discovered 00:23:30.297 04:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:30.297 04:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:30.297 04:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:30.556 04:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:30.556 "name": "Existed_Raid", 00:23:30.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:30.556 "strip_size_kb": 64, 00:23:30.556 "state": "configuring", 00:23:30.556 "raid_level": "concat", 00:23:30.556 "superblock": false, 00:23:30.556 "num_base_bdevs": 4, 00:23:30.556 "num_base_bdevs_discovered": 1, 00:23:30.556 "num_base_bdevs_operational": 4, 00:23:30.556 "base_bdevs_list": [ 00:23:30.556 { 00:23:30.556 "name": "BaseBdev1", 00:23:30.556 "uuid": "334f85bb-9c81-438c-bb45-e01fb0724bb6", 00:23:30.556 "is_configured": true, 00:23:30.556 "data_offset": 0, 00:23:30.556 "data_size": 65536 00:23:30.556 }, 00:23:30.556 { 00:23:30.556 "name": "BaseBdev2", 00:23:30.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:30.556 "is_configured": false, 00:23:30.556 "data_offset": 0, 00:23:30.556 "data_size": 0 00:23:30.556 }, 00:23:30.556 { 00:23:30.556 "name": "BaseBdev3", 00:23:30.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:30.556 "is_configured": false, 00:23:30.556 "data_offset": 0, 00:23:30.556 "data_size": 0 00:23:30.556 }, 00:23:30.556 { 00:23:30.556 "name": "BaseBdev4", 00:23:30.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:30.556 "is_configured": false, 00:23:30.556 "data_offset": 0, 00:23:30.556 "data_size": 0 00:23:30.556 } 00:23:30.556 ] 00:23:30.556 }' 00:23:30.556 04:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
00:23:30.556 04:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.122 04:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:31.380 [2024-07-23 04:18:40.012254] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:31.380 [2024-07-23 04:18:40.012310] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:23:31.380 04:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:31.637 [2024-07-23 04:18:40.236950] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:31.637 [2024-07-23 04:18:40.239275] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:31.637 [2024-07-23 04:18:40.239320] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:31.637 [2024-07-23 04:18:40.239335] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:31.637 [2024-07-23 04:18:40.239351] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:31.637 [2024-07-23 04:18:40.239363] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:31.637 [2024-07-23 04:18:40.239386] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:31.637 04:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:23:31.637 04:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:31.637 04:18:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:31.637 04:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:31.637 04:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:31.637 04:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:31.637 04:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:31.637 04:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:31.637 04:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:31.637 04:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:31.637 04:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:31.637 04:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:31.637 04:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:31.637 04:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:31.894 04:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:31.895 "name": "Existed_Raid", 00:23:31.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.895 "strip_size_kb": 64, 00:23:31.895 "state": "configuring", 00:23:31.895 "raid_level": "concat", 00:23:31.895 "superblock": false, 00:23:31.895 "num_base_bdevs": 4, 00:23:31.895 "num_base_bdevs_discovered": 1, 00:23:31.895 "num_base_bdevs_operational": 4, 00:23:31.895 "base_bdevs_list": [ 00:23:31.895 { 
00:23:31.895 "name": "BaseBdev1", 00:23:31.895 "uuid": "334f85bb-9c81-438c-bb45-e01fb0724bb6", 00:23:31.895 "is_configured": true, 00:23:31.895 "data_offset": 0, 00:23:31.895 "data_size": 65536 00:23:31.895 }, 00:23:31.895 { 00:23:31.895 "name": "BaseBdev2", 00:23:31.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.895 "is_configured": false, 00:23:31.895 "data_offset": 0, 00:23:31.895 "data_size": 0 00:23:31.895 }, 00:23:31.895 { 00:23:31.895 "name": "BaseBdev3", 00:23:31.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.895 "is_configured": false, 00:23:31.895 "data_offset": 0, 00:23:31.895 "data_size": 0 00:23:31.895 }, 00:23:31.895 { 00:23:31.895 "name": "BaseBdev4", 00:23:31.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.895 "is_configured": false, 00:23:31.895 "data_offset": 0, 00:23:31.895 "data_size": 0 00:23:31.895 } 00:23:31.895 ] 00:23:31.895 }' 00:23:31.895 04:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:31.895 04:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:32.460 04:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:32.718 [2024-07-23 04:18:41.353218] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:32.718 BaseBdev2 00:23:32.718 04:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:23:32.718 04:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:23:32.718 04:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:32.718 04:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:23:32.718 04:18:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:32.719 04:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:32.719 04:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:32.977 04:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:33.234 [ 00:23:33.234 { 00:23:33.234 "name": "BaseBdev2", 00:23:33.234 "aliases": [ 00:23:33.234 "32c87492-a022-4d0b-9c9c-d7df4d9b47c1" 00:23:33.234 ], 00:23:33.234 "product_name": "Malloc disk", 00:23:33.234 "block_size": 512, 00:23:33.234 "num_blocks": 65536, 00:23:33.235 "uuid": "32c87492-a022-4d0b-9c9c-d7df4d9b47c1", 00:23:33.235 "assigned_rate_limits": { 00:23:33.235 "rw_ios_per_sec": 0, 00:23:33.235 "rw_mbytes_per_sec": 0, 00:23:33.235 "r_mbytes_per_sec": 0, 00:23:33.235 "w_mbytes_per_sec": 0 00:23:33.235 }, 00:23:33.235 "claimed": true, 00:23:33.235 "claim_type": "exclusive_write", 00:23:33.235 "zoned": false, 00:23:33.235 "supported_io_types": { 00:23:33.235 "read": true, 00:23:33.235 "write": true, 00:23:33.235 "unmap": true, 00:23:33.235 "flush": true, 00:23:33.235 "reset": true, 00:23:33.235 "nvme_admin": false, 00:23:33.235 "nvme_io": false, 00:23:33.235 "nvme_io_md": false, 00:23:33.235 "write_zeroes": true, 00:23:33.235 "zcopy": true, 00:23:33.235 "get_zone_info": false, 00:23:33.235 "zone_management": false, 00:23:33.235 "zone_append": false, 00:23:33.235 "compare": false, 00:23:33.235 "compare_and_write": false, 00:23:33.235 "abort": true, 00:23:33.235 "seek_hole": false, 00:23:33.235 "seek_data": false, 00:23:33.235 "copy": true, 00:23:33.235 "nvme_iov_md": false 00:23:33.235 }, 00:23:33.235 "memory_domains": [ 00:23:33.235 { 00:23:33.235 "dma_device_id": "system", 
00:23:33.235 "dma_device_type": 1 00:23:33.235 }, 00:23:33.235 { 00:23:33.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:33.235 "dma_device_type": 2 00:23:33.235 } 00:23:33.235 ], 00:23:33.235 "driver_specific": {} 00:23:33.235 } 00:23:33.235 ] 00:23:33.235 04:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:23:33.235 04:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:33.235 04:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:33.235 04:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:33.235 04:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:33.235 04:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:33.235 04:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:33.235 04:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:33.235 04:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:33.235 04:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:33.235 04:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:33.235 04:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:33.235 04:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:33.235 04:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:33.235 04:18:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:33.493 04:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:33.493 "name": "Existed_Raid", 00:23:33.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:33.493 "strip_size_kb": 64, 00:23:33.493 "state": "configuring", 00:23:33.493 "raid_level": "concat", 00:23:33.493 "superblock": false, 00:23:33.493 "num_base_bdevs": 4, 00:23:33.493 "num_base_bdevs_discovered": 2, 00:23:33.493 "num_base_bdevs_operational": 4, 00:23:33.493 "base_bdevs_list": [ 00:23:33.493 { 00:23:33.493 "name": "BaseBdev1", 00:23:33.493 "uuid": "334f85bb-9c81-438c-bb45-e01fb0724bb6", 00:23:33.493 "is_configured": true, 00:23:33.493 "data_offset": 0, 00:23:33.493 "data_size": 65536 00:23:33.493 }, 00:23:33.493 { 00:23:33.493 "name": "BaseBdev2", 00:23:33.493 "uuid": "32c87492-a022-4d0b-9c9c-d7df4d9b47c1", 00:23:33.493 "is_configured": true, 00:23:33.493 "data_offset": 0, 00:23:33.493 "data_size": 65536 00:23:33.493 }, 00:23:33.493 { 00:23:33.493 "name": "BaseBdev3", 00:23:33.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:33.493 "is_configured": false, 00:23:33.493 "data_offset": 0, 00:23:33.493 "data_size": 0 00:23:33.493 }, 00:23:33.493 { 00:23:33.493 "name": "BaseBdev4", 00:23:33.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:33.493 "is_configured": false, 00:23:33.493 "data_offset": 0, 00:23:33.493 "data_size": 0 00:23:33.493 } 00:23:33.493 ] 00:23:33.493 }' 00:23:33.493 04:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:33.493 04:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.058 04:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:34.316 [2024-07-23 04:18:42.850205] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:34.316 BaseBdev3 00:23:34.316 04:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:23:34.316 04:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:23:34.316 04:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:34.316 04:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:23:34.316 04:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:34.316 04:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:34.316 04:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:34.316 04:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:34.574 [ 00:23:34.574 { 00:23:34.574 "name": "BaseBdev3", 00:23:34.574 "aliases": [ 00:23:34.574 "ca21e340-89be-4b2a-a43c-0eae5c1bbafa" 00:23:34.574 ], 00:23:34.574 "product_name": "Malloc disk", 00:23:34.574 "block_size": 512, 00:23:34.574 "num_blocks": 65536, 00:23:34.574 "uuid": "ca21e340-89be-4b2a-a43c-0eae5c1bbafa", 00:23:34.574 "assigned_rate_limits": { 00:23:34.574 "rw_ios_per_sec": 0, 00:23:34.574 "rw_mbytes_per_sec": 0, 00:23:34.574 "r_mbytes_per_sec": 0, 00:23:34.574 "w_mbytes_per_sec": 0 00:23:34.574 }, 00:23:34.574 "claimed": true, 00:23:34.574 "claim_type": "exclusive_write", 00:23:34.574 "zoned": false, 00:23:34.574 "supported_io_types": { 00:23:34.574 "read": true, 00:23:34.574 "write": true, 00:23:34.574 "unmap": true, 00:23:34.574 "flush": true, 00:23:34.574 
"reset": true, 00:23:34.574 "nvme_admin": false, 00:23:34.574 "nvme_io": false, 00:23:34.574 "nvme_io_md": false, 00:23:34.574 "write_zeroes": true, 00:23:34.574 "zcopy": true, 00:23:34.574 "get_zone_info": false, 00:23:34.574 "zone_management": false, 00:23:34.574 "zone_append": false, 00:23:34.574 "compare": false, 00:23:34.574 "compare_and_write": false, 00:23:34.574 "abort": true, 00:23:34.574 "seek_hole": false, 00:23:34.574 "seek_data": false, 00:23:34.574 "copy": true, 00:23:34.574 "nvme_iov_md": false 00:23:34.574 }, 00:23:34.574 "memory_domains": [ 00:23:34.574 { 00:23:34.574 "dma_device_id": "system", 00:23:34.574 "dma_device_type": 1 00:23:34.574 }, 00:23:34.574 { 00:23:34.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:34.574 "dma_device_type": 2 00:23:34.574 } 00:23:34.574 ], 00:23:34.574 "driver_specific": {} 00:23:34.574 } 00:23:34.574 ] 00:23:34.574 04:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:23:34.574 04:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:34.574 04:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:34.574 04:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:34.574 04:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:34.574 04:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:34.574 04:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:34.574 04:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:34.574 04:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:34.574 04:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:23:34.574 04:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:34.574 04:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:34.574 04:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:34.574 04:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:34.574 04:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:34.832 04:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:34.832 "name": "Existed_Raid", 00:23:34.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:34.832 "strip_size_kb": 64, 00:23:34.832 "state": "configuring", 00:23:34.832 "raid_level": "concat", 00:23:34.832 "superblock": false, 00:23:34.832 "num_base_bdevs": 4, 00:23:34.832 "num_base_bdevs_discovered": 3, 00:23:34.832 "num_base_bdevs_operational": 4, 00:23:34.832 "base_bdevs_list": [ 00:23:34.832 { 00:23:34.832 "name": "BaseBdev1", 00:23:34.832 "uuid": "334f85bb-9c81-438c-bb45-e01fb0724bb6", 00:23:34.832 "is_configured": true, 00:23:34.832 "data_offset": 0, 00:23:34.832 "data_size": 65536 00:23:34.832 }, 00:23:34.832 { 00:23:34.832 "name": "BaseBdev2", 00:23:34.832 "uuid": "32c87492-a022-4d0b-9c9c-d7df4d9b47c1", 00:23:34.832 "is_configured": true, 00:23:34.832 "data_offset": 0, 00:23:34.832 "data_size": 65536 00:23:34.832 }, 00:23:34.832 { 00:23:34.832 "name": "BaseBdev3", 00:23:34.832 "uuid": "ca21e340-89be-4b2a-a43c-0eae5c1bbafa", 00:23:34.832 "is_configured": true, 00:23:34.832 "data_offset": 0, 00:23:34.832 "data_size": 65536 00:23:34.832 }, 00:23:34.832 { 00:23:34.832 "name": "BaseBdev4", 00:23:34.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:34.832 "is_configured": 
false, 00:23:34.832 "data_offset": 0, 00:23:34.832 "data_size": 0 00:23:34.832 } 00:23:34.832 ] 00:23:34.832 }' 00:23:34.832 04:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:34.832 04:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.399 04:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:35.658 [2024-07-23 04:18:44.387380] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:35.658 [2024-07-23 04:18:44.387431] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:23:35.658 [2024-07-23 04:18:44.387444] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:23:35.658 [2024-07-23 04:18:44.387764] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:23:35.658 [2024-07-23 04:18:44.388003] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:23:35.658 [2024-07-23 04:18:44.388021] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:23:35.658 [2024-07-23 04:18:44.388329] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:35.658 BaseBdev4 00:23:35.658 04:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:23:35.658 04:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:23:35.658 04:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:35.658 04:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:23:35.658 04:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- 
# [[ -z '' ]] 00:23:35.658 04:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:35.658 04:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:35.917 04:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:36.176 [ 00:23:36.176 { 00:23:36.176 "name": "BaseBdev4", 00:23:36.176 "aliases": [ 00:23:36.176 "94599027-f098-4c61-af10-8f571b4a6def" 00:23:36.176 ], 00:23:36.176 "product_name": "Malloc disk", 00:23:36.176 "block_size": 512, 00:23:36.177 "num_blocks": 65536, 00:23:36.177 "uuid": "94599027-f098-4c61-af10-8f571b4a6def", 00:23:36.177 "assigned_rate_limits": { 00:23:36.177 "rw_ios_per_sec": 0, 00:23:36.177 "rw_mbytes_per_sec": 0, 00:23:36.177 "r_mbytes_per_sec": 0, 00:23:36.177 "w_mbytes_per_sec": 0 00:23:36.177 }, 00:23:36.177 "claimed": true, 00:23:36.177 "claim_type": "exclusive_write", 00:23:36.177 "zoned": false, 00:23:36.177 "supported_io_types": { 00:23:36.177 "read": true, 00:23:36.177 "write": true, 00:23:36.177 "unmap": true, 00:23:36.177 "flush": true, 00:23:36.177 "reset": true, 00:23:36.177 "nvme_admin": false, 00:23:36.177 "nvme_io": false, 00:23:36.177 "nvme_io_md": false, 00:23:36.177 "write_zeroes": true, 00:23:36.177 "zcopy": true, 00:23:36.177 "get_zone_info": false, 00:23:36.177 "zone_management": false, 00:23:36.177 "zone_append": false, 00:23:36.177 "compare": false, 00:23:36.177 "compare_and_write": false, 00:23:36.177 "abort": true, 00:23:36.177 "seek_hole": false, 00:23:36.177 "seek_data": false, 00:23:36.177 "copy": true, 00:23:36.177 "nvme_iov_md": false 00:23:36.177 }, 00:23:36.177 "memory_domains": [ 00:23:36.177 { 00:23:36.177 "dma_device_id": "system", 00:23:36.177 "dma_device_type": 1 
00:23:36.177 }, 00:23:36.177 { 00:23:36.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:36.177 "dma_device_type": 2 00:23:36.177 } 00:23:36.177 ], 00:23:36.177 "driver_specific": {} 00:23:36.177 } 00:23:36.177 ] 00:23:36.177 04:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:23:36.177 04:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:36.177 04:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:36.177 04:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:23:36.177 04:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:36.177 04:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:36.177 04:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:36.177 04:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:36.177 04:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:36.177 04:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:36.177 04:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:36.177 04:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:36.177 04:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:36.177 04:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.177 04:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:23:36.434 04:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:36.434 "name": "Existed_Raid", 00:23:36.434 "uuid": "2fbc64ec-99cb-4647-859a-bba9338bf5c1", 00:23:36.434 "strip_size_kb": 64, 00:23:36.434 "state": "online", 00:23:36.434 "raid_level": "concat", 00:23:36.434 "superblock": false, 00:23:36.434 "num_base_bdevs": 4, 00:23:36.434 "num_base_bdevs_discovered": 4, 00:23:36.434 "num_base_bdevs_operational": 4, 00:23:36.434 "base_bdevs_list": [ 00:23:36.434 { 00:23:36.434 "name": "BaseBdev1", 00:23:36.434 "uuid": "334f85bb-9c81-438c-bb45-e01fb0724bb6", 00:23:36.434 "is_configured": true, 00:23:36.434 "data_offset": 0, 00:23:36.434 "data_size": 65536 00:23:36.434 }, 00:23:36.434 { 00:23:36.434 "name": "BaseBdev2", 00:23:36.434 "uuid": "32c87492-a022-4d0b-9c9c-d7df4d9b47c1", 00:23:36.434 "is_configured": true, 00:23:36.434 "data_offset": 0, 00:23:36.434 "data_size": 65536 00:23:36.434 }, 00:23:36.434 { 00:23:36.434 "name": "BaseBdev3", 00:23:36.434 "uuid": "ca21e340-89be-4b2a-a43c-0eae5c1bbafa", 00:23:36.434 "is_configured": true, 00:23:36.434 "data_offset": 0, 00:23:36.434 "data_size": 65536 00:23:36.434 }, 00:23:36.434 { 00:23:36.434 "name": "BaseBdev4", 00:23:36.434 "uuid": "94599027-f098-4c61-af10-8f571b4a6def", 00:23:36.434 "is_configured": true, 00:23:36.434 "data_offset": 0, 00:23:36.434 "data_size": 65536 00:23:36.434 } 00:23:36.434 ] 00:23:36.434 }' 00:23:36.434 04:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:36.434 04:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.999 04:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:23:36.999 04:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:23:36.999 04:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local 
raid_bdev_info 00:23:36.999 04:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:36.999 04:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:36.999 04:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:23:36.999 04:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:23:36.999 04:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:37.257 [2024-07-23 04:18:45.883909] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:37.257 04:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:37.257 "name": "Existed_Raid", 00:23:37.257 "aliases": [ 00:23:37.257 "2fbc64ec-99cb-4647-859a-bba9338bf5c1" 00:23:37.257 ], 00:23:37.257 "product_name": "Raid Volume", 00:23:37.257 "block_size": 512, 00:23:37.257 "num_blocks": 262144, 00:23:37.257 "uuid": "2fbc64ec-99cb-4647-859a-bba9338bf5c1", 00:23:37.257 "assigned_rate_limits": { 00:23:37.257 "rw_ios_per_sec": 0, 00:23:37.257 "rw_mbytes_per_sec": 0, 00:23:37.257 "r_mbytes_per_sec": 0, 00:23:37.257 "w_mbytes_per_sec": 0 00:23:37.257 }, 00:23:37.257 "claimed": false, 00:23:37.257 "zoned": false, 00:23:37.257 "supported_io_types": { 00:23:37.257 "read": true, 00:23:37.257 "write": true, 00:23:37.257 "unmap": true, 00:23:37.257 "flush": true, 00:23:37.257 "reset": true, 00:23:37.257 "nvme_admin": false, 00:23:37.257 "nvme_io": false, 00:23:37.257 "nvme_io_md": false, 00:23:37.257 "write_zeroes": true, 00:23:37.257 "zcopy": false, 00:23:37.257 "get_zone_info": false, 00:23:37.257 "zone_management": false, 00:23:37.257 "zone_append": false, 00:23:37.257 "compare": false, 00:23:37.257 "compare_and_write": false, 00:23:37.257 "abort": false, 00:23:37.257 "seek_hole": 
false, 00:23:37.257 "seek_data": false, 00:23:37.257 "copy": false, 00:23:37.257 "nvme_iov_md": false 00:23:37.257 }, 00:23:37.257 "memory_domains": [ 00:23:37.257 { 00:23:37.257 "dma_device_id": "system", 00:23:37.257 "dma_device_type": 1 00:23:37.257 }, 00:23:37.257 { 00:23:37.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:37.257 "dma_device_type": 2 00:23:37.257 }, 00:23:37.257 { 00:23:37.257 "dma_device_id": "system", 00:23:37.257 "dma_device_type": 1 00:23:37.257 }, 00:23:37.257 { 00:23:37.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:37.257 "dma_device_type": 2 00:23:37.257 }, 00:23:37.257 { 00:23:37.257 "dma_device_id": "system", 00:23:37.257 "dma_device_type": 1 00:23:37.257 }, 00:23:37.257 { 00:23:37.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:37.257 "dma_device_type": 2 00:23:37.257 }, 00:23:37.257 { 00:23:37.257 "dma_device_id": "system", 00:23:37.257 "dma_device_type": 1 00:23:37.257 }, 00:23:37.257 { 00:23:37.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:37.257 "dma_device_type": 2 00:23:37.257 } 00:23:37.257 ], 00:23:37.257 "driver_specific": { 00:23:37.257 "raid": { 00:23:37.257 "uuid": "2fbc64ec-99cb-4647-859a-bba9338bf5c1", 00:23:37.257 "strip_size_kb": 64, 00:23:37.257 "state": "online", 00:23:37.257 "raid_level": "concat", 00:23:37.257 "superblock": false, 00:23:37.257 "num_base_bdevs": 4, 00:23:37.257 "num_base_bdevs_discovered": 4, 00:23:37.257 "num_base_bdevs_operational": 4, 00:23:37.257 "base_bdevs_list": [ 00:23:37.257 { 00:23:37.257 "name": "BaseBdev1", 00:23:37.257 "uuid": "334f85bb-9c81-438c-bb45-e01fb0724bb6", 00:23:37.257 "is_configured": true, 00:23:37.257 "data_offset": 0, 00:23:37.257 "data_size": 65536 00:23:37.257 }, 00:23:37.257 { 00:23:37.257 "name": "BaseBdev2", 00:23:37.257 "uuid": "32c87492-a022-4d0b-9c9c-d7df4d9b47c1", 00:23:37.257 "is_configured": true, 00:23:37.257 "data_offset": 0, 00:23:37.257 "data_size": 65536 00:23:37.257 }, 00:23:37.258 { 00:23:37.258 "name": "BaseBdev3", 00:23:37.258 "uuid": 
"ca21e340-89be-4b2a-a43c-0eae5c1bbafa", 00:23:37.258 "is_configured": true, 00:23:37.258 "data_offset": 0, 00:23:37.258 "data_size": 65536 00:23:37.258 }, 00:23:37.258 { 00:23:37.258 "name": "BaseBdev4", 00:23:37.258 "uuid": "94599027-f098-4c61-af10-8f571b4a6def", 00:23:37.258 "is_configured": true, 00:23:37.258 "data_offset": 0, 00:23:37.258 "data_size": 65536 00:23:37.258 } 00:23:37.258 ] 00:23:37.258 } 00:23:37.258 } 00:23:37.258 }' 00:23:37.258 04:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:37.258 04:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:23:37.258 BaseBdev2 00:23:37.258 BaseBdev3 00:23:37.258 BaseBdev4' 00:23:37.258 04:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:37.258 04:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:23:37.258 04:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:37.516 04:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:37.516 "name": "BaseBdev1", 00:23:37.516 "aliases": [ 00:23:37.516 "334f85bb-9c81-438c-bb45-e01fb0724bb6" 00:23:37.516 ], 00:23:37.516 "product_name": "Malloc disk", 00:23:37.516 "block_size": 512, 00:23:37.516 "num_blocks": 65536, 00:23:37.516 "uuid": "334f85bb-9c81-438c-bb45-e01fb0724bb6", 00:23:37.516 "assigned_rate_limits": { 00:23:37.516 "rw_ios_per_sec": 0, 00:23:37.516 "rw_mbytes_per_sec": 0, 00:23:37.516 "r_mbytes_per_sec": 0, 00:23:37.516 "w_mbytes_per_sec": 0 00:23:37.516 }, 00:23:37.516 "claimed": true, 00:23:37.516 "claim_type": "exclusive_write", 00:23:37.516 "zoned": false, 00:23:37.516 "supported_io_types": { 00:23:37.516 "read": true, 00:23:37.516 
"write": true, 00:23:37.516 "unmap": true, 00:23:37.516 "flush": true, 00:23:37.516 "reset": true, 00:23:37.516 "nvme_admin": false, 00:23:37.516 "nvme_io": false, 00:23:37.516 "nvme_io_md": false, 00:23:37.516 "write_zeroes": true, 00:23:37.516 "zcopy": true, 00:23:37.516 "get_zone_info": false, 00:23:37.516 "zone_management": false, 00:23:37.516 "zone_append": false, 00:23:37.516 "compare": false, 00:23:37.516 "compare_and_write": false, 00:23:37.516 "abort": true, 00:23:37.516 "seek_hole": false, 00:23:37.516 "seek_data": false, 00:23:37.516 "copy": true, 00:23:37.516 "nvme_iov_md": false 00:23:37.516 }, 00:23:37.516 "memory_domains": [ 00:23:37.516 { 00:23:37.516 "dma_device_id": "system", 00:23:37.516 "dma_device_type": 1 00:23:37.516 }, 00:23:37.516 { 00:23:37.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:37.516 "dma_device_type": 2 00:23:37.516 } 00:23:37.516 ], 00:23:37.516 "driver_specific": {} 00:23:37.516 }' 00:23:37.516 04:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:37.516 04:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:37.516 04:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:37.516 04:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:37.516 04:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:37.774 04:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:37.774 04:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:37.774 04:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:37.774 04:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:37.774 04:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:37.774 04:18:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:37.774 04:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:37.774 04:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:37.774 04:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:37.774 04:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:23:38.032 04:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:38.032 "name": "BaseBdev2", 00:23:38.032 "aliases": [ 00:23:38.032 "32c87492-a022-4d0b-9c9c-d7df4d9b47c1" 00:23:38.032 ], 00:23:38.032 "product_name": "Malloc disk", 00:23:38.032 "block_size": 512, 00:23:38.032 "num_blocks": 65536, 00:23:38.032 "uuid": "32c87492-a022-4d0b-9c9c-d7df4d9b47c1", 00:23:38.032 "assigned_rate_limits": { 00:23:38.032 "rw_ios_per_sec": 0, 00:23:38.032 "rw_mbytes_per_sec": 0, 00:23:38.032 "r_mbytes_per_sec": 0, 00:23:38.032 "w_mbytes_per_sec": 0 00:23:38.032 }, 00:23:38.032 "claimed": true, 00:23:38.032 "claim_type": "exclusive_write", 00:23:38.032 "zoned": false, 00:23:38.032 "supported_io_types": { 00:23:38.032 "read": true, 00:23:38.032 "write": true, 00:23:38.032 "unmap": true, 00:23:38.032 "flush": true, 00:23:38.032 "reset": true, 00:23:38.032 "nvme_admin": false, 00:23:38.032 "nvme_io": false, 00:23:38.032 "nvme_io_md": false, 00:23:38.032 "write_zeroes": true, 00:23:38.032 "zcopy": true, 00:23:38.032 "get_zone_info": false, 00:23:38.032 "zone_management": false, 00:23:38.032 "zone_append": false, 00:23:38.032 "compare": false, 00:23:38.032 "compare_and_write": false, 00:23:38.032 "abort": true, 00:23:38.032 "seek_hole": false, 00:23:38.032 "seek_data": false, 00:23:38.032 "copy": true, 00:23:38.032 "nvme_iov_md": false 00:23:38.033 }, 
00:23:38.033 "memory_domains": [ 00:23:38.033 { 00:23:38.033 "dma_device_id": "system", 00:23:38.033 "dma_device_type": 1 00:23:38.033 }, 00:23:38.033 { 00:23:38.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:38.033 "dma_device_type": 2 00:23:38.033 } 00:23:38.033 ], 00:23:38.033 "driver_specific": {} 00:23:38.033 }' 00:23:38.033 04:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:38.033 04:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:38.290 04:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:38.290 04:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:38.290 04:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:38.290 04:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:38.290 04:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:38.290 04:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:38.290 04:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:38.290 04:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:38.290 04:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:38.549 04:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:38.549 04:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:38.549 04:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:38.549 04:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:23:38.549 04:18:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:38.549 "name": "BaseBdev3", 00:23:38.549 "aliases": [ 00:23:38.549 "ca21e340-89be-4b2a-a43c-0eae5c1bbafa" 00:23:38.549 ], 00:23:38.549 "product_name": "Malloc disk", 00:23:38.549 "block_size": 512, 00:23:38.549 "num_blocks": 65536, 00:23:38.549 "uuid": "ca21e340-89be-4b2a-a43c-0eae5c1bbafa", 00:23:38.549 "assigned_rate_limits": { 00:23:38.549 "rw_ios_per_sec": 0, 00:23:38.549 "rw_mbytes_per_sec": 0, 00:23:38.549 "r_mbytes_per_sec": 0, 00:23:38.549 "w_mbytes_per_sec": 0 00:23:38.549 }, 00:23:38.549 "claimed": true, 00:23:38.549 "claim_type": "exclusive_write", 00:23:38.549 "zoned": false, 00:23:38.549 "supported_io_types": { 00:23:38.549 "read": true, 00:23:38.549 "write": true, 00:23:38.549 "unmap": true, 00:23:38.549 "flush": true, 00:23:38.549 "reset": true, 00:23:38.549 "nvme_admin": false, 00:23:38.549 "nvme_io": false, 00:23:38.549 "nvme_io_md": false, 00:23:38.549 "write_zeroes": true, 00:23:38.549 "zcopy": true, 00:23:38.549 "get_zone_info": false, 00:23:38.549 "zone_management": false, 00:23:38.549 "zone_append": false, 00:23:38.549 "compare": false, 00:23:38.549 "compare_and_write": false, 00:23:38.549 "abort": true, 00:23:38.549 "seek_hole": false, 00:23:38.549 "seek_data": false, 00:23:38.549 "copy": true, 00:23:38.549 "nvme_iov_md": false 00:23:38.549 }, 00:23:38.549 "memory_domains": [ 00:23:38.549 { 00:23:38.549 "dma_device_id": "system", 00:23:38.549 "dma_device_type": 1 00:23:38.549 }, 00:23:38.549 { 00:23:38.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:38.549 "dma_device_type": 2 00:23:38.549 } 00:23:38.549 ], 00:23:38.549 "driver_specific": {} 00:23:38.549 }' 00:23:38.549 04:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:38.807 04:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:38.807 04:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 
== 512 ]] 00:23:38.807 04:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:38.807 04:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:38.807 04:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:38.807 04:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:38.807 04:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:38.807 04:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:38.807 04:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:39.065 04:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:39.065 04:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:39.065 04:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:39.065 04:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:39.065 04:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:23:39.324 04:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:39.324 "name": "BaseBdev4", 00:23:39.324 "aliases": [ 00:23:39.324 "94599027-f098-4c61-af10-8f571b4a6def" 00:23:39.324 ], 00:23:39.324 "product_name": "Malloc disk", 00:23:39.324 "block_size": 512, 00:23:39.324 "num_blocks": 65536, 00:23:39.324 "uuid": "94599027-f098-4c61-af10-8f571b4a6def", 00:23:39.324 "assigned_rate_limits": { 00:23:39.324 "rw_ios_per_sec": 0, 00:23:39.324 "rw_mbytes_per_sec": 0, 00:23:39.324 "r_mbytes_per_sec": 0, 00:23:39.324 "w_mbytes_per_sec": 0 00:23:39.324 }, 00:23:39.324 "claimed": true, 00:23:39.324 
"claim_type": "exclusive_write", 00:23:39.324 "zoned": false, 00:23:39.324 "supported_io_types": { 00:23:39.324 "read": true, 00:23:39.324 "write": true, 00:23:39.324 "unmap": true, 00:23:39.324 "flush": true, 00:23:39.324 "reset": true, 00:23:39.324 "nvme_admin": false, 00:23:39.324 "nvme_io": false, 00:23:39.324 "nvme_io_md": false, 00:23:39.324 "write_zeroes": true, 00:23:39.324 "zcopy": true, 00:23:39.324 "get_zone_info": false, 00:23:39.324 "zone_management": false, 00:23:39.324 "zone_append": false, 00:23:39.324 "compare": false, 00:23:39.324 "compare_and_write": false, 00:23:39.324 "abort": true, 00:23:39.324 "seek_hole": false, 00:23:39.324 "seek_data": false, 00:23:39.324 "copy": true, 00:23:39.324 "nvme_iov_md": false 00:23:39.324 }, 00:23:39.324 "memory_domains": [ 00:23:39.324 { 00:23:39.324 "dma_device_id": "system", 00:23:39.324 "dma_device_type": 1 00:23:39.324 }, 00:23:39.324 { 00:23:39.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:39.324 "dma_device_type": 2 00:23:39.324 } 00:23:39.324 ], 00:23:39.324 "driver_specific": {} 00:23:39.324 }' 00:23:39.324 04:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:39.324 04:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:39.324 04:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:39.324 04:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:39.324 04:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:39.324 04:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:39.324 04:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:39.324 04:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:39.582 04:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == 
null ]] 00:23:39.582 04:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:39.582 04:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:39.582 04:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:39.582 04:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:39.840 [2024-07-23 04:18:48.438485] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:39.840 [2024-07-23 04:18:48.438521] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:39.840 [2024-07-23 04:18:48.438579] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:39.840 04:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:23:39.840 04:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:23:39.840 04:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:39.840 04:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:23:39.840 04:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:23:39.840 04:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:23:39.840 04:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:39.840 04:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:23:39.840 04:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:39.840 04:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 
00:23:39.840 04:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:39.840 04:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:39.840 04:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:39.840 04:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:39.840 04:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:39.840 04:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:39.840 04:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:40.099 04:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:40.099 "name": "Existed_Raid", 00:23:40.099 "uuid": "2fbc64ec-99cb-4647-859a-bba9338bf5c1", 00:23:40.099 "strip_size_kb": 64, 00:23:40.099 "state": "offline", 00:23:40.099 "raid_level": "concat", 00:23:40.099 "superblock": false, 00:23:40.099 "num_base_bdevs": 4, 00:23:40.099 "num_base_bdevs_discovered": 3, 00:23:40.099 "num_base_bdevs_operational": 3, 00:23:40.099 "base_bdevs_list": [ 00:23:40.099 { 00:23:40.099 "name": null, 00:23:40.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:40.099 "is_configured": false, 00:23:40.099 "data_offset": 0, 00:23:40.099 "data_size": 65536 00:23:40.099 }, 00:23:40.099 { 00:23:40.099 "name": "BaseBdev2", 00:23:40.099 "uuid": "32c87492-a022-4d0b-9c9c-d7df4d9b47c1", 00:23:40.099 "is_configured": true, 00:23:40.099 "data_offset": 0, 00:23:40.099 "data_size": 65536 00:23:40.099 }, 00:23:40.099 { 00:23:40.099 "name": "BaseBdev3", 00:23:40.099 "uuid": "ca21e340-89be-4b2a-a43c-0eae5c1bbafa", 00:23:40.099 "is_configured": true, 00:23:40.099 
"data_offset": 0, 00:23:40.099 "data_size": 65536 00:23:40.099 }, 00:23:40.099 { 00:23:40.099 "name": "BaseBdev4", 00:23:40.099 "uuid": "94599027-f098-4c61-af10-8f571b4a6def", 00:23:40.099 "is_configured": true, 00:23:40.099 "data_offset": 0, 00:23:40.099 "data_size": 65536 00:23:40.099 } 00:23:40.099 ] 00:23:40.099 }' 00:23:40.099 04:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:40.099 04:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:40.665 04:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:23:40.665 04:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:40.665 04:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:40.665 04:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:40.923 04:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:40.923 04:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:40.924 04:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:23:40.924 [2024-07-23 04:18:49.703135] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:41.182 04:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:41.182 04:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:41.182 04:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:23:41.182 04:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:41.440 04:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:41.440 04:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:41.440 04:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:23:42.007 [2024-07-23 04:18:50.575275] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:42.007 04:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:42.007 04:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:42.007 04:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:42.007 04:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:42.265 04:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:42.265 04:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:42.265 04:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:23:42.524 [2024-07-23 04:18:51.163277] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:23:42.524 [2024-07-23 04:18:51.163337] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:23:42.782 04:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:42.782 
04:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:42.782 04:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:42.782 04:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:23:42.782 04:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:23:42.782 04:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:23:42.782 04:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:23:42.782 04:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:23:42.782 04:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:42.782 04:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:43.040 BaseBdev2 00:23:43.040 04:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:23:43.040 04:18:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:23:43.040 04:18:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:43.040 04:18:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:23:43.040 04:18:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:43.040 04:18:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:43.040 04:18:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:43.298 04:18:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:43.557 [ 00:23:43.557 { 00:23:43.557 "name": "BaseBdev2", 00:23:43.557 "aliases": [ 00:23:43.557 "ba9e2cf0-2ad7-420b-9ce4-bebf95b0e55b" 00:23:43.557 ], 00:23:43.557 "product_name": "Malloc disk", 00:23:43.557 "block_size": 512, 00:23:43.557 "num_blocks": 65536, 00:23:43.557 "uuid": "ba9e2cf0-2ad7-420b-9ce4-bebf95b0e55b", 00:23:43.557 "assigned_rate_limits": { 00:23:43.557 "rw_ios_per_sec": 0, 00:23:43.557 "rw_mbytes_per_sec": 0, 00:23:43.557 "r_mbytes_per_sec": 0, 00:23:43.557 "w_mbytes_per_sec": 0 00:23:43.557 }, 00:23:43.557 "claimed": false, 00:23:43.557 "zoned": false, 00:23:43.557 "supported_io_types": { 00:23:43.557 "read": true, 00:23:43.557 "write": true, 00:23:43.557 "unmap": true, 00:23:43.557 "flush": true, 00:23:43.557 "reset": true, 00:23:43.557 "nvme_admin": false, 00:23:43.557 "nvme_io": false, 00:23:43.557 "nvme_io_md": false, 00:23:43.557 "write_zeroes": true, 00:23:43.557 "zcopy": true, 00:23:43.557 "get_zone_info": false, 00:23:43.557 "zone_management": false, 00:23:43.557 "zone_append": false, 00:23:43.557 "compare": false, 00:23:43.557 "compare_and_write": false, 00:23:43.557 "abort": true, 00:23:43.557 "seek_hole": false, 00:23:43.557 "seek_data": false, 00:23:43.557 "copy": true, 00:23:43.557 "nvme_iov_md": false 00:23:43.557 }, 00:23:43.557 "memory_domains": [ 00:23:43.557 { 00:23:43.557 "dma_device_id": "system", 00:23:43.557 "dma_device_type": 1 00:23:43.557 }, 00:23:43.557 { 00:23:43.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:43.557 "dma_device_type": 2 00:23:43.557 } 00:23:43.557 ], 00:23:43.557 "driver_specific": {} 00:23:43.557 } 00:23:43.557 ] 00:23:43.557 04:18:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 
00:23:43.557 04:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:43.557 04:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:43.557 04:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:43.815 BaseBdev3 00:23:43.815 04:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:23:43.815 04:18:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:23:43.816 04:18:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:43.816 04:18:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:23:43.816 04:18:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:43.816 04:18:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:43.816 04:18:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:44.074 04:18:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:44.332 [ 00:23:44.332 { 00:23:44.332 "name": "BaseBdev3", 00:23:44.332 "aliases": [ 00:23:44.332 "4b25e60e-c1a2-4d13-8aba-185994af22c9" 00:23:44.332 ], 00:23:44.332 "product_name": "Malloc disk", 00:23:44.332 "block_size": 512, 00:23:44.332 "num_blocks": 65536, 00:23:44.332 "uuid": "4b25e60e-c1a2-4d13-8aba-185994af22c9", 00:23:44.332 "assigned_rate_limits": { 00:23:44.332 "rw_ios_per_sec": 0, 00:23:44.332 "rw_mbytes_per_sec": 0, 00:23:44.332 
"r_mbytes_per_sec": 0, 00:23:44.332 "w_mbytes_per_sec": 0 00:23:44.332 }, 00:23:44.332 "claimed": false, 00:23:44.332 "zoned": false, 00:23:44.332 "supported_io_types": { 00:23:44.332 "read": true, 00:23:44.332 "write": true, 00:23:44.332 "unmap": true, 00:23:44.332 "flush": true, 00:23:44.332 "reset": true, 00:23:44.332 "nvme_admin": false, 00:23:44.332 "nvme_io": false, 00:23:44.332 "nvme_io_md": false, 00:23:44.332 "write_zeroes": true, 00:23:44.332 "zcopy": true, 00:23:44.332 "get_zone_info": false, 00:23:44.332 "zone_management": false, 00:23:44.332 "zone_append": false, 00:23:44.332 "compare": false, 00:23:44.332 "compare_and_write": false, 00:23:44.332 "abort": true, 00:23:44.332 "seek_hole": false, 00:23:44.332 "seek_data": false, 00:23:44.332 "copy": true, 00:23:44.332 "nvme_iov_md": false 00:23:44.332 }, 00:23:44.332 "memory_domains": [ 00:23:44.332 { 00:23:44.332 "dma_device_id": "system", 00:23:44.332 "dma_device_type": 1 00:23:44.332 }, 00:23:44.332 { 00:23:44.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:44.332 "dma_device_type": 2 00:23:44.332 } 00:23:44.332 ], 00:23:44.332 "driver_specific": {} 00:23:44.332 } 00:23:44.332 ] 00:23:44.332 04:18:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:23:44.332 04:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:44.332 04:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:44.332 04:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:44.591 BaseBdev4 00:23:44.591 04:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:23:44.591 04:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:23:44.591 04:18:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:44.591 04:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:23:44.591 04:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:44.591 04:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:44.591 04:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:44.850 04:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:45.108 [ 00:23:45.108 { 00:23:45.108 "name": "BaseBdev4", 00:23:45.108 "aliases": [ 00:23:45.108 "0fbbd443-8cdf-45de-b064-8b6cc649fcc7" 00:23:45.108 ], 00:23:45.108 "product_name": "Malloc disk", 00:23:45.108 "block_size": 512, 00:23:45.108 "num_blocks": 65536, 00:23:45.108 "uuid": "0fbbd443-8cdf-45de-b064-8b6cc649fcc7", 00:23:45.108 "assigned_rate_limits": { 00:23:45.108 "rw_ios_per_sec": 0, 00:23:45.108 "rw_mbytes_per_sec": 0, 00:23:45.108 "r_mbytes_per_sec": 0, 00:23:45.108 "w_mbytes_per_sec": 0 00:23:45.108 }, 00:23:45.108 "claimed": false, 00:23:45.108 "zoned": false, 00:23:45.108 "supported_io_types": { 00:23:45.108 "read": true, 00:23:45.108 "write": true, 00:23:45.108 "unmap": true, 00:23:45.108 "flush": true, 00:23:45.108 "reset": true, 00:23:45.108 "nvme_admin": false, 00:23:45.108 "nvme_io": false, 00:23:45.108 "nvme_io_md": false, 00:23:45.108 "write_zeroes": true, 00:23:45.108 "zcopy": true, 00:23:45.108 "get_zone_info": false, 00:23:45.108 "zone_management": false, 00:23:45.108 "zone_append": false, 00:23:45.108 "compare": false, 00:23:45.108 "compare_and_write": false, 00:23:45.108 "abort": true, 00:23:45.108 
"seek_hole": false, 00:23:45.108 "seek_data": false, 00:23:45.108 "copy": true, 00:23:45.108 "nvme_iov_md": false 00:23:45.108 }, 00:23:45.108 "memory_domains": [ 00:23:45.108 { 00:23:45.108 "dma_device_id": "system", 00:23:45.108 "dma_device_type": 1 00:23:45.108 }, 00:23:45.108 { 00:23:45.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:45.108 "dma_device_type": 2 00:23:45.108 } 00:23:45.108 ], 00:23:45.108 "driver_specific": {} 00:23:45.108 } 00:23:45.108 ] 00:23:45.108 04:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:23:45.108 04:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:45.108 04:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:45.108 04:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:45.367 [2024-07-23 04:18:53.923826] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:45.367 [2024-07-23 04:18:53.923872] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:45.367 [2024-07-23 04:18:53.923904] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:45.367 [2024-07-23 04:18:53.926267] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:45.367 [2024-07-23 04:18:53.926328] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:45.367 04:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:45.367 04:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:45.367 04:18:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:45.367 04:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:45.367 04:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:45.367 04:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:45.367 04:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:45.367 04:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:45.367 04:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:45.367 04:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:45.367 04:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:45.367 04:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:45.626 04:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:45.626 "name": "Existed_Raid", 00:23:45.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:45.626 "strip_size_kb": 64, 00:23:45.626 "state": "configuring", 00:23:45.626 "raid_level": "concat", 00:23:45.626 "superblock": false, 00:23:45.626 "num_base_bdevs": 4, 00:23:45.626 "num_base_bdevs_discovered": 3, 00:23:45.626 "num_base_bdevs_operational": 4, 00:23:45.626 "base_bdevs_list": [ 00:23:45.626 { 00:23:45.626 "name": "BaseBdev1", 00:23:45.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:45.626 "is_configured": false, 00:23:45.626 "data_offset": 0, 00:23:45.626 "data_size": 0 00:23:45.626 }, 00:23:45.626 { 00:23:45.626 "name": "BaseBdev2", 00:23:45.626 "uuid": 
"ba9e2cf0-2ad7-420b-9ce4-bebf95b0e55b", 00:23:45.626 "is_configured": true, 00:23:45.626 "data_offset": 0, 00:23:45.626 "data_size": 65536 00:23:45.626 }, 00:23:45.626 { 00:23:45.626 "name": "BaseBdev3", 00:23:45.626 "uuid": "4b25e60e-c1a2-4d13-8aba-185994af22c9", 00:23:45.626 "is_configured": true, 00:23:45.626 "data_offset": 0, 00:23:45.626 "data_size": 65536 00:23:45.626 }, 00:23:45.626 { 00:23:45.626 "name": "BaseBdev4", 00:23:45.626 "uuid": "0fbbd443-8cdf-45de-b064-8b6cc649fcc7", 00:23:45.626 "is_configured": true, 00:23:45.626 "data_offset": 0, 00:23:45.626 "data_size": 65536 00:23:45.626 } 00:23:45.626 ] 00:23:45.626 }' 00:23:45.626 04:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:45.626 04:18:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:46.227 04:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:23:46.227 [2024-07-23 04:18:54.950556] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:46.227 04:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:46.227 04:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:46.227 04:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:46.227 04:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:46.227 04:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:46.227 04:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:46.227 04:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:23:46.227 04:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:46.227 04:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:46.227 04:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:46.227 04:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:46.227 04:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:46.486 04:18:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:46.486 "name": "Existed_Raid", 00:23:46.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:46.486 "strip_size_kb": 64, 00:23:46.486 "state": "configuring", 00:23:46.486 "raid_level": "concat", 00:23:46.486 "superblock": false, 00:23:46.486 "num_base_bdevs": 4, 00:23:46.486 "num_base_bdevs_discovered": 2, 00:23:46.486 "num_base_bdevs_operational": 4, 00:23:46.486 "base_bdevs_list": [ 00:23:46.486 { 00:23:46.486 "name": "BaseBdev1", 00:23:46.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:46.486 "is_configured": false, 00:23:46.486 "data_offset": 0, 00:23:46.486 "data_size": 0 00:23:46.486 }, 00:23:46.486 { 00:23:46.486 "name": null, 00:23:46.486 "uuid": "ba9e2cf0-2ad7-420b-9ce4-bebf95b0e55b", 00:23:46.486 "is_configured": false, 00:23:46.486 "data_offset": 0, 00:23:46.486 "data_size": 65536 00:23:46.486 }, 00:23:46.486 { 00:23:46.486 "name": "BaseBdev3", 00:23:46.486 "uuid": "4b25e60e-c1a2-4d13-8aba-185994af22c9", 00:23:46.486 "is_configured": true, 00:23:46.486 "data_offset": 0, 00:23:46.486 "data_size": 65536 00:23:46.486 }, 00:23:46.486 { 00:23:46.486 "name": "BaseBdev4", 00:23:46.486 "uuid": "0fbbd443-8cdf-45de-b064-8b6cc649fcc7", 00:23:46.486 "is_configured": true, 00:23:46.486 
"data_offset": 0, 00:23:46.486 "data_size": 65536 00:23:46.486 } 00:23:46.486 ] 00:23:46.486 }' 00:23:46.486 04:18:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:46.486 04:18:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:47.052 04:18:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:47.052 04:18:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:47.311 04:18:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:23:47.311 04:18:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:47.569 [2024-07-23 04:18:56.237061] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:47.569 BaseBdev1 00:23:47.569 04:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:23:47.569 04:18:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:23:47.569 04:18:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:47.569 04:18:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:23:47.569 04:18:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:47.569 04:18:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:47.569 04:18:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:47.828 
04:18:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:47.828 [ 00:23:47.828 { 00:23:47.828 "name": "BaseBdev1", 00:23:47.828 "aliases": [ 00:23:47.828 "fbddfdfd-8e45-4fe0-84f3-ab2e18d1705f" 00:23:47.828 ], 00:23:47.828 "product_name": "Malloc disk", 00:23:47.828 "block_size": 512, 00:23:47.828 "num_blocks": 65536, 00:23:47.828 "uuid": "fbddfdfd-8e45-4fe0-84f3-ab2e18d1705f", 00:23:47.828 "assigned_rate_limits": { 00:23:47.828 "rw_ios_per_sec": 0, 00:23:47.828 "rw_mbytes_per_sec": 0, 00:23:47.828 "r_mbytes_per_sec": 0, 00:23:47.828 "w_mbytes_per_sec": 0 00:23:47.828 }, 00:23:47.828 "claimed": true, 00:23:47.828 "claim_type": "exclusive_write", 00:23:47.828 "zoned": false, 00:23:47.828 "supported_io_types": { 00:23:47.828 "read": true, 00:23:47.828 "write": true, 00:23:47.828 "unmap": true, 00:23:47.828 "flush": true, 00:23:47.828 "reset": true, 00:23:47.828 "nvme_admin": false, 00:23:47.828 "nvme_io": false, 00:23:47.828 "nvme_io_md": false, 00:23:47.828 "write_zeroes": true, 00:23:47.828 "zcopy": true, 00:23:47.828 "get_zone_info": false, 00:23:47.828 "zone_management": false, 00:23:47.828 "zone_append": false, 00:23:47.828 "compare": false, 00:23:47.828 "compare_and_write": false, 00:23:47.828 "abort": true, 00:23:47.828 "seek_hole": false, 00:23:47.828 "seek_data": false, 00:23:47.828 "copy": true, 00:23:47.828 "nvme_iov_md": false 00:23:47.828 }, 00:23:47.828 "memory_domains": [ 00:23:47.828 { 00:23:47.828 "dma_device_id": "system", 00:23:47.828 "dma_device_type": 1 00:23:47.828 }, 00:23:47.828 { 00:23:47.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:47.828 "dma_device_type": 2 00:23:47.828 } 00:23:47.828 ], 00:23:47.828 "driver_specific": {} 00:23:47.828 } 00:23:47.828 ] 00:23:47.828 04:18:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:23:47.828 04:18:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:47.828 04:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:47.828 04:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:47.828 04:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:47.828 04:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:47.828 04:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:47.828 04:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:47.828 04:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:47.828 04:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:47.828 04:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:47.828 04:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:47.828 04:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:48.395 04:18:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:48.395 "name": "Existed_Raid", 00:23:48.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:48.395 "strip_size_kb": 64, 00:23:48.395 "state": "configuring", 00:23:48.395 "raid_level": "concat", 00:23:48.395 "superblock": false, 00:23:48.395 "num_base_bdevs": 4, 00:23:48.395 "num_base_bdevs_discovered": 3, 00:23:48.395 "num_base_bdevs_operational": 4, 00:23:48.395 "base_bdevs_list": [ 00:23:48.395 { 
00:23:48.395 "name": "BaseBdev1", 00:23:48.395 "uuid": "fbddfdfd-8e45-4fe0-84f3-ab2e18d1705f", 00:23:48.395 "is_configured": true, 00:23:48.395 "data_offset": 0, 00:23:48.395 "data_size": 65536 00:23:48.395 }, 00:23:48.395 { 00:23:48.395 "name": null, 00:23:48.395 "uuid": "ba9e2cf0-2ad7-420b-9ce4-bebf95b0e55b", 00:23:48.395 "is_configured": false, 00:23:48.395 "data_offset": 0, 00:23:48.395 "data_size": 65536 00:23:48.395 }, 00:23:48.395 { 00:23:48.395 "name": "BaseBdev3", 00:23:48.395 "uuid": "4b25e60e-c1a2-4d13-8aba-185994af22c9", 00:23:48.395 "is_configured": true, 00:23:48.395 "data_offset": 0, 00:23:48.395 "data_size": 65536 00:23:48.395 }, 00:23:48.395 { 00:23:48.395 "name": "BaseBdev4", 00:23:48.395 "uuid": "0fbbd443-8cdf-45de-b064-8b6cc649fcc7", 00:23:48.395 "is_configured": true, 00:23:48.395 "data_offset": 0, 00:23:48.395 "data_size": 65536 00:23:48.395 } 00:23:48.395 ] 00:23:48.395 }' 00:23:48.395 04:18:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:48.395 04:18:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:48.962 04:18:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:48.962 04:18:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:49.221 04:18:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:23:49.221 04:18:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:23:49.788 [2024-07-23 04:18:58.346882] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:49.788 04:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid 
configuring concat 64 4 00:23:49.788 04:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:49.788 04:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:49.788 04:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:49.788 04:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:49.788 04:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:49.788 04:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:49.788 04:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:49.788 04:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:49.788 04:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:49.788 04:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:49.788 04:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:50.047 04:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:50.047 "name": "Existed_Raid", 00:23:50.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:50.047 "strip_size_kb": 64, 00:23:50.047 "state": "configuring", 00:23:50.047 "raid_level": "concat", 00:23:50.047 "superblock": false, 00:23:50.047 "num_base_bdevs": 4, 00:23:50.047 "num_base_bdevs_discovered": 2, 00:23:50.047 "num_base_bdevs_operational": 4, 00:23:50.047 "base_bdevs_list": [ 00:23:50.047 { 00:23:50.047 "name": "BaseBdev1", 00:23:50.047 "uuid": "fbddfdfd-8e45-4fe0-84f3-ab2e18d1705f", 00:23:50.047 
"is_configured": true, 00:23:50.047 "data_offset": 0, 00:23:50.047 "data_size": 65536 00:23:50.047 }, 00:23:50.047 { 00:23:50.047 "name": null, 00:23:50.047 "uuid": "ba9e2cf0-2ad7-420b-9ce4-bebf95b0e55b", 00:23:50.047 "is_configured": false, 00:23:50.047 "data_offset": 0, 00:23:50.047 "data_size": 65536 00:23:50.047 }, 00:23:50.047 { 00:23:50.047 "name": null, 00:23:50.047 "uuid": "4b25e60e-c1a2-4d13-8aba-185994af22c9", 00:23:50.047 "is_configured": false, 00:23:50.047 "data_offset": 0, 00:23:50.047 "data_size": 65536 00:23:50.047 }, 00:23:50.047 { 00:23:50.047 "name": "BaseBdev4", 00:23:50.047 "uuid": "0fbbd443-8cdf-45de-b064-8b6cc649fcc7", 00:23:50.047 "is_configured": true, 00:23:50.047 "data_offset": 0, 00:23:50.047 "data_size": 65536 00:23:50.047 } 00:23:50.047 ] 00:23:50.047 }' 00:23:50.047 04:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:50.047 04:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:50.613 04:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:50.613 04:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:50.613 04:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:23:50.613 04:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:23:50.871 [2024-07-23 04:18:59.518037] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:50.871 04:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:50.871 04:18:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:50.871 04:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:50.871 04:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:50.871 04:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:50.871 04:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:50.871 04:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:50.871 04:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:50.871 04:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:50.871 04:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:50.871 04:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:50.871 04:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:51.129 04:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:51.129 "name": "Existed_Raid", 00:23:51.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:51.129 "strip_size_kb": 64, 00:23:51.129 "state": "configuring", 00:23:51.129 "raid_level": "concat", 00:23:51.129 "superblock": false, 00:23:51.129 "num_base_bdevs": 4, 00:23:51.129 "num_base_bdevs_discovered": 3, 00:23:51.129 "num_base_bdevs_operational": 4, 00:23:51.129 "base_bdevs_list": [ 00:23:51.129 { 00:23:51.129 "name": "BaseBdev1", 00:23:51.129 "uuid": "fbddfdfd-8e45-4fe0-84f3-ab2e18d1705f", 00:23:51.129 "is_configured": true, 00:23:51.129 "data_offset": 0, 00:23:51.129 "data_size": 65536 
00:23:51.129 }, 00:23:51.129 { 00:23:51.129 "name": null, 00:23:51.129 "uuid": "ba9e2cf0-2ad7-420b-9ce4-bebf95b0e55b", 00:23:51.129 "is_configured": false, 00:23:51.129 "data_offset": 0, 00:23:51.129 "data_size": 65536 00:23:51.129 }, 00:23:51.129 { 00:23:51.129 "name": "BaseBdev3", 00:23:51.129 "uuid": "4b25e60e-c1a2-4d13-8aba-185994af22c9", 00:23:51.129 "is_configured": true, 00:23:51.129 "data_offset": 0, 00:23:51.129 "data_size": 65536 00:23:51.129 }, 00:23:51.129 { 00:23:51.129 "name": "BaseBdev4", 00:23:51.129 "uuid": "0fbbd443-8cdf-45de-b064-8b6cc649fcc7", 00:23:51.129 "is_configured": true, 00:23:51.129 "data_offset": 0, 00:23:51.129 "data_size": 65536 00:23:51.129 } 00:23:51.129 ] 00:23:51.129 }' 00:23:51.129 04:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:51.129 04:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:51.695 04:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:51.695 04:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:51.953 04:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:23:51.953 04:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:51.953 [2024-07-23 04:19:00.733362] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:52.211 04:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:52.211 04:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:52.211 04:19:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:52.211 04:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:52.211 04:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:52.211 04:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:52.211 04:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:52.211 04:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:52.211 04:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:52.211 04:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:52.211 04:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:52.211 04:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:52.779 04:19:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:52.779 "name": "Existed_Raid", 00:23:52.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:52.779 "strip_size_kb": 64, 00:23:52.779 "state": "configuring", 00:23:52.779 "raid_level": "concat", 00:23:52.779 "superblock": false, 00:23:52.779 "num_base_bdevs": 4, 00:23:52.779 "num_base_bdevs_discovered": 2, 00:23:52.779 "num_base_bdevs_operational": 4, 00:23:52.779 "base_bdevs_list": [ 00:23:52.779 { 00:23:52.779 "name": null, 00:23:52.779 "uuid": "fbddfdfd-8e45-4fe0-84f3-ab2e18d1705f", 00:23:52.779 "is_configured": false, 00:23:52.779 "data_offset": 0, 00:23:52.779 "data_size": 65536 00:23:52.779 }, 00:23:52.779 { 00:23:52.779 "name": null, 00:23:52.779 "uuid": "ba9e2cf0-2ad7-420b-9ce4-bebf95b0e55b", 
00:23:52.779 "is_configured": false, 00:23:52.779 "data_offset": 0, 00:23:52.779 "data_size": 65536 00:23:52.779 }, 00:23:52.779 { 00:23:52.779 "name": "BaseBdev3", 00:23:52.779 "uuid": "4b25e60e-c1a2-4d13-8aba-185994af22c9", 00:23:52.779 "is_configured": true, 00:23:52.779 "data_offset": 0, 00:23:52.779 "data_size": 65536 00:23:52.779 }, 00:23:52.779 { 00:23:52.779 "name": "BaseBdev4", 00:23:52.779 "uuid": "0fbbd443-8cdf-45de-b064-8b6cc649fcc7", 00:23:52.779 "is_configured": true, 00:23:52.779 "data_offset": 0, 00:23:52.779 "data_size": 65536 00:23:52.779 } 00:23:52.779 ] 00:23:52.779 }' 00:23:52.779 04:19:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:52.779 04:19:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:53.345 04:19:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:53.345 04:19:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:53.603 04:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:23:53.603 04:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:23:54.169 [2024-07-23 04:19:02.657185] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:54.169 04:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:54.169 04:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:54.169 04:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:54.169 
04:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:54.169 04:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:54.169 04:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:54.169 04:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:54.169 04:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:54.169 04:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:54.169 04:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:54.169 04:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:54.169 04:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:54.169 04:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:54.169 "name": "Existed_Raid", 00:23:54.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:54.169 "strip_size_kb": 64, 00:23:54.169 "state": "configuring", 00:23:54.169 "raid_level": "concat", 00:23:54.169 "superblock": false, 00:23:54.169 "num_base_bdevs": 4, 00:23:54.169 "num_base_bdevs_discovered": 3, 00:23:54.169 "num_base_bdevs_operational": 4, 00:23:54.169 "base_bdevs_list": [ 00:23:54.169 { 00:23:54.169 "name": null, 00:23:54.169 "uuid": "fbddfdfd-8e45-4fe0-84f3-ab2e18d1705f", 00:23:54.169 "is_configured": false, 00:23:54.169 "data_offset": 0, 00:23:54.169 "data_size": 65536 00:23:54.169 }, 00:23:54.169 { 00:23:54.169 "name": "BaseBdev2", 00:23:54.169 "uuid": "ba9e2cf0-2ad7-420b-9ce4-bebf95b0e55b", 00:23:54.169 "is_configured": true, 00:23:54.169 "data_offset": 0, 
00:23:54.169 "data_size": 65536 00:23:54.169 }, 00:23:54.169 { 00:23:54.169 "name": "BaseBdev3", 00:23:54.169 "uuid": "4b25e60e-c1a2-4d13-8aba-185994af22c9", 00:23:54.169 "is_configured": true, 00:23:54.169 "data_offset": 0, 00:23:54.169 "data_size": 65536 00:23:54.169 }, 00:23:54.169 { 00:23:54.169 "name": "BaseBdev4", 00:23:54.169 "uuid": "0fbbd443-8cdf-45de-b064-8b6cc649fcc7", 00:23:54.169 "is_configured": true, 00:23:54.169 "data_offset": 0, 00:23:54.169 "data_size": 65536 00:23:54.169 } 00:23:54.169 ] 00:23:54.169 }' 00:23:54.169 04:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:54.169 04:19:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:55.103 04:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:55.103 04:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:55.361 04:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:23:55.361 04:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:55.361 04:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:23:55.619 04:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u fbddfdfd-8e45-4fe0-84f3-ab2e18d1705f 00:23:55.878 [2024-07-23 04:19:04.473689] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:23:55.878 [2024-07-23 04:19:04.473739] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 
0x616000042080 00:23:55.878 [2024-07-23 04:19:04.473752] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:23:55.878 [2024-07-23 04:19:04.474080] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010b20 00:23:55.878 [2024-07-23 04:19:04.474321] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042080 00:23:55.878 [2024-07-23 04:19:04.474339] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000042080 00:23:55.878 [2024-07-23 04:19:04.474636] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:55.878 NewBaseBdev 00:23:55.878 04:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:23:55.878 04:19:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:23:55.878 04:19:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:55.878 04:19:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:23:55.878 04:19:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:55.878 04:19:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:55.878 04:19:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:56.136 04:19:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:23:56.136 [ 00:23:56.136 { 00:23:56.136 "name": "NewBaseBdev", 00:23:56.136 "aliases": [ 00:23:56.136 "fbddfdfd-8e45-4fe0-84f3-ab2e18d1705f" 00:23:56.136 ], 00:23:56.136 "product_name": "Malloc disk", 
00:23:56.136 "block_size": 512, 00:23:56.136 "num_blocks": 65536, 00:23:56.136 "uuid": "fbddfdfd-8e45-4fe0-84f3-ab2e18d1705f", 00:23:56.136 "assigned_rate_limits": { 00:23:56.136 "rw_ios_per_sec": 0, 00:23:56.136 "rw_mbytes_per_sec": 0, 00:23:56.136 "r_mbytes_per_sec": 0, 00:23:56.136 "w_mbytes_per_sec": 0 00:23:56.136 }, 00:23:56.136 "claimed": true, 00:23:56.136 "claim_type": "exclusive_write", 00:23:56.136 "zoned": false, 00:23:56.136 "supported_io_types": { 00:23:56.136 "read": true, 00:23:56.136 "write": true, 00:23:56.136 "unmap": true, 00:23:56.136 "flush": true, 00:23:56.136 "reset": true, 00:23:56.136 "nvme_admin": false, 00:23:56.136 "nvme_io": false, 00:23:56.136 "nvme_io_md": false, 00:23:56.136 "write_zeroes": true, 00:23:56.136 "zcopy": true, 00:23:56.136 "get_zone_info": false, 00:23:56.136 "zone_management": false, 00:23:56.136 "zone_append": false, 00:23:56.136 "compare": false, 00:23:56.136 "compare_and_write": false, 00:23:56.136 "abort": true, 00:23:56.136 "seek_hole": false, 00:23:56.136 "seek_data": false, 00:23:56.136 "copy": true, 00:23:56.136 "nvme_iov_md": false 00:23:56.136 }, 00:23:56.136 "memory_domains": [ 00:23:56.136 { 00:23:56.136 "dma_device_id": "system", 00:23:56.136 "dma_device_type": 1 00:23:56.136 }, 00:23:56.136 { 00:23:56.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:56.136 "dma_device_type": 2 00:23:56.136 } 00:23:56.136 ], 00:23:56.136 "driver_specific": {} 00:23:56.136 } 00:23:56.136 ] 00:23:56.395 04:19:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:23:56.395 04:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:23:56.395 04:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:56.395 04:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:56.395 04:19:04 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:56.395 04:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:56.395 04:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:56.395 04:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:56.395 04:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:56.395 04:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:56.395 04:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:56.395 04:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:56.395 04:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:56.395 04:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:56.395 "name": "Existed_Raid", 00:23:56.395 "uuid": "b9898cba-c7c7-49ce-987f-b414f756e01b", 00:23:56.395 "strip_size_kb": 64, 00:23:56.395 "state": "online", 00:23:56.395 "raid_level": "concat", 00:23:56.395 "superblock": false, 00:23:56.395 "num_base_bdevs": 4, 00:23:56.395 "num_base_bdevs_discovered": 4, 00:23:56.395 "num_base_bdevs_operational": 4, 00:23:56.395 "base_bdevs_list": [ 00:23:56.395 { 00:23:56.395 "name": "NewBaseBdev", 00:23:56.395 "uuid": "fbddfdfd-8e45-4fe0-84f3-ab2e18d1705f", 00:23:56.395 "is_configured": true, 00:23:56.395 "data_offset": 0, 00:23:56.395 "data_size": 65536 00:23:56.395 }, 00:23:56.395 { 00:23:56.395 "name": "BaseBdev2", 00:23:56.395 "uuid": "ba9e2cf0-2ad7-420b-9ce4-bebf95b0e55b", 00:23:56.395 "is_configured": true, 00:23:56.395 "data_offset": 0, 00:23:56.395 "data_size": 65536 00:23:56.395 }, 
00:23:56.395 { 00:23:56.395 "name": "BaseBdev3", 00:23:56.395 "uuid": "4b25e60e-c1a2-4d13-8aba-185994af22c9", 00:23:56.395 "is_configured": true, 00:23:56.395 "data_offset": 0, 00:23:56.395 "data_size": 65536 00:23:56.395 }, 00:23:56.395 { 00:23:56.395 "name": "BaseBdev4", 00:23:56.395 "uuid": "0fbbd443-8cdf-45de-b064-8b6cc649fcc7", 00:23:56.395 "is_configured": true, 00:23:56.395 "data_offset": 0, 00:23:56.395 "data_size": 65536 00:23:56.395 } 00:23:56.395 ] 00:23:56.395 }' 00:23:56.395 04:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:56.395 04:19:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:57.328 04:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:23:57.328 04:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:23:57.328 04:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:57.328 04:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:57.328 04:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:57.328 04:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:23:57.328 04:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:57.328 04:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:23:57.328 [2024-07-23 04:19:05.962239] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:57.328 04:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:57.328 "name": "Existed_Raid", 00:23:57.328 "aliases": [ 00:23:57.328 "b9898cba-c7c7-49ce-987f-b414f756e01b" 
00:23:57.328 ], 00:23:57.328 "product_name": "Raid Volume", 00:23:57.328 "block_size": 512, 00:23:57.328 "num_blocks": 262144, 00:23:57.328 "uuid": "b9898cba-c7c7-49ce-987f-b414f756e01b", 00:23:57.328 "assigned_rate_limits": { 00:23:57.328 "rw_ios_per_sec": 0, 00:23:57.328 "rw_mbytes_per_sec": 0, 00:23:57.328 "r_mbytes_per_sec": 0, 00:23:57.328 "w_mbytes_per_sec": 0 00:23:57.329 }, 00:23:57.329 "claimed": false, 00:23:57.329 "zoned": false, 00:23:57.329 "supported_io_types": { 00:23:57.329 "read": true, 00:23:57.329 "write": true, 00:23:57.329 "unmap": true, 00:23:57.329 "flush": true, 00:23:57.329 "reset": true, 00:23:57.329 "nvme_admin": false, 00:23:57.329 "nvme_io": false, 00:23:57.329 "nvme_io_md": false, 00:23:57.329 "write_zeroes": true, 00:23:57.329 "zcopy": false, 00:23:57.329 "get_zone_info": false, 00:23:57.329 "zone_management": false, 00:23:57.329 "zone_append": false, 00:23:57.329 "compare": false, 00:23:57.329 "compare_and_write": false, 00:23:57.329 "abort": false, 00:23:57.329 "seek_hole": false, 00:23:57.329 "seek_data": false, 00:23:57.329 "copy": false, 00:23:57.329 "nvme_iov_md": false 00:23:57.329 }, 00:23:57.329 "memory_domains": [ 00:23:57.329 { 00:23:57.329 "dma_device_id": "system", 00:23:57.329 "dma_device_type": 1 00:23:57.329 }, 00:23:57.329 { 00:23:57.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:57.329 "dma_device_type": 2 00:23:57.329 }, 00:23:57.329 { 00:23:57.329 "dma_device_id": "system", 00:23:57.329 "dma_device_type": 1 00:23:57.329 }, 00:23:57.329 { 00:23:57.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:57.329 "dma_device_type": 2 00:23:57.329 }, 00:23:57.329 { 00:23:57.329 "dma_device_id": "system", 00:23:57.329 "dma_device_type": 1 00:23:57.329 }, 00:23:57.329 { 00:23:57.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:57.329 "dma_device_type": 2 00:23:57.329 }, 00:23:57.329 { 00:23:57.329 "dma_device_id": "system", 00:23:57.329 "dma_device_type": 1 00:23:57.329 }, 00:23:57.329 { 00:23:57.329 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:23:57.329 "dma_device_type": 2 00:23:57.329 } 00:23:57.329 ], 00:23:57.329 "driver_specific": { 00:23:57.329 "raid": { 00:23:57.329 "uuid": "b9898cba-c7c7-49ce-987f-b414f756e01b", 00:23:57.329 "strip_size_kb": 64, 00:23:57.329 "state": "online", 00:23:57.329 "raid_level": "concat", 00:23:57.329 "superblock": false, 00:23:57.329 "num_base_bdevs": 4, 00:23:57.329 "num_base_bdevs_discovered": 4, 00:23:57.329 "num_base_bdevs_operational": 4, 00:23:57.329 "base_bdevs_list": [ 00:23:57.329 { 00:23:57.329 "name": "NewBaseBdev", 00:23:57.329 "uuid": "fbddfdfd-8e45-4fe0-84f3-ab2e18d1705f", 00:23:57.329 "is_configured": true, 00:23:57.329 "data_offset": 0, 00:23:57.329 "data_size": 65536 00:23:57.329 }, 00:23:57.329 { 00:23:57.329 "name": "BaseBdev2", 00:23:57.329 "uuid": "ba9e2cf0-2ad7-420b-9ce4-bebf95b0e55b", 00:23:57.329 "is_configured": true, 00:23:57.329 "data_offset": 0, 00:23:57.329 "data_size": 65536 00:23:57.329 }, 00:23:57.329 { 00:23:57.329 "name": "BaseBdev3", 00:23:57.329 "uuid": "4b25e60e-c1a2-4d13-8aba-185994af22c9", 00:23:57.329 "is_configured": true, 00:23:57.329 "data_offset": 0, 00:23:57.329 "data_size": 65536 00:23:57.329 }, 00:23:57.329 { 00:23:57.329 "name": "BaseBdev4", 00:23:57.329 "uuid": "0fbbd443-8cdf-45de-b064-8b6cc649fcc7", 00:23:57.329 "is_configured": true, 00:23:57.329 "data_offset": 0, 00:23:57.329 "data_size": 65536 00:23:57.329 } 00:23:57.329 ] 00:23:57.329 } 00:23:57.329 } 00:23:57.329 }' 00:23:57.329 04:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:57.329 04:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:23:57.329 BaseBdev2 00:23:57.329 BaseBdev3 00:23:57.329 BaseBdev4' 00:23:57.329 04:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:57.329 04:19:06 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:23:57.329 04:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:57.587 04:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:57.587 "name": "NewBaseBdev", 00:23:57.587 "aliases": [ 00:23:57.587 "fbddfdfd-8e45-4fe0-84f3-ab2e18d1705f" 00:23:57.587 ], 00:23:57.587 "product_name": "Malloc disk", 00:23:57.587 "block_size": 512, 00:23:57.587 "num_blocks": 65536, 00:23:57.587 "uuid": "fbddfdfd-8e45-4fe0-84f3-ab2e18d1705f", 00:23:57.587 "assigned_rate_limits": { 00:23:57.587 "rw_ios_per_sec": 0, 00:23:57.587 "rw_mbytes_per_sec": 0, 00:23:57.587 "r_mbytes_per_sec": 0, 00:23:57.587 "w_mbytes_per_sec": 0 00:23:57.587 }, 00:23:57.587 "claimed": true, 00:23:57.587 "claim_type": "exclusive_write", 00:23:57.587 "zoned": false, 00:23:57.587 "supported_io_types": { 00:23:57.587 "read": true, 00:23:57.587 "write": true, 00:23:57.587 "unmap": true, 00:23:57.587 "flush": true, 00:23:57.587 "reset": true, 00:23:57.587 "nvme_admin": false, 00:23:57.587 "nvme_io": false, 00:23:57.587 "nvme_io_md": false, 00:23:57.587 "write_zeroes": true, 00:23:57.587 "zcopy": true, 00:23:57.587 "get_zone_info": false, 00:23:57.587 "zone_management": false, 00:23:57.587 "zone_append": false, 00:23:57.587 "compare": false, 00:23:57.587 "compare_and_write": false, 00:23:57.587 "abort": true, 00:23:57.587 "seek_hole": false, 00:23:57.587 "seek_data": false, 00:23:57.587 "copy": true, 00:23:57.587 "nvme_iov_md": false 00:23:57.588 }, 00:23:57.588 "memory_domains": [ 00:23:57.588 { 00:23:57.588 "dma_device_id": "system", 00:23:57.588 "dma_device_type": 1 00:23:57.588 }, 00:23:57.588 { 00:23:57.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:57.588 "dma_device_type": 2 00:23:57.588 } 00:23:57.588 ], 00:23:57.588 "driver_specific": {} 00:23:57.588 }' 00:23:57.588 04:19:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:57.588 04:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:57.588 04:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:57.588 04:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:57.846 04:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:57.846 04:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:57.846 04:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:57.846 04:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:57.846 04:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:57.846 04:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:57.846 04:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:57.846 04:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:57.846 04:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:57.846 04:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:23:57.846 04:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:58.104 04:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:58.104 "name": "BaseBdev2", 00:23:58.104 "aliases": [ 00:23:58.104 "ba9e2cf0-2ad7-420b-9ce4-bebf95b0e55b" 00:23:58.104 ], 00:23:58.104 "product_name": "Malloc disk", 00:23:58.104 "block_size": 512, 00:23:58.104 "num_blocks": 65536, 00:23:58.104 "uuid": 
"ba9e2cf0-2ad7-420b-9ce4-bebf95b0e55b", 00:23:58.104 "assigned_rate_limits": { 00:23:58.104 "rw_ios_per_sec": 0, 00:23:58.104 "rw_mbytes_per_sec": 0, 00:23:58.104 "r_mbytes_per_sec": 0, 00:23:58.104 "w_mbytes_per_sec": 0 00:23:58.104 }, 00:23:58.104 "claimed": true, 00:23:58.104 "claim_type": "exclusive_write", 00:23:58.104 "zoned": false, 00:23:58.104 "supported_io_types": { 00:23:58.104 "read": true, 00:23:58.104 "write": true, 00:23:58.104 "unmap": true, 00:23:58.104 "flush": true, 00:23:58.104 "reset": true, 00:23:58.104 "nvme_admin": false, 00:23:58.104 "nvme_io": false, 00:23:58.104 "nvme_io_md": false, 00:23:58.104 "write_zeroes": true, 00:23:58.104 "zcopy": true, 00:23:58.104 "get_zone_info": false, 00:23:58.104 "zone_management": false, 00:23:58.104 "zone_append": false, 00:23:58.104 "compare": false, 00:23:58.104 "compare_and_write": false, 00:23:58.104 "abort": true, 00:23:58.104 "seek_hole": false, 00:23:58.104 "seek_data": false, 00:23:58.104 "copy": true, 00:23:58.104 "nvme_iov_md": false 00:23:58.104 }, 00:23:58.104 "memory_domains": [ 00:23:58.104 { 00:23:58.104 "dma_device_id": "system", 00:23:58.104 "dma_device_type": 1 00:23:58.104 }, 00:23:58.104 { 00:23:58.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:58.104 "dma_device_type": 2 00:23:58.104 } 00:23:58.104 ], 00:23:58.104 "driver_specific": {} 00:23:58.104 }' 00:23:58.104 04:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:58.104 04:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:58.369 04:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:58.369 04:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:58.369 04:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:58.369 04:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:58.369 04:19:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:58.369 04:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:58.369 04:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:58.369 04:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:58.369 04:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:58.627 04:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:58.627 04:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:58.627 04:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:23:58.627 04:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:58.627 04:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:58.627 "name": "BaseBdev3", 00:23:58.627 "aliases": [ 00:23:58.627 "4b25e60e-c1a2-4d13-8aba-185994af22c9" 00:23:58.627 ], 00:23:58.627 "product_name": "Malloc disk", 00:23:58.627 "block_size": 512, 00:23:58.627 "num_blocks": 65536, 00:23:58.627 "uuid": "4b25e60e-c1a2-4d13-8aba-185994af22c9", 00:23:58.627 "assigned_rate_limits": { 00:23:58.627 "rw_ios_per_sec": 0, 00:23:58.627 "rw_mbytes_per_sec": 0, 00:23:58.627 "r_mbytes_per_sec": 0, 00:23:58.627 "w_mbytes_per_sec": 0 00:23:58.627 }, 00:23:58.627 "claimed": true, 00:23:58.627 "claim_type": "exclusive_write", 00:23:58.627 "zoned": false, 00:23:58.627 "supported_io_types": { 00:23:58.627 "read": true, 00:23:58.627 "write": true, 00:23:58.627 "unmap": true, 00:23:58.627 "flush": true, 00:23:58.627 "reset": true, 00:23:58.627 "nvme_admin": false, 00:23:58.627 "nvme_io": false, 00:23:58.627 "nvme_io_md": false, 
00:23:58.627 "write_zeroes": true, 00:23:58.627 "zcopy": true, 00:23:58.627 "get_zone_info": false, 00:23:58.627 "zone_management": false, 00:23:58.627 "zone_append": false, 00:23:58.627 "compare": false, 00:23:58.627 "compare_and_write": false, 00:23:58.627 "abort": true, 00:23:58.627 "seek_hole": false, 00:23:58.627 "seek_data": false, 00:23:58.627 "copy": true, 00:23:58.627 "nvme_iov_md": false 00:23:58.627 }, 00:23:58.627 "memory_domains": [ 00:23:58.627 { 00:23:58.627 "dma_device_id": "system", 00:23:58.627 "dma_device_type": 1 00:23:58.627 }, 00:23:58.627 { 00:23:58.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:58.627 "dma_device_type": 2 00:23:58.627 } 00:23:58.627 ], 00:23:58.627 "driver_specific": {} 00:23:58.627 }' 00:23:58.627 04:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:58.885 04:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:58.885 04:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:58.885 04:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:58.885 04:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:58.885 04:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:58.885 04:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:58.885 04:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:59.144 04:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:59.144 04:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:59.144 04:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:59.144 04:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:59.144 04:19:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:59.144 04:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:23:59.144 04:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:59.403 04:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:59.403 "name": "BaseBdev4", 00:23:59.403 "aliases": [ 00:23:59.403 "0fbbd443-8cdf-45de-b064-8b6cc649fcc7" 00:23:59.403 ], 00:23:59.403 "product_name": "Malloc disk", 00:23:59.403 "block_size": 512, 00:23:59.403 "num_blocks": 65536, 00:23:59.403 "uuid": "0fbbd443-8cdf-45de-b064-8b6cc649fcc7", 00:23:59.403 "assigned_rate_limits": { 00:23:59.403 "rw_ios_per_sec": 0, 00:23:59.403 "rw_mbytes_per_sec": 0, 00:23:59.403 "r_mbytes_per_sec": 0, 00:23:59.403 "w_mbytes_per_sec": 0 00:23:59.403 }, 00:23:59.403 "claimed": true, 00:23:59.403 "claim_type": "exclusive_write", 00:23:59.403 "zoned": false, 00:23:59.403 "supported_io_types": { 00:23:59.403 "read": true, 00:23:59.403 "write": true, 00:23:59.403 "unmap": true, 00:23:59.403 "flush": true, 00:23:59.403 "reset": true, 00:23:59.403 "nvme_admin": false, 00:23:59.403 "nvme_io": false, 00:23:59.403 "nvme_io_md": false, 00:23:59.403 "write_zeroes": true, 00:23:59.403 "zcopy": true, 00:23:59.403 "get_zone_info": false, 00:23:59.403 "zone_management": false, 00:23:59.403 "zone_append": false, 00:23:59.403 "compare": false, 00:23:59.403 "compare_and_write": false, 00:23:59.403 "abort": true, 00:23:59.403 "seek_hole": false, 00:23:59.403 "seek_data": false, 00:23:59.403 "copy": true, 00:23:59.403 "nvme_iov_md": false 00:23:59.403 }, 00:23:59.403 "memory_domains": [ 00:23:59.403 { 00:23:59.403 "dma_device_id": "system", 00:23:59.403 "dma_device_type": 1 00:23:59.403 }, 00:23:59.403 { 00:23:59.403 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:23:59.403 "dma_device_type": 2 00:23:59.403 } 00:23:59.403 ], 00:23:59.403 "driver_specific": {} 00:23:59.403 }' 00:23:59.403 04:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:59.403 04:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:59.403 04:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:59.403 04:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:59.403 04:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:59.403 04:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:59.403 04:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:59.662 04:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:59.662 04:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:59.662 04:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:59.662 04:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:59.662 04:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:59.662 04:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:59.922 [2024-07-23 04:19:08.520763] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:59.922 [2024-07-23 04:19:08.520796] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:59.922 [2024-07-23 04:19:08.520877] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:59.922 [2024-07-23 04:19:08.520956] 
bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:59.922 [2024-07-23 04:19:08.520973] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042080 name Existed_Raid, state offline 00:23:59.922 04:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 2718613 00:23:59.922 04:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 2718613 ']' 00:23:59.922 04:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 2718613 00:23:59.922 04:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:23:59.922 04:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:59.922 04:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2718613 00:23:59.922 04:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:59.922 04:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:59.922 04:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2718613' 00:23:59.922 killing process with pid 2718613 00:23:59.922 04:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 2718613 00:23:59.922 [2024-07-23 04:19:08.594280] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:59.922 04:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 2718613 00:24:00.489 [2024-07-23 04:19:09.064807] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:02.391 04:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:24:02.391 00:24:02.391 real 0m35.422s 00:24:02.391 user 1m2.249s 00:24:02.391 sys 0m5.834s 00:24:02.391 
04:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:02.391 04:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:02.391 ************************************ 00:24:02.391 END TEST raid_state_function_test 00:24:02.392 ************************************ 00:24:02.392 04:19:10 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:24:02.392 04:19:10 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:24:02.392 04:19:10 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:24:02.392 04:19:10 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:02.392 04:19:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:02.392 ************************************ 00:24:02.392 START TEST raid_state_function_test_sb 00:24:02.392 ************************************ 00:24:02.392 04:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 4 true 00:24:02.392 04:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:24:02.392 04:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:24:02.392 04:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:24:02.392 04:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:24:02.392 04:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:24:02.392 04:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:02.392 04:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:24:02.392 04:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:02.392 04:19:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:02.392 04:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:24:02.392 04:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:02.392 04:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:02.392 04:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:24:02.392 04:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:02.392 04:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:02.392 04:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:24:02.392 04:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:02.392 04:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:02.392 04:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:02.392 04:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:24:02.392 04:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:24:02.392 04:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:24:02.392 04:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:24:02.392 04:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:24:02.392 04:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:24:02.392 04:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # 
strip_size=64 00:24:02.392 04:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:24:02.392 04:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:24:02.392 04:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:24:02.392 04:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:24:02.392 04:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2725271 00:24:02.392 04:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2725271' 00:24:02.392 Process raid pid: 2725271 00:24:02.392 04:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2725271 /var/tmp/spdk-raid.sock 00:24:02.392 04:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 2725271 ']' 00:24:02.392 04:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:02.392 04:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:02.392 04:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:02.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:02.392 04:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:02.392 04:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:02.392 [2024-07-23 04:19:11.120729] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:24:02.392 [2024-07-23 04:19:11.120974] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:02.651 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:02.651 EAL: Requested device 0000:3d:01.0 cannot be used 00:24:02.909 [2024-07-23 04:19:11.487686] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.168 [2024-07-23 04:19:11.787118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:03.426 [2024-07-23 04:19:12.099277] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:03.426 [2024-07-23 04:19:12.099314] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:03.684 04:19:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:03.684 04:19:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:24:03.684 04:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:03.942 [2024-07-23 04:19:12.479629] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:03.942 [2024-07-23 04:19:12.479683] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base
bdev BaseBdev1 doesn't exist now 00:24:03.942 [2024-07-23 04:19:12.479698] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:03.942 [2024-07-23 04:19:12.479714] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:03.943 [2024-07-23 04:19:12.479726] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:03.943 [2024-07-23 04:19:12.479741] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:03.943 [2024-07-23 04:19:12.479752] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:03.943 [2024-07-23 04:19:12.479768] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:03.943 04:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:03.943 04:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:03.943 04:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:03.943 04:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:03.943 04:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:03.943 04:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:03.943 04:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:03.943 04:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:03.943 04:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:03.943 04:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # 
local tmp 00:24:03.943 04:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:03.943 04:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:04.200 04:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:04.200 "name": "Existed_Raid", 00:24:04.200 "uuid": "976c2fdf-fc1f-4e6c-ad8e-4cec5ea30821", 00:24:04.200 "strip_size_kb": 64, 00:24:04.200 "state": "configuring", 00:24:04.200 "raid_level": "concat", 00:24:04.200 "superblock": true, 00:24:04.200 "num_base_bdevs": 4, 00:24:04.200 "num_base_bdevs_discovered": 0, 00:24:04.200 "num_base_bdevs_operational": 4, 00:24:04.200 "base_bdevs_list": [ 00:24:04.200 { 00:24:04.200 "name": "BaseBdev1", 00:24:04.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:04.200 "is_configured": false, 00:24:04.200 "data_offset": 0, 00:24:04.200 "data_size": 0 00:24:04.200 }, 00:24:04.200 { 00:24:04.200 "name": "BaseBdev2", 00:24:04.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:04.200 "is_configured": false, 00:24:04.200 "data_offset": 0, 00:24:04.200 "data_size": 0 00:24:04.200 }, 00:24:04.200 { 00:24:04.200 "name": "BaseBdev3", 00:24:04.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:04.200 "is_configured": false, 00:24:04.200 "data_offset": 0, 00:24:04.200 "data_size": 0 00:24:04.200 }, 00:24:04.200 { 00:24:04.200 "name": "BaseBdev4", 00:24:04.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:04.200 "is_configured": false, 00:24:04.200 "data_offset": 0, 00:24:04.200 "data_size": 0 00:24:04.200 } 00:24:04.200 ] 00:24:04.200 }' 00:24:04.200 04:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:04.200 04:19:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:04.766 
04:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:04.766 [2024-07-23 04:19:13.438267] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:04.766 [2024-07-23 04:19:13.438307] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:24:04.766 04:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:05.024 [2024-07-23 04:19:13.598766] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:05.024 [2024-07-23 04:19:13.598809] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:05.024 [2024-07-23 04:19:13.598822] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:05.024 [2024-07-23 04:19:13.598846] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:05.024 [2024-07-23 04:19:13.598857] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:05.024 [2024-07-23 04:19:13.598873] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:05.024 [2024-07-23 04:19:13.598887] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:05.024 [2024-07-23 04:19:13.598904] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:05.024 04:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 
512 -b BaseBdev1 00:24:05.283 [2024-07-23 04:19:13.821401] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:05.283 BaseBdev1 00:24:05.283 04:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:24:05.283 04:19:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:24:05.283 04:19:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:05.283 04:19:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:24:05.283 04:19:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:05.283 04:19:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:05.283 04:19:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:05.283 04:19:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:05.541 [ 00:24:05.541 { 00:24:05.541 "name": "BaseBdev1", 00:24:05.541 "aliases": [ 00:24:05.541 "f992a844-5f15-4478-aa7b-b01b04e08a2b" 00:24:05.541 ], 00:24:05.541 "product_name": "Malloc disk", 00:24:05.541 "block_size": 512, 00:24:05.541 "num_blocks": 65536, 00:24:05.541 "uuid": "f992a844-5f15-4478-aa7b-b01b04e08a2b", 00:24:05.541 "assigned_rate_limits": { 00:24:05.541 "rw_ios_per_sec": 0, 00:24:05.541 "rw_mbytes_per_sec": 0, 00:24:05.541 "r_mbytes_per_sec": 0, 00:24:05.541 "w_mbytes_per_sec": 0 00:24:05.541 }, 00:24:05.541 "claimed": true, 00:24:05.541 "claim_type": "exclusive_write", 00:24:05.541 "zoned": false, 00:24:05.541 "supported_io_types": { 00:24:05.541 "read": true, 00:24:05.541 
"write": true, 00:24:05.541 "unmap": true, 00:24:05.541 "flush": true, 00:24:05.541 "reset": true, 00:24:05.541 "nvme_admin": false, 00:24:05.541 "nvme_io": false, 00:24:05.541 "nvme_io_md": false, 00:24:05.541 "write_zeroes": true, 00:24:05.541 "zcopy": true, 00:24:05.541 "get_zone_info": false, 00:24:05.541 "zone_management": false, 00:24:05.541 "zone_append": false, 00:24:05.541 "compare": false, 00:24:05.541 "compare_and_write": false, 00:24:05.541 "abort": true, 00:24:05.541 "seek_hole": false, 00:24:05.541 "seek_data": false, 00:24:05.541 "copy": true, 00:24:05.541 "nvme_iov_md": false 00:24:05.541 }, 00:24:05.541 "memory_domains": [ 00:24:05.541 { 00:24:05.541 "dma_device_id": "system", 00:24:05.541 "dma_device_type": 1 00:24:05.541 }, 00:24:05.541 { 00:24:05.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:05.541 "dma_device_type": 2 00:24:05.541 } 00:24:05.541 ], 00:24:05.541 "driver_specific": {} 00:24:05.541 } 00:24:05.541 ] 00:24:05.541 04:19:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:24:05.541 04:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:05.541 04:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:05.541 04:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:05.541 04:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:05.541 04:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:05.541 04:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:05.541 04:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:05.541 04:19:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:05.541 04:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:05.541 04:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:05.542 04:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:05.542 04:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:05.800 04:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:05.800 "name": "Existed_Raid", 00:24:05.800 "uuid": "fa52dc1a-ee08-40a4-92dc-c79b39aedc99", 00:24:05.800 "strip_size_kb": 64, 00:24:05.800 "state": "configuring", 00:24:05.800 "raid_level": "concat", 00:24:05.800 "superblock": true, 00:24:05.800 "num_base_bdevs": 4, 00:24:05.800 "num_base_bdevs_discovered": 1, 00:24:05.800 "num_base_bdevs_operational": 4, 00:24:05.800 "base_bdevs_list": [ 00:24:05.800 { 00:24:05.800 "name": "BaseBdev1", 00:24:05.800 "uuid": "f992a844-5f15-4478-aa7b-b01b04e08a2b", 00:24:05.800 "is_configured": true, 00:24:05.800 "data_offset": 2048, 00:24:05.800 "data_size": 63488 00:24:05.800 }, 00:24:05.800 { 00:24:05.800 "name": "BaseBdev2", 00:24:05.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:05.800 "is_configured": false, 00:24:05.800 "data_offset": 0, 00:24:05.800 "data_size": 0 00:24:05.800 }, 00:24:05.800 { 00:24:05.800 "name": "BaseBdev3", 00:24:05.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:05.800 "is_configured": false, 00:24:05.800 "data_offset": 0, 00:24:05.800 "data_size": 0 00:24:05.800 }, 00:24:05.800 { 00:24:05.800 "name": "BaseBdev4", 00:24:05.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:05.800 "is_configured": false, 00:24:05.800 "data_offset": 0, 00:24:05.800 "data_size": 0 
00:24:05.800 } 00:24:05.800 ] 00:24:05.800 }' 00:24:05.800 04:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:05.800 04:19:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:06.366 04:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:06.366 [2024-07-23 04:19:15.060776] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:06.366 [2024-07-23 04:19:15.060831] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:24:06.366 04:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:06.624 [2024-07-23 04:19:15.293514] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:06.624 [2024-07-23 04:19:15.295811] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:06.624 [2024-07-23 04:19:15.295854] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:06.624 [2024-07-23 04:19:15.295869] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:06.624 [2024-07-23 04:19:15.295885] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:06.624 [2024-07-23 04:19:15.295897] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:06.624 [2024-07-23 04:19:15.295915] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:06.624 04:19:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:24:06.624 04:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:06.624 04:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:06.624 04:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:06.624 04:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:06.624 04:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:06.624 04:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:06.624 04:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:06.624 04:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:06.624 04:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:06.624 04:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:06.624 04:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:06.624 04:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:06.624 04:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:06.882 04:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:06.882 "name": "Existed_Raid", 00:24:06.882 "uuid": "7b4a4f30-92bb-44f0-a7a3-9ed033fc85ae", 00:24:06.882 "strip_size_kb": 64, 00:24:06.882 "state": "configuring", 00:24:06.882 "raid_level": "concat", 
00:24:06.882 "superblock": true, 00:24:06.882 "num_base_bdevs": 4, 00:24:06.882 "num_base_bdevs_discovered": 1, 00:24:06.882 "num_base_bdevs_operational": 4, 00:24:06.882 "base_bdevs_list": [ 00:24:06.882 { 00:24:06.882 "name": "BaseBdev1", 00:24:06.882 "uuid": "f992a844-5f15-4478-aa7b-b01b04e08a2b", 00:24:06.882 "is_configured": true, 00:24:06.882 "data_offset": 2048, 00:24:06.882 "data_size": 63488 00:24:06.882 }, 00:24:06.882 { 00:24:06.882 "name": "BaseBdev2", 00:24:06.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:06.882 "is_configured": false, 00:24:06.882 "data_offset": 0, 00:24:06.882 "data_size": 0 00:24:06.882 }, 00:24:06.882 { 00:24:06.882 "name": "BaseBdev3", 00:24:06.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:06.882 "is_configured": false, 00:24:06.882 "data_offset": 0, 00:24:06.882 "data_size": 0 00:24:06.882 }, 00:24:06.882 { 00:24:06.882 "name": "BaseBdev4", 00:24:06.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:06.882 "is_configured": false, 00:24:06.882 "data_offset": 0, 00:24:06.882 "data_size": 0 00:24:06.882 } 00:24:06.882 ] 00:24:06.882 }' 00:24:06.882 04:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:06.882 04:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:07.448 04:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:07.706 [2024-07-23 04:19:16.258488] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:07.706 BaseBdev2 00:24:07.706 04:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:24:07.706 04:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:24:07.706 04:19:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:07.706 04:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:24:07.706 04:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:07.706 04:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:07.706 04:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:07.706 04:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:07.964 [ 00:24:07.964 { 00:24:07.964 "name": "BaseBdev2", 00:24:07.964 "aliases": [ 00:24:07.964 "b64ab86f-b7a6-4804-abf4-9e57ec98ff18" 00:24:07.964 ], 00:24:07.964 "product_name": "Malloc disk", 00:24:07.964 "block_size": 512, 00:24:07.964 "num_blocks": 65536, 00:24:07.964 "uuid": "b64ab86f-b7a6-4804-abf4-9e57ec98ff18", 00:24:07.964 "assigned_rate_limits": { 00:24:07.964 "rw_ios_per_sec": 0, 00:24:07.964 "rw_mbytes_per_sec": 0, 00:24:07.964 "r_mbytes_per_sec": 0, 00:24:07.964 "w_mbytes_per_sec": 0 00:24:07.964 }, 00:24:07.964 "claimed": true, 00:24:07.964 "claim_type": "exclusive_write", 00:24:07.964 "zoned": false, 00:24:07.964 "supported_io_types": { 00:24:07.964 "read": true, 00:24:07.964 "write": true, 00:24:07.964 "unmap": true, 00:24:07.964 "flush": true, 00:24:07.964 "reset": true, 00:24:07.964 "nvme_admin": false, 00:24:07.964 "nvme_io": false, 00:24:07.964 "nvme_io_md": false, 00:24:07.964 "write_zeroes": true, 00:24:07.964 "zcopy": true, 00:24:07.964 "get_zone_info": false, 00:24:07.964 "zone_management": false, 00:24:07.964 "zone_append": false, 00:24:07.964 "compare": false, 00:24:07.964 "compare_and_write": false, 00:24:07.964 "abort": 
true, 00:24:07.964 "seek_hole": false, 00:24:07.964 "seek_data": false, 00:24:07.964 "copy": true, 00:24:07.964 "nvme_iov_md": false 00:24:07.964 }, 00:24:07.964 "memory_domains": [ 00:24:07.964 { 00:24:07.964 "dma_device_id": "system", 00:24:07.964 "dma_device_type": 1 00:24:07.964 }, 00:24:07.964 { 00:24:07.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:07.964 "dma_device_type": 2 00:24:07.964 } 00:24:07.964 ], 00:24:07.964 "driver_specific": {} 00:24:07.964 } 00:24:07.964 ] 00:24:07.964 04:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:24:07.964 04:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:24:07.964 04:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:07.964 04:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:07.964 04:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:07.964 04:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:07.964 04:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:07.964 04:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:07.964 04:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:07.964 04:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:07.964 04:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:07.964 04:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:07.964 04:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 
00:24:07.964 04:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:07.965 04:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:08.222 04:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:08.222 "name": "Existed_Raid", 00:24:08.222 "uuid": "7b4a4f30-92bb-44f0-a7a3-9ed033fc85ae", 00:24:08.222 "strip_size_kb": 64, 00:24:08.222 "state": "configuring", 00:24:08.222 "raid_level": "concat", 00:24:08.222 "superblock": true, 00:24:08.222 "num_base_bdevs": 4, 00:24:08.222 "num_base_bdevs_discovered": 2, 00:24:08.222 "num_base_bdevs_operational": 4, 00:24:08.222 "base_bdevs_list": [ 00:24:08.222 { 00:24:08.222 "name": "BaseBdev1", 00:24:08.222 "uuid": "f992a844-5f15-4478-aa7b-b01b04e08a2b", 00:24:08.222 "is_configured": true, 00:24:08.223 "data_offset": 2048, 00:24:08.223 "data_size": 63488 00:24:08.223 }, 00:24:08.223 { 00:24:08.223 "name": "BaseBdev2", 00:24:08.223 "uuid": "b64ab86f-b7a6-4804-abf4-9e57ec98ff18", 00:24:08.223 "is_configured": true, 00:24:08.223 "data_offset": 2048, 00:24:08.223 "data_size": 63488 00:24:08.223 }, 00:24:08.223 { 00:24:08.223 "name": "BaseBdev3", 00:24:08.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:08.223 "is_configured": false, 00:24:08.223 "data_offset": 0, 00:24:08.223 "data_size": 0 00:24:08.223 }, 00:24:08.223 { 00:24:08.223 "name": "BaseBdev4", 00:24:08.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:08.223 "is_configured": false, 00:24:08.223 "data_offset": 0, 00:24:08.223 "data_size": 0 00:24:08.223 } 00:24:08.223 ] 00:24:08.223 }' 00:24:08.223 04:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:08.223 04:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:08.787 
04:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:09.045 [2024-07-23 04:19:17.637607] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:09.045 BaseBdev3 00:24:09.045 04:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:24:09.045 04:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:24:09.045 04:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:09.045 04:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:24:09.045 04:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:09.045 04:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:09.045 04:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:09.302 04:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:09.561 [ 00:24:09.561 { 00:24:09.561 "name": "BaseBdev3", 00:24:09.561 "aliases": [ 00:24:09.561 "fadf8a32-a5e8-4a3d-bb3b-019c2af8184e" 00:24:09.561 ], 00:24:09.561 "product_name": "Malloc disk", 00:24:09.561 "block_size": 512, 00:24:09.561 "num_blocks": 65536, 00:24:09.561 "uuid": "fadf8a32-a5e8-4a3d-bb3b-019c2af8184e", 00:24:09.562 "assigned_rate_limits": { 00:24:09.562 "rw_ios_per_sec": 0, 00:24:09.562 "rw_mbytes_per_sec": 0, 00:24:09.562 "r_mbytes_per_sec": 0, 00:24:09.562 "w_mbytes_per_sec": 0 00:24:09.562 }, 
00:24:09.562 "claimed": true, 00:24:09.562 "claim_type": "exclusive_write", 00:24:09.562 "zoned": false, 00:24:09.562 "supported_io_types": { 00:24:09.562 "read": true, 00:24:09.562 "write": true, 00:24:09.562 "unmap": true, 00:24:09.562 "flush": true, 00:24:09.562 "reset": true, 00:24:09.562 "nvme_admin": false, 00:24:09.562 "nvme_io": false, 00:24:09.562 "nvme_io_md": false, 00:24:09.562 "write_zeroes": true, 00:24:09.562 "zcopy": true, 00:24:09.562 "get_zone_info": false, 00:24:09.562 "zone_management": false, 00:24:09.562 "zone_append": false, 00:24:09.562 "compare": false, 00:24:09.562 "compare_and_write": false, 00:24:09.562 "abort": true, 00:24:09.562 "seek_hole": false, 00:24:09.562 "seek_data": false, 00:24:09.562 "copy": true, 00:24:09.562 "nvme_iov_md": false 00:24:09.562 }, 00:24:09.562 "memory_domains": [ 00:24:09.562 { 00:24:09.562 "dma_device_id": "system", 00:24:09.562 "dma_device_type": 1 00:24:09.562 }, 00:24:09.562 { 00:24:09.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:09.562 "dma_device_type": 2 00:24:09.562 } 00:24:09.562 ], 00:24:09.562 "driver_specific": {} 00:24:09.562 } 00:24:09.562 ] 00:24:09.562 04:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:24:09.562 04:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:24:09.562 04:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:09.562 04:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:09.562 04:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:09.562 04:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:09.562 04:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:09.562 04:19:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:09.562 04:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:09.562 04:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:09.562 04:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:09.562 04:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:09.562 04:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:09.562 04:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:09.562 04:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:09.562 04:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:09.562 "name": "Existed_Raid", 00:24:09.562 "uuid": "7b4a4f30-92bb-44f0-a7a3-9ed033fc85ae", 00:24:09.562 "strip_size_kb": 64, 00:24:09.562 "state": "configuring", 00:24:09.562 "raid_level": "concat", 00:24:09.562 "superblock": true, 00:24:09.562 "num_base_bdevs": 4, 00:24:09.562 "num_base_bdevs_discovered": 3, 00:24:09.562 "num_base_bdevs_operational": 4, 00:24:09.562 "base_bdevs_list": [ 00:24:09.562 { 00:24:09.562 "name": "BaseBdev1", 00:24:09.562 "uuid": "f992a844-5f15-4478-aa7b-b01b04e08a2b", 00:24:09.562 "is_configured": true, 00:24:09.562 "data_offset": 2048, 00:24:09.562 "data_size": 63488 00:24:09.562 }, 00:24:09.562 { 00:24:09.562 "name": "BaseBdev2", 00:24:09.562 "uuid": "b64ab86f-b7a6-4804-abf4-9e57ec98ff18", 00:24:09.562 "is_configured": true, 00:24:09.562 "data_offset": 2048, 00:24:09.562 "data_size": 63488 00:24:09.562 }, 00:24:09.562 { 00:24:09.562 "name": 
"BaseBdev3", 00:24:09.562 "uuid": "fadf8a32-a5e8-4a3d-bb3b-019c2af8184e", 00:24:09.562 "is_configured": true, 00:24:09.562 "data_offset": 2048, 00:24:09.562 "data_size": 63488 00:24:09.562 }, 00:24:09.562 { 00:24:09.562 "name": "BaseBdev4", 00:24:09.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:09.562 "is_configured": false, 00:24:09.562 "data_offset": 0, 00:24:09.562 "data_size": 0 00:24:09.562 } 00:24:09.562 ] 00:24:09.562 }' 00:24:09.562 04:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:09.562 04:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:10.128 04:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:10.386 [2024-07-23 04:19:19.118767] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:10.386 [2024-07-23 04:19:19.119050] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:24:10.386 [2024-07-23 04:19:19.119074] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:10.386 [2024-07-23 04:19:19.119414] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:24:10.386 [2024-07-23 04:19:19.119659] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:24:10.386 [2024-07-23 04:19:19.119677] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:24:10.386 [2024-07-23 04:19:19.119848] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:10.386 BaseBdev4 00:24:10.386 04:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:24:10.386 04:19:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:24:10.386 04:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:10.386 04:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:24:10.386 04:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:10.386 04:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:10.386 04:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:10.680 04:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:10.938 [ 00:24:10.938 { 00:24:10.938 "name": "BaseBdev4", 00:24:10.938 "aliases": [ 00:24:10.938 "bd7bb44c-e5d4-4fd6-84bb-d4fcb73943e2" 00:24:10.938 ], 00:24:10.938 "product_name": "Malloc disk", 00:24:10.938 "block_size": 512, 00:24:10.938 "num_blocks": 65536, 00:24:10.938 "uuid": "bd7bb44c-e5d4-4fd6-84bb-d4fcb73943e2", 00:24:10.938 "assigned_rate_limits": { 00:24:10.938 "rw_ios_per_sec": 0, 00:24:10.938 "rw_mbytes_per_sec": 0, 00:24:10.938 "r_mbytes_per_sec": 0, 00:24:10.938 "w_mbytes_per_sec": 0 00:24:10.938 }, 00:24:10.938 "claimed": true, 00:24:10.938 "claim_type": "exclusive_write", 00:24:10.938 "zoned": false, 00:24:10.938 "supported_io_types": { 00:24:10.938 "read": true, 00:24:10.938 "write": true, 00:24:10.938 "unmap": true, 00:24:10.938 "flush": true, 00:24:10.938 "reset": true, 00:24:10.938 "nvme_admin": false, 00:24:10.938 "nvme_io": false, 00:24:10.938 "nvme_io_md": false, 00:24:10.938 "write_zeroes": true, 00:24:10.938 "zcopy": true, 00:24:10.938 "get_zone_info": false, 00:24:10.938 "zone_management": false, 
00:24:10.938 "zone_append": false, 00:24:10.938 "compare": false, 00:24:10.938 "compare_and_write": false, 00:24:10.938 "abort": true, 00:24:10.938 "seek_hole": false, 00:24:10.938 "seek_data": false, 00:24:10.938 "copy": true, 00:24:10.938 "nvme_iov_md": false 00:24:10.938 }, 00:24:10.938 "memory_domains": [ 00:24:10.938 { 00:24:10.938 "dma_device_id": "system", 00:24:10.938 "dma_device_type": 1 00:24:10.938 }, 00:24:10.938 { 00:24:10.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:10.938 "dma_device_type": 2 00:24:10.938 } 00:24:10.938 ], 00:24:10.938 "driver_specific": {} 00:24:10.938 } 00:24:10.938 ] 00:24:10.938 04:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:24:10.938 04:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:24:10.938 04:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:10.938 04:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:24:10.938 04:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:10.938 04:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:10.938 04:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:10.938 04:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:10.938 04:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:10.938 04:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:10.938 04:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:10.938 04:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:24:10.938 04:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:10.938 04:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:10.938 04:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:11.196 04:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:11.196 "name": "Existed_Raid", 00:24:11.196 "uuid": "7b4a4f30-92bb-44f0-a7a3-9ed033fc85ae", 00:24:11.196 "strip_size_kb": 64, 00:24:11.196 "state": "online", 00:24:11.196 "raid_level": "concat", 00:24:11.196 "superblock": true, 00:24:11.196 "num_base_bdevs": 4, 00:24:11.196 "num_base_bdevs_discovered": 4, 00:24:11.196 "num_base_bdevs_operational": 4, 00:24:11.196 "base_bdevs_list": [ 00:24:11.196 { 00:24:11.196 "name": "BaseBdev1", 00:24:11.196 "uuid": "f992a844-5f15-4478-aa7b-b01b04e08a2b", 00:24:11.196 "is_configured": true, 00:24:11.196 "data_offset": 2048, 00:24:11.196 "data_size": 63488 00:24:11.196 }, 00:24:11.196 { 00:24:11.196 "name": "BaseBdev2", 00:24:11.196 "uuid": "b64ab86f-b7a6-4804-abf4-9e57ec98ff18", 00:24:11.196 "is_configured": true, 00:24:11.196 "data_offset": 2048, 00:24:11.196 "data_size": 63488 00:24:11.196 }, 00:24:11.196 { 00:24:11.196 "name": "BaseBdev3", 00:24:11.196 "uuid": "fadf8a32-a5e8-4a3d-bb3b-019c2af8184e", 00:24:11.196 "is_configured": true, 00:24:11.196 "data_offset": 2048, 00:24:11.196 "data_size": 63488 00:24:11.196 }, 00:24:11.196 { 00:24:11.196 "name": "BaseBdev4", 00:24:11.196 "uuid": "bd7bb44c-e5d4-4fd6-84bb-d4fcb73943e2", 00:24:11.196 "is_configured": true, 00:24:11.196 "data_offset": 2048, 00:24:11.196 "data_size": 63488 00:24:11.196 } 00:24:11.196 ] 00:24:11.196 }' 00:24:11.196 04:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:24:11.196 04:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:11.762 04:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:24:11.762 04:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:24:11.762 04:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:11.762 04:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:11.762 04:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:11.762 04:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:24:11.762 04:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:24:11.762 04:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:12.021 [2024-07-23 04:19:20.607299] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:12.021 04:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:12.021 "name": "Existed_Raid", 00:24:12.021 "aliases": [ 00:24:12.021 "7b4a4f30-92bb-44f0-a7a3-9ed033fc85ae" 00:24:12.021 ], 00:24:12.021 "product_name": "Raid Volume", 00:24:12.021 "block_size": 512, 00:24:12.021 "num_blocks": 253952, 00:24:12.021 "uuid": "7b4a4f30-92bb-44f0-a7a3-9ed033fc85ae", 00:24:12.021 "assigned_rate_limits": { 00:24:12.021 "rw_ios_per_sec": 0, 00:24:12.021 "rw_mbytes_per_sec": 0, 00:24:12.021 "r_mbytes_per_sec": 0, 00:24:12.021 "w_mbytes_per_sec": 0 00:24:12.021 }, 00:24:12.021 "claimed": false, 00:24:12.021 "zoned": false, 00:24:12.021 "supported_io_types": { 00:24:12.021 "read": true, 00:24:12.021 "write": true, 
00:24:12.021 "unmap": true, 00:24:12.021 "flush": true, 00:24:12.021 "reset": true, 00:24:12.021 "nvme_admin": false, 00:24:12.021 "nvme_io": false, 00:24:12.021 "nvme_io_md": false, 00:24:12.021 "write_zeroes": true, 00:24:12.021 "zcopy": false, 00:24:12.021 "get_zone_info": false, 00:24:12.021 "zone_management": false, 00:24:12.021 "zone_append": false, 00:24:12.021 "compare": false, 00:24:12.021 "compare_and_write": false, 00:24:12.021 "abort": false, 00:24:12.021 "seek_hole": false, 00:24:12.021 "seek_data": false, 00:24:12.021 "copy": false, 00:24:12.021 "nvme_iov_md": false 00:24:12.021 }, 00:24:12.021 "memory_domains": [ 00:24:12.021 { 00:24:12.021 "dma_device_id": "system", 00:24:12.021 "dma_device_type": 1 00:24:12.021 }, 00:24:12.021 { 00:24:12.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:12.021 "dma_device_type": 2 00:24:12.021 }, 00:24:12.021 { 00:24:12.021 "dma_device_id": "system", 00:24:12.021 "dma_device_type": 1 00:24:12.021 }, 00:24:12.021 { 00:24:12.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:12.021 "dma_device_type": 2 00:24:12.021 }, 00:24:12.021 { 00:24:12.021 "dma_device_id": "system", 00:24:12.021 "dma_device_type": 1 00:24:12.021 }, 00:24:12.021 { 00:24:12.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:12.021 "dma_device_type": 2 00:24:12.021 }, 00:24:12.021 { 00:24:12.021 "dma_device_id": "system", 00:24:12.021 "dma_device_type": 1 00:24:12.021 }, 00:24:12.021 { 00:24:12.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:12.021 "dma_device_type": 2 00:24:12.021 } 00:24:12.021 ], 00:24:12.021 "driver_specific": { 00:24:12.021 "raid": { 00:24:12.021 "uuid": "7b4a4f30-92bb-44f0-a7a3-9ed033fc85ae", 00:24:12.021 "strip_size_kb": 64, 00:24:12.021 "state": "online", 00:24:12.021 "raid_level": "concat", 00:24:12.021 "superblock": true, 00:24:12.021 "num_base_bdevs": 4, 00:24:12.021 "num_base_bdevs_discovered": 4, 00:24:12.021 "num_base_bdevs_operational": 4, 00:24:12.021 "base_bdevs_list": [ 00:24:12.021 { 00:24:12.021 "name": 
"BaseBdev1", 00:24:12.021 "uuid": "f992a844-5f15-4478-aa7b-b01b04e08a2b", 00:24:12.021 "is_configured": true, 00:24:12.021 "data_offset": 2048, 00:24:12.021 "data_size": 63488 00:24:12.021 }, 00:24:12.021 { 00:24:12.021 "name": "BaseBdev2", 00:24:12.021 "uuid": "b64ab86f-b7a6-4804-abf4-9e57ec98ff18", 00:24:12.021 "is_configured": true, 00:24:12.021 "data_offset": 2048, 00:24:12.021 "data_size": 63488 00:24:12.021 }, 00:24:12.021 { 00:24:12.021 "name": "BaseBdev3", 00:24:12.021 "uuid": "fadf8a32-a5e8-4a3d-bb3b-019c2af8184e", 00:24:12.021 "is_configured": true, 00:24:12.021 "data_offset": 2048, 00:24:12.021 "data_size": 63488 00:24:12.021 }, 00:24:12.021 { 00:24:12.021 "name": "BaseBdev4", 00:24:12.021 "uuid": "bd7bb44c-e5d4-4fd6-84bb-d4fcb73943e2", 00:24:12.021 "is_configured": true, 00:24:12.021 "data_offset": 2048, 00:24:12.021 "data_size": 63488 00:24:12.021 } 00:24:12.021 ] 00:24:12.021 } 00:24:12.021 } 00:24:12.021 }' 00:24:12.021 04:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:12.021 04:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:24:12.021 BaseBdev2 00:24:12.021 BaseBdev3 00:24:12.021 BaseBdev4' 00:24:12.021 04:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:12.021 04:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:24:12.021 04:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:12.279 04:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:12.279 "name": "BaseBdev1", 00:24:12.279 "aliases": [ 00:24:12.279 "f992a844-5f15-4478-aa7b-b01b04e08a2b" 00:24:12.279 ], 00:24:12.279 "product_name": "Malloc 
disk", 00:24:12.279 "block_size": 512, 00:24:12.279 "num_blocks": 65536, 00:24:12.279 "uuid": "f992a844-5f15-4478-aa7b-b01b04e08a2b", 00:24:12.279 "assigned_rate_limits": { 00:24:12.279 "rw_ios_per_sec": 0, 00:24:12.279 "rw_mbytes_per_sec": 0, 00:24:12.279 "r_mbytes_per_sec": 0, 00:24:12.279 "w_mbytes_per_sec": 0 00:24:12.279 }, 00:24:12.279 "claimed": true, 00:24:12.279 "claim_type": "exclusive_write", 00:24:12.279 "zoned": false, 00:24:12.279 "supported_io_types": { 00:24:12.279 "read": true, 00:24:12.279 "write": true, 00:24:12.279 "unmap": true, 00:24:12.279 "flush": true, 00:24:12.279 "reset": true, 00:24:12.279 "nvme_admin": false, 00:24:12.279 "nvme_io": false, 00:24:12.279 "nvme_io_md": false, 00:24:12.279 "write_zeroes": true, 00:24:12.279 "zcopy": true, 00:24:12.279 "get_zone_info": false, 00:24:12.279 "zone_management": false, 00:24:12.279 "zone_append": false, 00:24:12.279 "compare": false, 00:24:12.279 "compare_and_write": false, 00:24:12.279 "abort": true, 00:24:12.279 "seek_hole": false, 00:24:12.279 "seek_data": false, 00:24:12.279 "copy": true, 00:24:12.279 "nvme_iov_md": false 00:24:12.279 }, 00:24:12.279 "memory_domains": [ 00:24:12.279 { 00:24:12.279 "dma_device_id": "system", 00:24:12.279 "dma_device_type": 1 00:24:12.279 }, 00:24:12.279 { 00:24:12.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:12.279 "dma_device_type": 2 00:24:12.279 } 00:24:12.279 ], 00:24:12.279 "driver_specific": {} 00:24:12.279 }' 00:24:12.279 04:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:12.279 04:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:12.279 04:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:12.279 04:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:12.279 04:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:12.537 04:19:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:12.537 04:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:12.537 04:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:12.537 04:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:12.537 04:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:12.537 04:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:12.537 04:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:12.537 04:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:12.537 04:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:24:12.537 04:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:12.795 04:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:12.795 "name": "BaseBdev2", 00:24:12.795 "aliases": [ 00:24:12.795 "b64ab86f-b7a6-4804-abf4-9e57ec98ff18" 00:24:12.795 ], 00:24:12.795 "product_name": "Malloc disk", 00:24:12.795 "block_size": 512, 00:24:12.795 "num_blocks": 65536, 00:24:12.795 "uuid": "b64ab86f-b7a6-4804-abf4-9e57ec98ff18", 00:24:12.795 "assigned_rate_limits": { 00:24:12.795 "rw_ios_per_sec": 0, 00:24:12.795 "rw_mbytes_per_sec": 0, 00:24:12.795 "r_mbytes_per_sec": 0, 00:24:12.795 "w_mbytes_per_sec": 0 00:24:12.795 }, 00:24:12.795 "claimed": true, 00:24:12.795 "claim_type": "exclusive_write", 00:24:12.795 "zoned": false, 00:24:12.795 "supported_io_types": { 00:24:12.795 "read": true, 00:24:12.795 "write": true, 00:24:12.795 "unmap": true, 00:24:12.795 
"flush": true, 00:24:12.795 "reset": true, 00:24:12.795 "nvme_admin": false, 00:24:12.795 "nvme_io": false, 00:24:12.795 "nvme_io_md": false, 00:24:12.795 "write_zeroes": true, 00:24:12.795 "zcopy": true, 00:24:12.795 "get_zone_info": false, 00:24:12.795 "zone_management": false, 00:24:12.795 "zone_append": false, 00:24:12.795 "compare": false, 00:24:12.795 "compare_and_write": false, 00:24:12.795 "abort": true, 00:24:12.795 "seek_hole": false, 00:24:12.795 "seek_data": false, 00:24:12.795 "copy": true, 00:24:12.795 "nvme_iov_md": false 00:24:12.795 }, 00:24:12.795 "memory_domains": [ 00:24:12.795 { 00:24:12.795 "dma_device_id": "system", 00:24:12.795 "dma_device_type": 1 00:24:12.795 }, 00:24:12.795 { 00:24:12.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:12.795 "dma_device_type": 2 00:24:12.795 } 00:24:12.795 ], 00:24:12.795 "driver_specific": {} 00:24:12.795 }' 00:24:12.795 04:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:12.795 04:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:12.795 04:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:12.795 04:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:13.053 04:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:13.053 04:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:13.053 04:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:13.053 04:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:13.053 04:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:13.053 04:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:13.053 04:19:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:13.053 04:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:13.053 04:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:13.053 04:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:24:13.053 04:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:13.311 04:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:13.311 "name": "BaseBdev3", 00:24:13.311 "aliases": [ 00:24:13.311 "fadf8a32-a5e8-4a3d-bb3b-019c2af8184e" 00:24:13.311 ], 00:24:13.311 "product_name": "Malloc disk", 00:24:13.311 "block_size": 512, 00:24:13.311 "num_blocks": 65536, 00:24:13.311 "uuid": "fadf8a32-a5e8-4a3d-bb3b-019c2af8184e", 00:24:13.311 "assigned_rate_limits": { 00:24:13.311 "rw_ios_per_sec": 0, 00:24:13.311 "rw_mbytes_per_sec": 0, 00:24:13.311 "r_mbytes_per_sec": 0, 00:24:13.311 "w_mbytes_per_sec": 0 00:24:13.311 }, 00:24:13.311 "claimed": true, 00:24:13.311 "claim_type": "exclusive_write", 00:24:13.311 "zoned": false, 00:24:13.311 "supported_io_types": { 00:24:13.311 "read": true, 00:24:13.311 "write": true, 00:24:13.311 "unmap": true, 00:24:13.311 "flush": true, 00:24:13.311 "reset": true, 00:24:13.311 "nvme_admin": false, 00:24:13.311 "nvme_io": false, 00:24:13.311 "nvme_io_md": false, 00:24:13.311 "write_zeroes": true, 00:24:13.311 "zcopy": true, 00:24:13.311 "get_zone_info": false, 00:24:13.311 "zone_management": false, 00:24:13.311 "zone_append": false, 00:24:13.311 "compare": false, 00:24:13.311 "compare_and_write": false, 00:24:13.311 "abort": true, 00:24:13.311 "seek_hole": false, 00:24:13.311 "seek_data": false, 00:24:13.311 "copy": true, 00:24:13.311 "nvme_iov_md": 
false 00:24:13.311 }, 00:24:13.311 "memory_domains": [ 00:24:13.311 { 00:24:13.311 "dma_device_id": "system", 00:24:13.311 "dma_device_type": 1 00:24:13.311 }, 00:24:13.311 { 00:24:13.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:13.311 "dma_device_type": 2 00:24:13.311 } 00:24:13.311 ], 00:24:13.311 "driver_specific": {} 00:24:13.311 }' 00:24:13.311 04:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:13.311 04:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:13.568 04:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:13.568 04:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:13.568 04:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:13.568 04:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:13.568 04:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:13.568 04:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:13.568 04:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:13.568 04:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:13.568 04:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:13.826 04:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:13.826 04:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:13.826 04:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:24:13.826 04:19:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:13.826 04:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:13.826 "name": "BaseBdev4", 00:24:13.826 "aliases": [ 00:24:13.826 "bd7bb44c-e5d4-4fd6-84bb-d4fcb73943e2" 00:24:13.826 ], 00:24:13.826 "product_name": "Malloc disk", 00:24:13.826 "block_size": 512, 00:24:13.826 "num_blocks": 65536, 00:24:13.826 "uuid": "bd7bb44c-e5d4-4fd6-84bb-d4fcb73943e2", 00:24:13.826 "assigned_rate_limits": { 00:24:13.826 "rw_ios_per_sec": 0, 00:24:13.826 "rw_mbytes_per_sec": 0, 00:24:13.826 "r_mbytes_per_sec": 0, 00:24:13.826 "w_mbytes_per_sec": 0 00:24:13.826 }, 00:24:13.826 "claimed": true, 00:24:13.826 "claim_type": "exclusive_write", 00:24:13.826 "zoned": false, 00:24:13.826 "supported_io_types": { 00:24:13.826 "read": true, 00:24:13.826 "write": true, 00:24:13.826 "unmap": true, 00:24:13.826 "flush": true, 00:24:13.826 "reset": true, 00:24:13.826 "nvme_admin": false, 00:24:13.826 "nvme_io": false, 00:24:13.826 "nvme_io_md": false, 00:24:13.826 "write_zeroes": true, 00:24:13.826 "zcopy": true, 00:24:13.826 "get_zone_info": false, 00:24:13.826 "zone_management": false, 00:24:13.826 "zone_append": false, 00:24:13.826 "compare": false, 00:24:13.826 "compare_and_write": false, 00:24:13.826 "abort": true, 00:24:13.826 "seek_hole": false, 00:24:13.826 "seek_data": false, 00:24:13.826 "copy": true, 00:24:13.826 "nvme_iov_md": false 00:24:13.826 }, 00:24:13.826 "memory_domains": [ 00:24:13.826 { 00:24:13.826 "dma_device_id": "system", 00:24:13.826 "dma_device_type": 1 00:24:13.826 }, 00:24:13.826 { 00:24:13.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:13.826 "dma_device_type": 2 00:24:13.826 } 00:24:13.826 ], 00:24:13.826 "driver_specific": {} 00:24:13.826 }' 00:24:13.826 04:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:14.084 04:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:24:14.084 04:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:14.084 04:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:14.084 04:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:14.084 04:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:14.084 04:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:14.084 04:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:14.084 04:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:14.084 04:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:14.084 04:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:14.340 04:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:14.340 04:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:14.340 [2024-07-23 04:19:23.077648] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:14.340 [2024-07-23 04:19:23.077683] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:14.340 [2024-07-23 04:19:23.077740] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:14.598 04:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:24:14.598 04:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:24:14.598 04:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:24:14.598 04:19:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:24:14.598 04:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:24:14.598 04:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:24:14.598 04:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:14.598 04:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:24:14.598 04:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:14.598 04:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:14.598 04:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:14.598 04:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:14.598 04:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:14.598 04:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:14.598 04:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:14.598 04:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:14.598 04:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:14.598 04:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:14.598 "name": "Existed_Raid", 00:24:14.598 "uuid": "7b4a4f30-92bb-44f0-a7a3-9ed033fc85ae", 00:24:14.598 "strip_size_kb": 64, 00:24:14.598 "state": "offline", 00:24:14.598 
"raid_level": "concat", 00:24:14.598 "superblock": true, 00:24:14.598 "num_base_bdevs": 4, 00:24:14.598 "num_base_bdevs_discovered": 3, 00:24:14.598 "num_base_bdevs_operational": 3, 00:24:14.598 "base_bdevs_list": [ 00:24:14.598 { 00:24:14.598 "name": null, 00:24:14.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:14.598 "is_configured": false, 00:24:14.598 "data_offset": 2048, 00:24:14.598 "data_size": 63488 00:24:14.598 }, 00:24:14.598 { 00:24:14.598 "name": "BaseBdev2", 00:24:14.598 "uuid": "b64ab86f-b7a6-4804-abf4-9e57ec98ff18", 00:24:14.598 "is_configured": true, 00:24:14.598 "data_offset": 2048, 00:24:14.598 "data_size": 63488 00:24:14.598 }, 00:24:14.598 { 00:24:14.598 "name": "BaseBdev3", 00:24:14.598 "uuid": "fadf8a32-a5e8-4a3d-bb3b-019c2af8184e", 00:24:14.598 "is_configured": true, 00:24:14.598 "data_offset": 2048, 00:24:14.598 "data_size": 63488 00:24:14.598 }, 00:24:14.598 { 00:24:14.598 "name": "BaseBdev4", 00:24:14.598 "uuid": "bd7bb44c-e5d4-4fd6-84bb-d4fcb73943e2", 00:24:14.598 "is_configured": true, 00:24:14.598 "data_offset": 2048, 00:24:14.598 "data_size": 63488 00:24:14.598 } 00:24:14.598 ] 00:24:14.598 }' 00:24:14.598 04:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:14.598 04:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:15.162 04:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:24:15.162 04:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:15.162 04:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:15.162 04:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:15.420 04:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # 
raid_bdev=Existed_Raid 00:24:15.420 04:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:15.420 04:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:24:15.678 [2024-07-23 04:19:24.252799] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:15.678 04:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:15.678 04:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:15.678 04:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:15.678 04:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:15.936 04:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:15.936 04:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:15.936 04:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:24:16.194 [2024-07-23 04:19:24.845071] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:16.452 04:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:16.452 04:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:16.452 04:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:16.452 04:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:16.452 04:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:16.452 04:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:16.452 04:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:24:16.710 [2024-07-23 04:19:25.432739] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:24:16.710 [2024-07-23 04:19:25.432796] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:24:16.968 04:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:16.968 04:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:16.968 04:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:16.968 04:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:24:17.227 04:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:24:17.227 04:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:24:17.227 04:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:24:17.227 04:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:24:17.227 04:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:17.227 04:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:17.485 BaseBdev2 00:24:17.485 04:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:24:17.485 04:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:24:17.485 04:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:17.485 04:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:24:17.485 04:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:17.485 04:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:17.485 04:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:17.743 04:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:17.743 [ 00:24:17.743 { 00:24:17.743 "name": "BaseBdev2", 00:24:17.743 "aliases": [ 00:24:17.743 "7b27a0e0-3f44-4567-bf81-f420e20a142e" 00:24:17.743 ], 00:24:17.743 "product_name": "Malloc disk", 00:24:17.743 "block_size": 512, 00:24:17.743 "num_blocks": 65536, 00:24:17.743 "uuid": "7b27a0e0-3f44-4567-bf81-f420e20a142e", 00:24:17.743 "assigned_rate_limits": { 00:24:17.743 "rw_ios_per_sec": 0, 00:24:17.743 "rw_mbytes_per_sec": 0, 00:24:17.743 "r_mbytes_per_sec": 0, 00:24:17.743 "w_mbytes_per_sec": 0 00:24:17.743 }, 00:24:17.743 "claimed": false, 00:24:17.743 "zoned": false, 00:24:17.743 "supported_io_types": { 00:24:17.743 "read": true, 00:24:17.743 "write": true, 00:24:17.743 "unmap": true, 00:24:17.743 "flush": 
true, 00:24:17.743 "reset": true, 00:24:17.743 "nvme_admin": false, 00:24:17.743 "nvme_io": false, 00:24:17.744 "nvme_io_md": false, 00:24:17.744 "write_zeroes": true, 00:24:17.744 "zcopy": true, 00:24:17.744 "get_zone_info": false, 00:24:17.744 "zone_management": false, 00:24:17.744 "zone_append": false, 00:24:17.744 "compare": false, 00:24:17.744 "compare_and_write": false, 00:24:17.744 "abort": true, 00:24:17.744 "seek_hole": false, 00:24:17.744 "seek_data": false, 00:24:17.744 "copy": true, 00:24:17.744 "nvme_iov_md": false 00:24:17.744 }, 00:24:17.744 "memory_domains": [ 00:24:17.744 { 00:24:17.744 "dma_device_id": "system", 00:24:17.744 "dma_device_type": 1 00:24:17.744 }, 00:24:17.744 { 00:24:17.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:17.744 "dma_device_type": 2 00:24:17.744 } 00:24:17.744 ], 00:24:17.744 "driver_specific": {} 00:24:17.744 } 00:24:17.744 ] 00:24:18.002 04:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:24:18.002 04:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:18.002 04:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:18.002 04:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:18.260 BaseBdev3 00:24:18.260 04:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:24:18.260 04:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:24:18.260 04:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:18.260 04:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:24:18.260 04:19:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:18.260 04:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:18.260 04:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:18.260 04:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:18.519 [ 00:24:18.519 { 00:24:18.519 "name": "BaseBdev3", 00:24:18.519 "aliases": [ 00:24:18.519 "3c4f317c-0521-41f4-b1c4-308796955fb0" 00:24:18.519 ], 00:24:18.519 "product_name": "Malloc disk", 00:24:18.519 "block_size": 512, 00:24:18.519 "num_blocks": 65536, 00:24:18.519 "uuid": "3c4f317c-0521-41f4-b1c4-308796955fb0", 00:24:18.519 "assigned_rate_limits": { 00:24:18.519 "rw_ios_per_sec": 0, 00:24:18.519 "rw_mbytes_per_sec": 0, 00:24:18.519 "r_mbytes_per_sec": 0, 00:24:18.519 "w_mbytes_per_sec": 0 00:24:18.519 }, 00:24:18.519 "claimed": false, 00:24:18.519 "zoned": false, 00:24:18.519 "supported_io_types": { 00:24:18.519 "read": true, 00:24:18.519 "write": true, 00:24:18.519 "unmap": true, 00:24:18.519 "flush": true, 00:24:18.519 "reset": true, 00:24:18.519 "nvme_admin": false, 00:24:18.519 "nvme_io": false, 00:24:18.519 "nvme_io_md": false, 00:24:18.519 "write_zeroes": true, 00:24:18.519 "zcopy": true, 00:24:18.519 "get_zone_info": false, 00:24:18.519 "zone_management": false, 00:24:18.519 "zone_append": false, 00:24:18.519 "compare": false, 00:24:18.519 "compare_and_write": false, 00:24:18.519 "abort": true, 00:24:18.519 "seek_hole": false, 00:24:18.519 "seek_data": false, 00:24:18.519 "copy": true, 00:24:18.519 "nvme_iov_md": false 00:24:18.519 }, 00:24:18.519 "memory_domains": [ 00:24:18.519 { 00:24:18.519 "dma_device_id": "system", 00:24:18.519 "dma_device_type": 1 
00:24:18.519 }, 00:24:18.519 { 00:24:18.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:18.519 "dma_device_type": 2 00:24:18.519 } 00:24:18.519 ], 00:24:18.519 "driver_specific": {} 00:24:18.519 } 00:24:18.519 ] 00:24:18.519 04:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:24:18.519 04:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:18.519 04:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:18.519 04:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:18.777 BaseBdev4 00:24:18.777 04:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:24:18.777 04:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:24:18.777 04:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:18.777 04:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:24:18.777 04:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:18.777 04:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:18.777 04:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:19.035 04:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:19.294 [ 00:24:19.294 { 00:24:19.294 "name": "BaseBdev4", 00:24:19.294 "aliases": [ 
00:24:19.294 "36a341a2-0b11-448b-a36f-6b4a9e13cd23" 00:24:19.294 ], 00:24:19.294 "product_name": "Malloc disk", 00:24:19.294 "block_size": 512, 00:24:19.294 "num_blocks": 65536, 00:24:19.294 "uuid": "36a341a2-0b11-448b-a36f-6b4a9e13cd23", 00:24:19.294 "assigned_rate_limits": { 00:24:19.294 "rw_ios_per_sec": 0, 00:24:19.294 "rw_mbytes_per_sec": 0, 00:24:19.294 "r_mbytes_per_sec": 0, 00:24:19.294 "w_mbytes_per_sec": 0 00:24:19.294 }, 00:24:19.294 "claimed": false, 00:24:19.294 "zoned": false, 00:24:19.294 "supported_io_types": { 00:24:19.294 "read": true, 00:24:19.294 "write": true, 00:24:19.294 "unmap": true, 00:24:19.294 "flush": true, 00:24:19.294 "reset": true, 00:24:19.294 "nvme_admin": false, 00:24:19.294 "nvme_io": false, 00:24:19.294 "nvme_io_md": false, 00:24:19.294 "write_zeroes": true, 00:24:19.294 "zcopy": true, 00:24:19.294 "get_zone_info": false, 00:24:19.294 "zone_management": false, 00:24:19.294 "zone_append": false, 00:24:19.294 "compare": false, 00:24:19.294 "compare_and_write": false, 00:24:19.294 "abort": true, 00:24:19.294 "seek_hole": false, 00:24:19.294 "seek_data": false, 00:24:19.294 "copy": true, 00:24:19.294 "nvme_iov_md": false 00:24:19.294 }, 00:24:19.294 "memory_domains": [ 00:24:19.294 { 00:24:19.294 "dma_device_id": "system", 00:24:19.294 "dma_device_type": 1 00:24:19.294 }, 00:24:19.294 { 00:24:19.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:19.294 "dma_device_type": 2 00:24:19.294 } 00:24:19.294 ], 00:24:19.294 "driver_specific": {} 00:24:19.294 } 00:24:19.294 ] 00:24:19.294 04:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:24:19.294 04:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:19.294 04:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:19.294 04:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:19.552 [2024-07-23 04:19:28.199478] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:19.552 [2024-07-23 04:19:28.199523] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:19.552 [2024-07-23 04:19:28.199555] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:19.552 [2024-07-23 04:19:28.201850] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:19.552 [2024-07-23 04:19:28.201910] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:19.552 04:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:19.552 04:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:19.552 04:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:19.552 04:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:19.552 04:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:19.552 04:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:19.552 04:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:19.552 04:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:19.552 04:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:19.552 04:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 
-- # local tmp 00:24:19.552 04:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:19.552 04:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:19.811 04:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:19.811 "name": "Existed_Raid", 00:24:19.811 "uuid": "4f225dae-7ce0-4696-a228-5f15589ca56f", 00:24:19.811 "strip_size_kb": 64, 00:24:19.811 "state": "configuring", 00:24:19.811 "raid_level": "concat", 00:24:19.811 "superblock": true, 00:24:19.811 "num_base_bdevs": 4, 00:24:19.811 "num_base_bdevs_discovered": 3, 00:24:19.811 "num_base_bdevs_operational": 4, 00:24:19.811 "base_bdevs_list": [ 00:24:19.811 { 00:24:19.811 "name": "BaseBdev1", 00:24:19.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:19.811 "is_configured": false, 00:24:19.811 "data_offset": 0, 00:24:19.811 "data_size": 0 00:24:19.811 }, 00:24:19.811 { 00:24:19.811 "name": "BaseBdev2", 00:24:19.811 "uuid": "7b27a0e0-3f44-4567-bf81-f420e20a142e", 00:24:19.811 "is_configured": true, 00:24:19.811 "data_offset": 2048, 00:24:19.811 "data_size": 63488 00:24:19.811 }, 00:24:19.811 { 00:24:19.811 "name": "BaseBdev3", 00:24:19.811 "uuid": "3c4f317c-0521-41f4-b1c4-308796955fb0", 00:24:19.811 "is_configured": true, 00:24:19.811 "data_offset": 2048, 00:24:19.811 "data_size": 63488 00:24:19.811 }, 00:24:19.811 { 00:24:19.811 "name": "BaseBdev4", 00:24:19.811 "uuid": "36a341a2-0b11-448b-a36f-6b4a9e13cd23", 00:24:19.811 "is_configured": true, 00:24:19.811 "data_offset": 2048, 00:24:19.811 "data_size": 63488 00:24:19.811 } 00:24:19.811 ] 00:24:19.811 }' 00:24:19.811 04:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:19.811 04:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:24:20.377 04:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:24:20.377 [2024-07-23 04:19:29.129916] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:20.377 04:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:20.377 04:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:20.377 04:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:20.377 04:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:20.377 04:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:20.377 04:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:20.377 04:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:20.377 04:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:20.377 04:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:20.377 04:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:20.377 04:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:20.377 04:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:20.635 04:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:20.635 "name": 
"Existed_Raid", 00:24:20.635 "uuid": "4f225dae-7ce0-4696-a228-5f15589ca56f", 00:24:20.636 "strip_size_kb": 64, 00:24:20.636 "state": "configuring", 00:24:20.636 "raid_level": "concat", 00:24:20.636 "superblock": true, 00:24:20.636 "num_base_bdevs": 4, 00:24:20.636 "num_base_bdevs_discovered": 2, 00:24:20.636 "num_base_bdevs_operational": 4, 00:24:20.636 "base_bdevs_list": [ 00:24:20.636 { 00:24:20.636 "name": "BaseBdev1", 00:24:20.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:20.636 "is_configured": false, 00:24:20.636 "data_offset": 0, 00:24:20.636 "data_size": 0 00:24:20.636 }, 00:24:20.636 { 00:24:20.636 "name": null, 00:24:20.636 "uuid": "7b27a0e0-3f44-4567-bf81-f420e20a142e", 00:24:20.636 "is_configured": false, 00:24:20.636 "data_offset": 2048, 00:24:20.636 "data_size": 63488 00:24:20.636 }, 00:24:20.636 { 00:24:20.636 "name": "BaseBdev3", 00:24:20.636 "uuid": "3c4f317c-0521-41f4-b1c4-308796955fb0", 00:24:20.636 "is_configured": true, 00:24:20.636 "data_offset": 2048, 00:24:20.636 "data_size": 63488 00:24:20.636 }, 00:24:20.636 { 00:24:20.636 "name": "BaseBdev4", 00:24:20.636 "uuid": "36a341a2-0b11-448b-a36f-6b4a9e13cd23", 00:24:20.636 "is_configured": true, 00:24:20.636 "data_offset": 2048, 00:24:20.636 "data_size": 63488 00:24:20.636 } 00:24:20.636 ] 00:24:20.636 }' 00:24:20.636 04:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:20.636 04:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:21.570 04:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:21.570 04:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:21.570 04:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:24:21.570 04:19:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:21.828 [2024-07-23 04:19:30.465571] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:21.828 BaseBdev1 00:24:21.828 04:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:24:21.828 04:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:24:21.828 04:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:21.828 04:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:24:21.829 04:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:21.829 04:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:21.829 04:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:22.087 04:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:22.087 [ 00:24:22.087 { 00:24:22.087 "name": "BaseBdev1", 00:24:22.087 "aliases": [ 00:24:22.087 "4a74fa40-24bc-4030-8e13-50c9c2abb814" 00:24:22.087 ], 00:24:22.087 "product_name": "Malloc disk", 00:24:22.087 "block_size": 512, 00:24:22.087 "num_blocks": 65536, 00:24:22.087 "uuid": "4a74fa40-24bc-4030-8e13-50c9c2abb814", 00:24:22.087 "assigned_rate_limits": { 00:24:22.087 "rw_ios_per_sec": 0, 00:24:22.087 "rw_mbytes_per_sec": 0, 00:24:22.087 "r_mbytes_per_sec": 0, 00:24:22.087 "w_mbytes_per_sec": 0 00:24:22.087 }, 
00:24:22.087 "claimed": true, 00:24:22.087 "claim_type": "exclusive_write", 00:24:22.087 "zoned": false, 00:24:22.087 "supported_io_types": { 00:24:22.087 "read": true, 00:24:22.087 "write": true, 00:24:22.087 "unmap": true, 00:24:22.087 "flush": true, 00:24:22.087 "reset": true, 00:24:22.087 "nvme_admin": false, 00:24:22.087 "nvme_io": false, 00:24:22.087 "nvme_io_md": false, 00:24:22.087 "write_zeroes": true, 00:24:22.087 "zcopy": true, 00:24:22.087 "get_zone_info": false, 00:24:22.087 "zone_management": false, 00:24:22.087 "zone_append": false, 00:24:22.087 "compare": false, 00:24:22.087 "compare_and_write": false, 00:24:22.087 "abort": true, 00:24:22.087 "seek_hole": false, 00:24:22.087 "seek_data": false, 00:24:22.087 "copy": true, 00:24:22.087 "nvme_iov_md": false 00:24:22.087 }, 00:24:22.087 "memory_domains": [ 00:24:22.087 { 00:24:22.087 "dma_device_id": "system", 00:24:22.087 "dma_device_type": 1 00:24:22.087 }, 00:24:22.087 { 00:24:22.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:22.087 "dma_device_type": 2 00:24:22.087 } 00:24:22.087 ], 00:24:22.087 "driver_specific": {} 00:24:22.087 } 00:24:22.087 ] 00:24:22.087 04:19:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:24:22.087 04:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:22.087 04:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:22.087 04:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:22.087 04:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:22.087 04:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:22.087 04:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 
00:24:22.087 04:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:22.087 04:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:22.087 04:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:22.087 04:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:22.087 04:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:22.087 04:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:22.380 04:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:22.380 "name": "Existed_Raid", 00:24:22.380 "uuid": "4f225dae-7ce0-4696-a228-5f15589ca56f", 00:24:22.380 "strip_size_kb": 64, 00:24:22.380 "state": "configuring", 00:24:22.380 "raid_level": "concat", 00:24:22.380 "superblock": true, 00:24:22.380 "num_base_bdevs": 4, 00:24:22.380 "num_base_bdevs_discovered": 3, 00:24:22.380 "num_base_bdevs_operational": 4, 00:24:22.380 "base_bdevs_list": [ 00:24:22.380 { 00:24:22.380 "name": "BaseBdev1", 00:24:22.380 "uuid": "4a74fa40-24bc-4030-8e13-50c9c2abb814", 00:24:22.380 "is_configured": true, 00:24:22.380 "data_offset": 2048, 00:24:22.380 "data_size": 63488 00:24:22.380 }, 00:24:22.380 { 00:24:22.380 "name": null, 00:24:22.380 "uuid": "7b27a0e0-3f44-4567-bf81-f420e20a142e", 00:24:22.380 "is_configured": false, 00:24:22.380 "data_offset": 2048, 00:24:22.380 "data_size": 63488 00:24:22.380 }, 00:24:22.380 { 00:24:22.380 "name": "BaseBdev3", 00:24:22.380 "uuid": "3c4f317c-0521-41f4-b1c4-308796955fb0", 00:24:22.380 "is_configured": true, 00:24:22.380 "data_offset": 2048, 00:24:22.380 "data_size": 63488 00:24:22.380 }, 00:24:22.380 { 00:24:22.380 
"name": "BaseBdev4", 00:24:22.380 "uuid": "36a341a2-0b11-448b-a36f-6b4a9e13cd23", 00:24:22.380 "is_configured": true, 00:24:22.380 "data_offset": 2048, 00:24:22.380 "data_size": 63488 00:24:22.380 } 00:24:22.380 ] 00:24:22.380 }' 00:24:22.380 04:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:22.380 04:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:23.315 04:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:23.315 04:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:23.574 04:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:24:23.574 04:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:24:23.574 [2024-07-23 04:19:32.298624] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:23.574 04:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:23.574 04:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:23.574 04:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:23.574 04:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:23.574 04:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:23.574 04:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:23.574 04:19:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:23.574 04:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:23.574 04:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:23.574 04:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:23.574 04:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:23.574 04:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:23.833 04:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:23.833 "name": "Existed_Raid", 00:24:23.833 "uuid": "4f225dae-7ce0-4696-a228-5f15589ca56f", 00:24:23.833 "strip_size_kb": 64, 00:24:23.833 "state": "configuring", 00:24:23.833 "raid_level": "concat", 00:24:23.833 "superblock": true, 00:24:23.833 "num_base_bdevs": 4, 00:24:23.833 "num_base_bdevs_discovered": 2, 00:24:23.833 "num_base_bdevs_operational": 4, 00:24:23.833 "base_bdevs_list": [ 00:24:23.833 { 00:24:23.833 "name": "BaseBdev1", 00:24:23.833 "uuid": "4a74fa40-24bc-4030-8e13-50c9c2abb814", 00:24:23.833 "is_configured": true, 00:24:23.833 "data_offset": 2048, 00:24:23.833 "data_size": 63488 00:24:23.833 }, 00:24:23.833 { 00:24:23.833 "name": null, 00:24:23.833 "uuid": "7b27a0e0-3f44-4567-bf81-f420e20a142e", 00:24:23.833 "is_configured": false, 00:24:23.833 "data_offset": 2048, 00:24:23.833 "data_size": 63488 00:24:23.833 }, 00:24:23.833 { 00:24:23.833 "name": null, 00:24:23.833 "uuid": "3c4f317c-0521-41f4-b1c4-308796955fb0", 00:24:23.833 "is_configured": false, 00:24:23.833 "data_offset": 2048, 00:24:23.833 "data_size": 63488 00:24:23.833 }, 00:24:23.833 { 00:24:23.833 "name": "BaseBdev4", 
00:24:23.833 "uuid": "36a341a2-0b11-448b-a36f-6b4a9e13cd23", 00:24:23.833 "is_configured": true, 00:24:23.833 "data_offset": 2048, 00:24:23.833 "data_size": 63488 00:24:23.833 } 00:24:23.833 ] 00:24:23.833 }' 00:24:23.833 04:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:23.833 04:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:24.401 04:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:24.401 04:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:24.969 04:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:24:24.969 04:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:24:24.969 [2024-07-23 04:19:33.738514] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:24.969 04:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:24.969 04:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:24.969 04:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:24.969 04:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:24.969 04:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:24.969 04:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:24.969 04:19:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:24.969 04:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:25.229 04:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:25.229 04:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:25.229 04:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:25.229 04:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:25.229 04:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:25.229 "name": "Existed_Raid", 00:24:25.229 "uuid": "4f225dae-7ce0-4696-a228-5f15589ca56f", 00:24:25.229 "strip_size_kb": 64, 00:24:25.229 "state": "configuring", 00:24:25.229 "raid_level": "concat", 00:24:25.229 "superblock": true, 00:24:25.229 "num_base_bdevs": 4, 00:24:25.229 "num_base_bdevs_discovered": 3, 00:24:25.229 "num_base_bdevs_operational": 4, 00:24:25.229 "base_bdevs_list": [ 00:24:25.229 { 00:24:25.229 "name": "BaseBdev1", 00:24:25.229 "uuid": "4a74fa40-24bc-4030-8e13-50c9c2abb814", 00:24:25.229 "is_configured": true, 00:24:25.229 "data_offset": 2048, 00:24:25.229 "data_size": 63488 00:24:25.229 }, 00:24:25.229 { 00:24:25.229 "name": null, 00:24:25.229 "uuid": "7b27a0e0-3f44-4567-bf81-f420e20a142e", 00:24:25.229 "is_configured": false, 00:24:25.229 "data_offset": 2048, 00:24:25.229 "data_size": 63488 00:24:25.229 }, 00:24:25.229 { 00:24:25.229 "name": "BaseBdev3", 00:24:25.229 "uuid": "3c4f317c-0521-41f4-b1c4-308796955fb0", 00:24:25.229 "is_configured": true, 00:24:25.229 "data_offset": 2048, 00:24:25.229 "data_size": 63488 00:24:25.229 }, 00:24:25.229 { 00:24:25.229 "name": "BaseBdev4", 
00:24:25.229 "uuid": "36a341a2-0b11-448b-a36f-6b4a9e13cd23", 00:24:25.229 "is_configured": true, 00:24:25.229 "data_offset": 2048, 00:24:25.229 "data_size": 63488 00:24:25.229 } 00:24:25.229 ] 00:24:25.229 }' 00:24:25.229 04:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:25.229 04:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.797 04:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:25.797 04:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:26.057 04:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:24:26.057 04:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:26.316 [2024-07-23 04:19:34.909748] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:26.316 04:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:26.316 04:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:26.316 04:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:26.316 04:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:26.316 04:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:26.316 04:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:26.316 04:19:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:26.316 04:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:26.316 04:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:26.316 04:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:26.316 04:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:26.316 04:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:26.575 04:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:26.575 "name": "Existed_Raid", 00:24:26.575 "uuid": "4f225dae-7ce0-4696-a228-5f15589ca56f", 00:24:26.575 "strip_size_kb": 64, 00:24:26.575 "state": "configuring", 00:24:26.575 "raid_level": "concat", 00:24:26.575 "superblock": true, 00:24:26.575 "num_base_bdevs": 4, 00:24:26.575 "num_base_bdevs_discovered": 2, 00:24:26.575 "num_base_bdevs_operational": 4, 00:24:26.575 "base_bdevs_list": [ 00:24:26.575 { 00:24:26.575 "name": null, 00:24:26.575 "uuid": "4a74fa40-24bc-4030-8e13-50c9c2abb814", 00:24:26.575 "is_configured": false, 00:24:26.575 "data_offset": 2048, 00:24:26.576 "data_size": 63488 00:24:26.576 }, 00:24:26.576 { 00:24:26.576 "name": null, 00:24:26.576 "uuid": "7b27a0e0-3f44-4567-bf81-f420e20a142e", 00:24:26.576 "is_configured": false, 00:24:26.576 "data_offset": 2048, 00:24:26.576 "data_size": 63488 00:24:26.576 }, 00:24:26.576 { 00:24:26.576 "name": "BaseBdev3", 00:24:26.576 "uuid": "3c4f317c-0521-41f4-b1c4-308796955fb0", 00:24:26.576 "is_configured": true, 00:24:26.576 "data_offset": 2048, 00:24:26.576 "data_size": 63488 00:24:26.576 }, 00:24:26.576 { 00:24:26.576 "name": "BaseBdev4", 00:24:26.576 "uuid": 
"36a341a2-0b11-448b-a36f-6b4a9e13cd23", 00:24:26.576 "is_configured": true, 00:24:26.576 "data_offset": 2048, 00:24:26.576 "data_size": 63488 00:24:26.576 } 00:24:26.576 ] 00:24:26.576 }' 00:24:26.576 04:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:26.576 04:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:27.143 04:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:27.143 04:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:27.402 04:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:24:27.402 04:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:24:27.662 [2024-07-23 04:19:36.251264] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:27.662 04:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:27.662 04:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:27.662 04:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:27.662 04:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:27.662 04:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:27.662 04:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:27.662 04:19:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:27.662 04:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:27.662 04:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:27.662 04:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:27.662 04:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:27.662 04:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:27.921 04:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:27.921 "name": "Existed_Raid", 00:24:27.921 "uuid": "4f225dae-7ce0-4696-a228-5f15589ca56f", 00:24:27.921 "strip_size_kb": 64, 00:24:27.921 "state": "configuring", 00:24:27.921 "raid_level": "concat", 00:24:27.921 "superblock": true, 00:24:27.921 "num_base_bdevs": 4, 00:24:27.921 "num_base_bdevs_discovered": 3, 00:24:27.921 "num_base_bdevs_operational": 4, 00:24:27.921 "base_bdevs_list": [ 00:24:27.921 { 00:24:27.921 "name": null, 00:24:27.921 "uuid": "4a74fa40-24bc-4030-8e13-50c9c2abb814", 00:24:27.921 "is_configured": false, 00:24:27.921 "data_offset": 2048, 00:24:27.921 "data_size": 63488 00:24:27.921 }, 00:24:27.921 { 00:24:27.921 "name": "BaseBdev2", 00:24:27.921 "uuid": "7b27a0e0-3f44-4567-bf81-f420e20a142e", 00:24:27.921 "is_configured": true, 00:24:27.921 "data_offset": 2048, 00:24:27.921 "data_size": 63488 00:24:27.921 }, 00:24:27.921 { 00:24:27.921 "name": "BaseBdev3", 00:24:27.921 "uuid": "3c4f317c-0521-41f4-b1c4-308796955fb0", 00:24:27.921 "is_configured": true, 00:24:27.921 "data_offset": 2048, 00:24:27.921 "data_size": 63488 00:24:27.921 }, 00:24:27.921 { 00:24:27.921 "name": "BaseBdev4", 
00:24:27.921 "uuid": "36a341a2-0b11-448b-a36f-6b4a9e13cd23", 00:24:27.921 "is_configured": true, 00:24:27.921 "data_offset": 2048, 00:24:27.921 "data_size": 63488 00:24:27.921 } 00:24:27.921 ] 00:24:27.921 }' 00:24:27.921 04:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:27.921 04:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:28.490 04:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:28.490 04:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:28.749 04:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:24:28.749 04:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:28.749 04:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:24:28.749 04:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 4a74fa40-24bc-4030-8e13-50c9c2abb814 00:24:29.008 [2024-07-23 04:19:37.757576] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:24:29.008 [2024-07-23 04:19:37.757837] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042080 00:24:29.008 [2024-07-23 04:19:37.757856] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:29.008 [2024-07-23 04:19:37.758255] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010b20 00:24:29.008 [2024-07-23 
04:19:37.758462] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042080 00:24:29.008 [2024-07-23 04:19:37.758480] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000042080 00:24:29.008 [2024-07-23 04:19:37.758655] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:29.008 NewBaseBdev 00:24:29.008 04:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:24:29.008 04:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:24:29.008 04:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:29.008 04:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:24:29.008 04:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:29.008 04:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:29.008 04:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:29.267 04:19:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:24:29.529 [ 00:24:29.529 { 00:24:29.529 "name": "NewBaseBdev", 00:24:29.529 "aliases": [ 00:24:29.529 "4a74fa40-24bc-4030-8e13-50c9c2abb814" 00:24:29.529 ], 00:24:29.529 "product_name": "Malloc disk", 00:24:29.529 "block_size": 512, 00:24:29.529 "num_blocks": 65536, 00:24:29.529 "uuid": "4a74fa40-24bc-4030-8e13-50c9c2abb814", 00:24:29.529 "assigned_rate_limits": { 00:24:29.529 "rw_ios_per_sec": 0, 00:24:29.529 "rw_mbytes_per_sec": 0, 00:24:29.529 
"r_mbytes_per_sec": 0, 00:24:29.529 "w_mbytes_per_sec": 0 00:24:29.529 }, 00:24:29.529 "claimed": true, 00:24:29.529 "claim_type": "exclusive_write", 00:24:29.529 "zoned": false, 00:24:29.529 "supported_io_types": { 00:24:29.529 "read": true, 00:24:29.530 "write": true, 00:24:29.530 "unmap": true, 00:24:29.530 "flush": true, 00:24:29.530 "reset": true, 00:24:29.530 "nvme_admin": false, 00:24:29.530 "nvme_io": false, 00:24:29.530 "nvme_io_md": false, 00:24:29.530 "write_zeroes": true, 00:24:29.530 "zcopy": true, 00:24:29.530 "get_zone_info": false, 00:24:29.530 "zone_management": false, 00:24:29.530 "zone_append": false, 00:24:29.530 "compare": false, 00:24:29.530 "compare_and_write": false, 00:24:29.530 "abort": true, 00:24:29.530 "seek_hole": false, 00:24:29.530 "seek_data": false, 00:24:29.530 "copy": true, 00:24:29.530 "nvme_iov_md": false 00:24:29.530 }, 00:24:29.530 "memory_domains": [ 00:24:29.530 { 00:24:29.530 "dma_device_id": "system", 00:24:29.530 "dma_device_type": 1 00:24:29.530 }, 00:24:29.530 { 00:24:29.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:29.530 "dma_device_type": 2 00:24:29.530 } 00:24:29.530 ], 00:24:29.530 "driver_specific": {} 00:24:29.530 } 00:24:29.530 ] 00:24:29.530 04:19:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:24:29.530 04:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:24:29.530 04:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:29.530 04:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:29.530 04:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:29.530 04:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:29.530 04:19:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:29.530 04:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:29.530 04:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:29.530 04:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:29.530 04:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:29.530 04:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:29.530 04:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:29.790 04:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:29.790 "name": "Existed_Raid", 00:24:29.790 "uuid": "4f225dae-7ce0-4696-a228-5f15589ca56f", 00:24:29.790 "strip_size_kb": 64, 00:24:29.790 "state": "online", 00:24:29.790 "raid_level": "concat", 00:24:29.790 "superblock": true, 00:24:29.790 "num_base_bdevs": 4, 00:24:29.790 "num_base_bdevs_discovered": 4, 00:24:29.790 "num_base_bdevs_operational": 4, 00:24:29.790 "base_bdevs_list": [ 00:24:29.790 { 00:24:29.790 "name": "NewBaseBdev", 00:24:29.790 "uuid": "4a74fa40-24bc-4030-8e13-50c9c2abb814", 00:24:29.790 "is_configured": true, 00:24:29.790 "data_offset": 2048, 00:24:29.790 "data_size": 63488 00:24:29.790 }, 00:24:29.790 { 00:24:29.790 "name": "BaseBdev2", 00:24:29.790 "uuid": "7b27a0e0-3f44-4567-bf81-f420e20a142e", 00:24:29.790 "is_configured": true, 00:24:29.790 "data_offset": 2048, 00:24:29.790 "data_size": 63488 00:24:29.790 }, 00:24:29.790 { 00:24:29.790 "name": "BaseBdev3", 00:24:29.790 "uuid": "3c4f317c-0521-41f4-b1c4-308796955fb0", 00:24:29.790 "is_configured": true, 00:24:29.790 "data_offset": 2048, 00:24:29.790 
"data_size": 63488 00:24:29.790 }, 00:24:29.790 { 00:24:29.790 "name": "BaseBdev4", 00:24:29.790 "uuid": "36a341a2-0b11-448b-a36f-6b4a9e13cd23", 00:24:29.790 "is_configured": true, 00:24:29.790 "data_offset": 2048, 00:24:29.790 "data_size": 63488 00:24:29.790 } 00:24:29.790 ] 00:24:29.790 }' 00:24:29.790 04:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:29.790 04:19:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:30.358 04:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:24:30.358 04:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:24:30.358 04:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:30.358 04:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:30.358 04:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:30.358 04:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:24:30.358 04:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:24:30.358 04:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:30.618 [2024-07-23 04:19:39.254122] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:30.618 04:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:30.618 "name": "Existed_Raid", 00:24:30.618 "aliases": [ 00:24:30.618 "4f225dae-7ce0-4696-a228-5f15589ca56f" 00:24:30.618 ], 00:24:30.618 "product_name": "Raid Volume", 00:24:30.618 "block_size": 512, 00:24:30.618 "num_blocks": 253952, 00:24:30.618 "uuid": 
"4f225dae-7ce0-4696-a228-5f15589ca56f", 00:24:30.618 "assigned_rate_limits": { 00:24:30.618 "rw_ios_per_sec": 0, 00:24:30.618 "rw_mbytes_per_sec": 0, 00:24:30.618 "r_mbytes_per_sec": 0, 00:24:30.618 "w_mbytes_per_sec": 0 00:24:30.618 }, 00:24:30.618 "claimed": false, 00:24:30.618 "zoned": false, 00:24:30.618 "supported_io_types": { 00:24:30.618 "read": true, 00:24:30.618 "write": true, 00:24:30.618 "unmap": true, 00:24:30.618 "flush": true, 00:24:30.618 "reset": true, 00:24:30.618 "nvme_admin": false, 00:24:30.618 "nvme_io": false, 00:24:30.618 "nvme_io_md": false, 00:24:30.618 "write_zeroes": true, 00:24:30.618 "zcopy": false, 00:24:30.618 "get_zone_info": false, 00:24:30.618 "zone_management": false, 00:24:30.618 "zone_append": false, 00:24:30.618 "compare": false, 00:24:30.618 "compare_and_write": false, 00:24:30.618 "abort": false, 00:24:30.618 "seek_hole": false, 00:24:30.618 "seek_data": false, 00:24:30.618 "copy": false, 00:24:30.618 "nvme_iov_md": false 00:24:30.618 }, 00:24:30.618 "memory_domains": [ 00:24:30.618 { 00:24:30.618 "dma_device_id": "system", 00:24:30.618 "dma_device_type": 1 00:24:30.618 }, 00:24:30.618 { 00:24:30.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:30.618 "dma_device_type": 2 00:24:30.618 }, 00:24:30.618 { 00:24:30.618 "dma_device_id": "system", 00:24:30.618 "dma_device_type": 1 00:24:30.618 }, 00:24:30.618 { 00:24:30.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:30.618 "dma_device_type": 2 00:24:30.618 }, 00:24:30.618 { 00:24:30.618 "dma_device_id": "system", 00:24:30.618 "dma_device_type": 1 00:24:30.618 }, 00:24:30.618 { 00:24:30.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:30.618 "dma_device_type": 2 00:24:30.618 }, 00:24:30.618 { 00:24:30.618 "dma_device_id": "system", 00:24:30.618 "dma_device_type": 1 00:24:30.618 }, 00:24:30.618 { 00:24:30.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:30.618 "dma_device_type": 2 00:24:30.618 } 00:24:30.618 ], 00:24:30.618 "driver_specific": { 00:24:30.618 "raid": { 
00:24:30.618 "uuid": "4f225dae-7ce0-4696-a228-5f15589ca56f", 00:24:30.618 "strip_size_kb": 64, 00:24:30.618 "state": "online", 00:24:30.618 "raid_level": "concat", 00:24:30.618 "superblock": true, 00:24:30.618 "num_base_bdevs": 4, 00:24:30.618 "num_base_bdevs_discovered": 4, 00:24:30.618 "num_base_bdevs_operational": 4, 00:24:30.618 "base_bdevs_list": [ 00:24:30.618 { 00:24:30.618 "name": "NewBaseBdev", 00:24:30.618 "uuid": "4a74fa40-24bc-4030-8e13-50c9c2abb814", 00:24:30.618 "is_configured": true, 00:24:30.618 "data_offset": 2048, 00:24:30.618 "data_size": 63488 00:24:30.618 }, 00:24:30.618 { 00:24:30.618 "name": "BaseBdev2", 00:24:30.618 "uuid": "7b27a0e0-3f44-4567-bf81-f420e20a142e", 00:24:30.618 "is_configured": true, 00:24:30.618 "data_offset": 2048, 00:24:30.618 "data_size": 63488 00:24:30.618 }, 00:24:30.618 { 00:24:30.618 "name": "BaseBdev3", 00:24:30.618 "uuid": "3c4f317c-0521-41f4-b1c4-308796955fb0", 00:24:30.618 "is_configured": true, 00:24:30.618 "data_offset": 2048, 00:24:30.618 "data_size": 63488 00:24:30.618 }, 00:24:30.618 { 00:24:30.618 "name": "BaseBdev4", 00:24:30.618 "uuid": "36a341a2-0b11-448b-a36f-6b4a9e13cd23", 00:24:30.618 "is_configured": true, 00:24:30.618 "data_offset": 2048, 00:24:30.618 "data_size": 63488 00:24:30.618 } 00:24:30.618 ] 00:24:30.618 } 00:24:30.618 } 00:24:30.618 }' 00:24:30.618 04:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:30.618 04:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:24:30.618 BaseBdev2 00:24:30.618 BaseBdev3 00:24:30.618 BaseBdev4' 00:24:30.618 04:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:30.618 04:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b NewBaseBdev 00:24:30.618 04:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:30.878 04:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:30.878 "name": "NewBaseBdev", 00:24:30.878 "aliases": [ 00:24:30.878 "4a74fa40-24bc-4030-8e13-50c9c2abb814" 00:24:30.878 ], 00:24:30.878 "product_name": "Malloc disk", 00:24:30.878 "block_size": 512, 00:24:30.878 "num_blocks": 65536, 00:24:30.878 "uuid": "4a74fa40-24bc-4030-8e13-50c9c2abb814", 00:24:30.878 "assigned_rate_limits": { 00:24:30.878 "rw_ios_per_sec": 0, 00:24:30.878 "rw_mbytes_per_sec": 0, 00:24:30.878 "r_mbytes_per_sec": 0, 00:24:30.878 "w_mbytes_per_sec": 0 00:24:30.878 }, 00:24:30.878 "claimed": true, 00:24:30.878 "claim_type": "exclusive_write", 00:24:30.878 "zoned": false, 00:24:30.878 "supported_io_types": { 00:24:30.878 "read": true, 00:24:30.878 "write": true, 00:24:30.878 "unmap": true, 00:24:30.878 "flush": true, 00:24:30.878 "reset": true, 00:24:30.878 "nvme_admin": false, 00:24:30.878 "nvme_io": false, 00:24:30.878 "nvme_io_md": false, 00:24:30.878 "write_zeroes": true, 00:24:30.878 "zcopy": true, 00:24:30.878 "get_zone_info": false, 00:24:30.878 "zone_management": false, 00:24:30.878 "zone_append": false, 00:24:30.878 "compare": false, 00:24:30.878 "compare_and_write": false, 00:24:30.878 "abort": true, 00:24:30.878 "seek_hole": false, 00:24:30.878 "seek_data": false, 00:24:30.878 "copy": true, 00:24:30.878 "nvme_iov_md": false 00:24:30.878 }, 00:24:30.878 "memory_domains": [ 00:24:30.878 { 00:24:30.878 "dma_device_id": "system", 00:24:30.878 "dma_device_type": 1 00:24:30.878 }, 00:24:30.878 { 00:24:30.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:30.878 "dma_device_type": 2 00:24:30.878 } 00:24:30.878 ], 00:24:30.878 "driver_specific": {} 00:24:30.878 }' 00:24:30.878 04:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:30.878 04:19:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:30.878 04:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:30.878 04:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:31.137 04:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:31.137 04:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:31.137 04:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:31.137 04:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:31.137 04:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:31.137 04:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:31.137 04:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:31.137 04:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:31.137 04:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:31.137 04:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:24:31.137 04:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:31.396 04:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:31.396 "name": "BaseBdev2", 00:24:31.396 "aliases": [ 00:24:31.396 "7b27a0e0-3f44-4567-bf81-f420e20a142e" 00:24:31.396 ], 00:24:31.396 "product_name": "Malloc disk", 00:24:31.396 "block_size": 512, 00:24:31.396 "num_blocks": 65536, 00:24:31.396 "uuid": "7b27a0e0-3f44-4567-bf81-f420e20a142e", 00:24:31.396 
"assigned_rate_limits": { 00:24:31.396 "rw_ios_per_sec": 0, 00:24:31.396 "rw_mbytes_per_sec": 0, 00:24:31.396 "r_mbytes_per_sec": 0, 00:24:31.396 "w_mbytes_per_sec": 0 00:24:31.396 }, 00:24:31.396 "claimed": true, 00:24:31.396 "claim_type": "exclusive_write", 00:24:31.396 "zoned": false, 00:24:31.396 "supported_io_types": { 00:24:31.396 "read": true, 00:24:31.396 "write": true, 00:24:31.396 "unmap": true, 00:24:31.396 "flush": true, 00:24:31.396 "reset": true, 00:24:31.396 "nvme_admin": false, 00:24:31.396 "nvme_io": false, 00:24:31.396 "nvme_io_md": false, 00:24:31.396 "write_zeroes": true, 00:24:31.396 "zcopy": true, 00:24:31.396 "get_zone_info": false, 00:24:31.396 "zone_management": false, 00:24:31.396 "zone_append": false, 00:24:31.396 "compare": false, 00:24:31.396 "compare_and_write": false, 00:24:31.396 "abort": true, 00:24:31.396 "seek_hole": false, 00:24:31.396 "seek_data": false, 00:24:31.396 "copy": true, 00:24:31.396 "nvme_iov_md": false 00:24:31.396 }, 00:24:31.396 "memory_domains": [ 00:24:31.396 { 00:24:31.396 "dma_device_id": "system", 00:24:31.396 "dma_device_type": 1 00:24:31.396 }, 00:24:31.396 { 00:24:31.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:31.396 "dma_device_type": 2 00:24:31.396 } 00:24:31.396 ], 00:24:31.396 "driver_specific": {} 00:24:31.396 }' 00:24:31.396 04:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:31.396 04:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:31.655 04:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:31.655 04:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:31.655 04:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:31.655 04:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:31.655 04:19:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:31.655 04:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:31.655 04:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:31.655 04:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:31.655 04:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:31.914 04:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:31.914 04:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:31.914 04:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:24:31.914 04:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:31.914 04:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:31.914 "name": "BaseBdev3", 00:24:31.914 "aliases": [ 00:24:31.914 "3c4f317c-0521-41f4-b1c4-308796955fb0" 00:24:31.914 ], 00:24:31.914 "product_name": "Malloc disk", 00:24:31.914 "block_size": 512, 00:24:31.914 "num_blocks": 65536, 00:24:31.914 "uuid": "3c4f317c-0521-41f4-b1c4-308796955fb0", 00:24:31.914 "assigned_rate_limits": { 00:24:31.914 "rw_ios_per_sec": 0, 00:24:31.914 "rw_mbytes_per_sec": 0, 00:24:31.914 "r_mbytes_per_sec": 0, 00:24:31.914 "w_mbytes_per_sec": 0 00:24:31.914 }, 00:24:31.914 "claimed": true, 00:24:31.914 "claim_type": "exclusive_write", 00:24:31.914 "zoned": false, 00:24:31.914 "supported_io_types": { 00:24:31.914 "read": true, 00:24:31.914 "write": true, 00:24:31.914 "unmap": true, 00:24:31.914 "flush": true, 00:24:31.914 "reset": true, 00:24:31.914 "nvme_admin": false, 00:24:31.914 "nvme_io": false, 00:24:31.914 "nvme_io_md": false, 00:24:31.914 
"write_zeroes": true, 00:24:31.914 "zcopy": true, 00:24:31.914 "get_zone_info": false, 00:24:31.914 "zone_management": false, 00:24:31.914 "zone_append": false, 00:24:31.914 "compare": false, 00:24:31.914 "compare_and_write": false, 00:24:31.914 "abort": true, 00:24:31.914 "seek_hole": false, 00:24:31.914 "seek_data": false, 00:24:31.914 "copy": true, 00:24:31.914 "nvme_iov_md": false 00:24:31.914 }, 00:24:31.914 "memory_domains": [ 00:24:31.914 { 00:24:31.914 "dma_device_id": "system", 00:24:31.914 "dma_device_type": 1 00:24:31.914 }, 00:24:31.914 { 00:24:31.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:31.914 "dma_device_type": 2 00:24:31.914 } 00:24:31.914 ], 00:24:31.914 "driver_specific": {} 00:24:31.914 }' 00:24:31.915 04:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:32.174 04:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:32.174 04:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:32.174 04:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:32.174 04:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:32.174 04:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:32.174 04:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:32.174 04:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:32.174 04:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:32.174 04:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:32.433 04:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:32.433 04:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
00:24:32.433 04:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:32.433 04:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:24:32.433 04:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:32.693 04:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:32.693 "name": "BaseBdev4", 00:24:32.693 "aliases": [ 00:24:32.693 "36a341a2-0b11-448b-a36f-6b4a9e13cd23" 00:24:32.693 ], 00:24:32.693 "product_name": "Malloc disk", 00:24:32.693 "block_size": 512, 00:24:32.693 "num_blocks": 65536, 00:24:32.693 "uuid": "36a341a2-0b11-448b-a36f-6b4a9e13cd23", 00:24:32.693 "assigned_rate_limits": { 00:24:32.693 "rw_ios_per_sec": 0, 00:24:32.693 "rw_mbytes_per_sec": 0, 00:24:32.693 "r_mbytes_per_sec": 0, 00:24:32.693 "w_mbytes_per_sec": 0 00:24:32.693 }, 00:24:32.693 "claimed": true, 00:24:32.693 "claim_type": "exclusive_write", 00:24:32.693 "zoned": false, 00:24:32.693 "supported_io_types": { 00:24:32.693 "read": true, 00:24:32.693 "write": true, 00:24:32.693 "unmap": true, 00:24:32.693 "flush": true, 00:24:32.693 "reset": true, 00:24:32.693 "nvme_admin": false, 00:24:32.693 "nvme_io": false, 00:24:32.693 "nvme_io_md": false, 00:24:32.693 "write_zeroes": true, 00:24:32.693 "zcopy": true, 00:24:32.693 "get_zone_info": false, 00:24:32.693 "zone_management": false, 00:24:32.693 "zone_append": false, 00:24:32.693 "compare": false, 00:24:32.693 "compare_and_write": false, 00:24:32.693 "abort": true, 00:24:32.693 "seek_hole": false, 00:24:32.693 "seek_data": false, 00:24:32.693 "copy": true, 00:24:32.693 "nvme_iov_md": false 00:24:32.693 }, 00:24:32.693 "memory_domains": [ 00:24:32.693 { 00:24:32.693 "dma_device_id": "system", 00:24:32.693 "dma_device_type": 1 00:24:32.693 }, 00:24:32.693 { 00:24:32.693 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:32.693 "dma_device_type": 2 00:24:32.693 } 00:24:32.693 ], 00:24:32.693 "driver_specific": {} 00:24:32.693 }' 00:24:32.693 04:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:32.694 04:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:32.694 04:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:32.694 04:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:32.694 04:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:32.694 04:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:32.694 04:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:32.953 04:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:32.953 04:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:32.953 04:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:32.953 04:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:32.953 04:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:32.953 04:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:33.213 [2024-07-23 04:19:41.816679] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:33.213 [2024-07-23 04:19:41.816715] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:33.213 [2024-07-23 04:19:41.816798] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:24:33.213 [2024-07-23 04:19:41.816879] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:33.213 [2024-07-23 04:19:41.816900] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042080 name Existed_Raid, state offline 00:24:33.213 04:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 2725271 00:24:33.213 04:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 2725271 ']' 00:24:33.213 04:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 2725271 00:24:33.213 04:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:24:33.213 04:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:33.213 04:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2725271 00:24:33.213 04:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:33.213 04:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:33.213 04:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2725271' 00:24:33.213 killing process with pid 2725271 00:24:33.213 04:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 2725271 00:24:33.213 [2024-07-23 04:19:41.889244] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:33.213 04:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 2725271 00:24:33.781 [2024-07-23 04:19:42.333533] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:35.722 04:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 
00:24:35.722 00:24:35.722 real 0m33.184s 00:24:35.722 user 0m57.896s 00:24:35.722 sys 0m5.708s 00:24:35.722 04:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:35.722 04:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:35.722 ************************************ 00:24:35.722 END TEST raid_state_function_test_sb 00:24:35.722 ************************************ 00:24:35.722 04:19:44 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:24:35.722 04:19:44 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:24:35.722 04:19:44 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:24:35.722 04:19:44 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:35.722 04:19:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:35.722 ************************************ 00:24:35.722 START TEST raid_superblock_test 00:24:35.722 ************************************ 00:24:35.722 04:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 4 00:24:35.722 04:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:24:35.722 04:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:24:35.722 04:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:24:35.722 04:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:24:35.722 04:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:24:35.722 04:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:24:35.722 04:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:24:35.722 04:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # 
local base_bdevs_pt_uuid 00:24:35.722 04:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:24:35.722 04:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:24:35.722 04:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:24:35.722 04:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:24:35.722 04:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:24:35.722 04:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:24:35.722 04:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:24:35.722 04:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:24:35.722 04:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=2731534 00:24:35.722 04:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:24:35.722 04:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 2731534 /var/tmp/spdk-raid.sock 00:24:35.722 04:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 2731534 ']' 00:24:35.722 04:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:35.722 04:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:35.722 04:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:35.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:24:35.722 04:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:35.722 04:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:35.722 [2024-07-23 04:19:44.285360] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:24:35.722 [2024-07-23 04:19:44.285474] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2731534 ] 00:24:35.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.722 EAL: Requested device 0000:3d:01.0 cannot be used 00:24:35.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.722 EAL: Requested device 0000:3d:01.1 cannot be used 00:24:35.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.722 EAL: Requested device 0000:3d:01.2 cannot be used 00:24:35.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.722 EAL: Requested device 0000:3d:01.3 cannot be used 00:24:35.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.722 EAL: Requested device 0000:3d:01.4 cannot be used 00:24:35.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.722 EAL: Requested device 0000:3d:01.5 cannot be used 00:24:35.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.722 EAL: Requested device 0000:3d:01.6 cannot be used 00:24:35.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.722 EAL: Requested device 0000:3d:01.7 cannot be used 00:24:35.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.722 EAL: Requested device 0000:3d:02.0 cannot be used 00:24:35.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.722 EAL: Requested 
device 0000:3d:02.1 cannot be used 00:24:35.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.722 EAL: Requested device 0000:3d:02.2 cannot be used 00:24:35.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.722 EAL: Requested device 0000:3d:02.3 cannot be used 00:24:35.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.722 EAL: Requested device 0000:3d:02.4 cannot be used 00:24:35.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.722 EAL: Requested device 0000:3d:02.5 cannot be used 00:24:35.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.722 EAL: Requested device 0000:3d:02.6 cannot be used 00:24:35.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.722 EAL: Requested device 0000:3d:02.7 cannot be used 00:24:35.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.722 EAL: Requested device 0000:3f:01.0 cannot be used 00:24:35.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.722 EAL: Requested device 0000:3f:01.1 cannot be used 00:24:35.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.722 EAL: Requested device 0000:3f:01.2 cannot be used 00:24:35.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.722 EAL: Requested device 0000:3f:01.3 cannot be used 00:24:35.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.722 EAL: Requested device 0000:3f:01.4 cannot be used 00:24:35.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.722 EAL: Requested device 0000:3f:01.5 cannot be used 00:24:35.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.722 EAL: Requested device 0000:3f:01.6 cannot be used 00:24:35.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.722 EAL: Requested device 0000:3f:01.7 
cannot be used 00:24:35.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.722 EAL: Requested device 0000:3f:02.0 cannot be used 00:24:35.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.722 EAL: Requested device 0000:3f:02.1 cannot be used 00:24:35.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.722 EAL: Requested device 0000:3f:02.2 cannot be used 00:24:35.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.722 EAL: Requested device 0000:3f:02.3 cannot be used 00:24:35.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.722 EAL: Requested device 0000:3f:02.4 cannot be used 00:24:35.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.722 EAL: Requested device 0000:3f:02.5 cannot be used 00:24:35.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.722 EAL: Requested device 0000:3f:02.6 cannot be used 00:24:35.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:35.722 EAL: Requested device 0000:3f:02.7 cannot be used 00:24:35.981 [2024-07-23 04:19:44.510414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.239 [2024-07-23 04:19:44.775183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:36.498 [2024-07-23 04:19:45.115456] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:36.498 [2024-07-23 04:19:45.115489] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:36.758 04:19:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:36.758 04:19:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:24:36.758 04:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:24:36.758 04:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 
00:24:36.758 04:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:24:36.758 04:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:24:36.758 04:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:24:36.758 04:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:36.758 04:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:24:36.758 04:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:36.758 04:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:24:37.017 malloc1 00:24:37.017 04:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:37.276 [2024-07-23 04:19:45.806816] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:37.276 [2024-07-23 04:19:45.806877] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:37.276 [2024-07-23 04:19:45.806906] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:24:37.276 [2024-07-23 04:19:45.806922] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:37.276 [2024-07-23 04:19:45.809671] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:37.276 [2024-07-23 04:19:45.809707] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:37.276 pt1 00:24:37.276 04:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- 
# (( i++ )) 00:24:37.276 04:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:24:37.276 04:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:24:37.276 04:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:24:37.276 04:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:24:37.276 04:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:37.276 04:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:24:37.276 04:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:37.276 04:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:24:37.535 malloc2 00:24:37.535 04:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:37.535 [2024-07-23 04:19:46.306869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:37.535 [2024-07-23 04:19:46.306927] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:37.535 [2024-07-23 04:19:46.306953] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:24:37.535 [2024-07-23 04:19:46.306969] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:37.535 [2024-07-23 04:19:46.309737] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:37.535 [2024-07-23 04:19:46.309776] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: pt2 00:24:37.535 pt2 00:24:37.794 04:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:24:37.794 04:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:24:37.794 04:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:24:37.794 04:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:24:37.794 04:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:24:37.794 04:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:37.794 04:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:24:37.794 04:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:37.794 04:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:24:38.053 malloc3 00:24:38.053 04:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:38.053 [2024-07-23 04:19:46.822334] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:38.053 [2024-07-23 04:19:46.822397] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:38.053 [2024-07-23 04:19:46.822429] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040e80 00:24:38.053 [2024-07-23 04:19:46.822445] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:38.053 [2024-07-23 04:19:46.825180] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:24:38.053 [2024-07-23 04:19:46.825214] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:38.053 pt3 00:24:38.312 04:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:24:38.312 04:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:24:38.312 04:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:24:38.312 04:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:24:38.312 04:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:24:38.312 04:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:38.312 04:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:24:38.312 04:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:38.312 04:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:24:38.571 malloc4 00:24:38.571 04:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:24:38.571 [2024-07-23 04:19:47.329579] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:24:38.571 [2024-07-23 04:19:47.329644] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:38.571 [2024-07-23 04:19:47.329674] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041a80 00:24:38.571 [2024-07-23 04:19:47.329690] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:38.571 [2024-07-23 04:19:47.332526] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:38.571 [2024-07-23 04:19:47.332561] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:24:38.571 pt4 00:24:38.571 04:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:24:38.571 04:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:24:38.571 04:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:24:38.830 [2024-07-23 04:19:47.554287] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:38.830 [2024-07-23 04:19:47.556617] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:38.830 [2024-07-23 04:19:47.556702] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:38.830 [2024-07-23 04:19:47.556760] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:24:38.830 [2024-07-23 04:19:47.556998] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042080 00:24:38.830 [2024-07-23 04:19:47.557018] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:38.830 [2024-07-23 04:19:47.557388] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:24:38.830 [2024-07-23 04:19:47.557641] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042080 00:24:38.830 [2024-07-23 04:19:47.557659] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000042080 00:24:38.830 [2024-07-23 04:19:47.557849] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:24:38.830 04:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:24:38.830 04:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:38.830 04:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:38.830 04:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:38.830 04:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:38.830 04:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:38.830 04:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:38.830 04:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:38.830 04:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:38.830 04:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:38.830 04:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:38.830 04:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:39.089 04:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:39.089 "name": "raid_bdev1", 00:24:39.089 "uuid": "d257fd78-1f26-4eac-b905-132d01eec9cd", 00:24:39.089 "strip_size_kb": 64, 00:24:39.089 "state": "online", 00:24:39.089 "raid_level": "concat", 00:24:39.089 "superblock": true, 00:24:39.089 "num_base_bdevs": 4, 00:24:39.089 "num_base_bdevs_discovered": 4, 00:24:39.089 "num_base_bdevs_operational": 4, 00:24:39.089 "base_bdevs_list": [ 00:24:39.089 { 00:24:39.089 "name": "pt1", 00:24:39.089 
"uuid": "00000000-0000-0000-0000-000000000001", 00:24:39.089 "is_configured": true, 00:24:39.089 "data_offset": 2048, 00:24:39.089 "data_size": 63488 00:24:39.089 }, 00:24:39.089 { 00:24:39.089 "name": "pt2", 00:24:39.089 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:39.089 "is_configured": true, 00:24:39.089 "data_offset": 2048, 00:24:39.089 "data_size": 63488 00:24:39.089 }, 00:24:39.089 { 00:24:39.089 "name": "pt3", 00:24:39.089 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:39.089 "is_configured": true, 00:24:39.089 "data_offset": 2048, 00:24:39.089 "data_size": 63488 00:24:39.089 }, 00:24:39.089 { 00:24:39.089 "name": "pt4", 00:24:39.089 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:39.089 "is_configured": true, 00:24:39.089 "data_offset": 2048, 00:24:39.089 "data_size": 63488 00:24:39.089 } 00:24:39.089 ] 00:24:39.089 }' 00:24:39.089 04:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:39.089 04:19:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:39.657 04:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:24:39.657 04:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:24:39.657 04:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:39.657 04:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:39.657 04:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:39.657 04:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:24:39.657 04:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:39.657 04:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 
00:24:39.916 [2024-07-23 04:19:48.585418] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:39.916 04:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:39.916 "name": "raid_bdev1", 00:24:39.916 "aliases": [ 00:24:39.916 "d257fd78-1f26-4eac-b905-132d01eec9cd" 00:24:39.916 ], 00:24:39.916 "product_name": "Raid Volume", 00:24:39.916 "block_size": 512, 00:24:39.916 "num_blocks": 253952, 00:24:39.916 "uuid": "d257fd78-1f26-4eac-b905-132d01eec9cd", 00:24:39.916 "assigned_rate_limits": { 00:24:39.916 "rw_ios_per_sec": 0, 00:24:39.916 "rw_mbytes_per_sec": 0, 00:24:39.916 "r_mbytes_per_sec": 0, 00:24:39.916 "w_mbytes_per_sec": 0 00:24:39.916 }, 00:24:39.916 "claimed": false, 00:24:39.916 "zoned": false, 00:24:39.916 "supported_io_types": { 00:24:39.916 "read": true, 00:24:39.916 "write": true, 00:24:39.916 "unmap": true, 00:24:39.916 "flush": true, 00:24:39.916 "reset": true, 00:24:39.916 "nvme_admin": false, 00:24:39.916 "nvme_io": false, 00:24:39.916 "nvme_io_md": false, 00:24:39.916 "write_zeroes": true, 00:24:39.916 "zcopy": false, 00:24:39.916 "get_zone_info": false, 00:24:39.916 "zone_management": false, 00:24:39.916 "zone_append": false, 00:24:39.916 "compare": false, 00:24:39.916 "compare_and_write": false, 00:24:39.916 "abort": false, 00:24:39.916 "seek_hole": false, 00:24:39.916 "seek_data": false, 00:24:39.916 "copy": false, 00:24:39.916 "nvme_iov_md": false 00:24:39.916 }, 00:24:39.916 "memory_domains": [ 00:24:39.916 { 00:24:39.916 "dma_device_id": "system", 00:24:39.916 "dma_device_type": 1 00:24:39.916 }, 00:24:39.916 { 00:24:39.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:39.916 "dma_device_type": 2 00:24:39.916 }, 00:24:39.916 { 00:24:39.916 "dma_device_id": "system", 00:24:39.916 "dma_device_type": 1 00:24:39.916 }, 00:24:39.916 { 00:24:39.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:39.916 "dma_device_type": 2 00:24:39.916 }, 00:24:39.916 { 00:24:39.916 
"dma_device_id": "system", 00:24:39.916 "dma_device_type": 1 00:24:39.916 }, 00:24:39.916 { 00:24:39.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:39.916 "dma_device_type": 2 00:24:39.916 }, 00:24:39.916 { 00:24:39.916 "dma_device_id": "system", 00:24:39.916 "dma_device_type": 1 00:24:39.916 }, 00:24:39.916 { 00:24:39.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:39.916 "dma_device_type": 2 00:24:39.916 } 00:24:39.916 ], 00:24:39.916 "driver_specific": { 00:24:39.916 "raid": { 00:24:39.916 "uuid": "d257fd78-1f26-4eac-b905-132d01eec9cd", 00:24:39.916 "strip_size_kb": 64, 00:24:39.916 "state": "online", 00:24:39.916 "raid_level": "concat", 00:24:39.916 "superblock": true, 00:24:39.916 "num_base_bdevs": 4, 00:24:39.916 "num_base_bdevs_discovered": 4, 00:24:39.916 "num_base_bdevs_operational": 4, 00:24:39.916 "base_bdevs_list": [ 00:24:39.916 { 00:24:39.916 "name": "pt1", 00:24:39.916 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:39.916 "is_configured": true, 00:24:39.916 "data_offset": 2048, 00:24:39.916 "data_size": 63488 00:24:39.916 }, 00:24:39.916 { 00:24:39.916 "name": "pt2", 00:24:39.916 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:39.916 "is_configured": true, 00:24:39.916 "data_offset": 2048, 00:24:39.916 "data_size": 63488 00:24:39.916 }, 00:24:39.916 { 00:24:39.916 "name": "pt3", 00:24:39.916 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:39.916 "is_configured": true, 00:24:39.916 "data_offset": 2048, 00:24:39.916 "data_size": 63488 00:24:39.916 }, 00:24:39.916 { 00:24:39.916 "name": "pt4", 00:24:39.916 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:39.916 "is_configured": true, 00:24:39.916 "data_offset": 2048, 00:24:39.916 "data_size": 63488 00:24:39.916 } 00:24:39.916 ] 00:24:39.916 } 00:24:39.916 } 00:24:39.916 }' 00:24:39.916 04:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:39.916 04:19:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:24:39.916 pt2 00:24:39.916 pt3 00:24:39.916 pt4' 00:24:39.916 04:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:39.916 04:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:24:39.916 04:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:40.176 04:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:40.176 "name": "pt1", 00:24:40.176 "aliases": [ 00:24:40.176 "00000000-0000-0000-0000-000000000001" 00:24:40.176 ], 00:24:40.176 "product_name": "passthru", 00:24:40.176 "block_size": 512, 00:24:40.176 "num_blocks": 65536, 00:24:40.176 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:40.176 "assigned_rate_limits": { 00:24:40.176 "rw_ios_per_sec": 0, 00:24:40.176 "rw_mbytes_per_sec": 0, 00:24:40.176 "r_mbytes_per_sec": 0, 00:24:40.176 "w_mbytes_per_sec": 0 00:24:40.176 }, 00:24:40.176 "claimed": true, 00:24:40.176 "claim_type": "exclusive_write", 00:24:40.176 "zoned": false, 00:24:40.176 "supported_io_types": { 00:24:40.176 "read": true, 00:24:40.176 "write": true, 00:24:40.176 "unmap": true, 00:24:40.176 "flush": true, 00:24:40.176 "reset": true, 00:24:40.176 "nvme_admin": false, 00:24:40.176 "nvme_io": false, 00:24:40.176 "nvme_io_md": false, 00:24:40.176 "write_zeroes": true, 00:24:40.176 "zcopy": true, 00:24:40.176 "get_zone_info": false, 00:24:40.176 "zone_management": false, 00:24:40.176 "zone_append": false, 00:24:40.176 "compare": false, 00:24:40.176 "compare_and_write": false, 00:24:40.176 "abort": true, 00:24:40.176 "seek_hole": false, 00:24:40.176 "seek_data": false, 00:24:40.176 "copy": true, 00:24:40.176 "nvme_iov_md": false 00:24:40.176 }, 00:24:40.176 "memory_domains": [ 00:24:40.176 { 00:24:40.176 "dma_device_id": 
"system", 00:24:40.176 "dma_device_type": 1 00:24:40.176 }, 00:24:40.176 { 00:24:40.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:40.176 "dma_device_type": 2 00:24:40.176 } 00:24:40.176 ], 00:24:40.176 "driver_specific": { 00:24:40.176 "passthru": { 00:24:40.176 "name": "pt1", 00:24:40.176 "base_bdev_name": "malloc1" 00:24:40.176 } 00:24:40.176 } 00:24:40.176 }' 00:24:40.176 04:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:40.176 04:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:40.435 04:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:40.435 04:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:40.435 04:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:40.435 04:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:40.435 04:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:40.435 04:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:40.435 04:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:40.435 04:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:40.435 04:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:40.695 04:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:40.695 04:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:40.695 04:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:24:40.695 04:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:40.695 04:19:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:40.695 "name": "pt2", 00:24:40.695 "aliases": [ 00:24:40.695 "00000000-0000-0000-0000-000000000002" 00:24:40.695 ], 00:24:40.695 "product_name": "passthru", 00:24:40.695 "block_size": 512, 00:24:40.695 "num_blocks": 65536, 00:24:40.695 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:40.695 "assigned_rate_limits": { 00:24:40.695 "rw_ios_per_sec": 0, 00:24:40.695 "rw_mbytes_per_sec": 0, 00:24:40.695 "r_mbytes_per_sec": 0, 00:24:40.695 "w_mbytes_per_sec": 0 00:24:40.695 }, 00:24:40.695 "claimed": true, 00:24:40.695 "claim_type": "exclusive_write", 00:24:40.695 "zoned": false, 00:24:40.695 "supported_io_types": { 00:24:40.695 "read": true, 00:24:40.695 "write": true, 00:24:40.695 "unmap": true, 00:24:40.695 "flush": true, 00:24:40.695 "reset": true, 00:24:40.695 "nvme_admin": false, 00:24:40.695 "nvme_io": false, 00:24:40.695 "nvme_io_md": false, 00:24:40.695 "write_zeroes": true, 00:24:40.695 "zcopy": true, 00:24:40.695 "get_zone_info": false, 00:24:40.695 "zone_management": false, 00:24:40.695 "zone_append": false, 00:24:40.695 "compare": false, 00:24:40.695 "compare_and_write": false, 00:24:40.695 "abort": true, 00:24:40.695 "seek_hole": false, 00:24:40.695 "seek_data": false, 00:24:40.695 "copy": true, 00:24:40.695 "nvme_iov_md": false 00:24:40.695 }, 00:24:40.695 "memory_domains": [ 00:24:40.695 { 00:24:40.695 "dma_device_id": "system", 00:24:40.695 "dma_device_type": 1 00:24:40.695 }, 00:24:40.695 { 00:24:40.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:40.695 "dma_device_type": 2 00:24:40.695 } 00:24:40.695 ], 00:24:40.695 "driver_specific": { 00:24:40.695 "passthru": { 00:24:40.695 "name": "pt2", 00:24:40.695 "base_bdev_name": "malloc2" 00:24:40.695 } 00:24:40.695 } 00:24:40.695 }' 00:24:40.695 04:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:40.954 04:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:24:40.954 04:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:40.954 04:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:40.954 04:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:40.954 04:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:40.954 04:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:40.954 04:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:40.954 04:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:40.954 04:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:41.213 04:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:41.213 04:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:41.213 04:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:41.213 04:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:24:41.213 04:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:41.472 04:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:41.472 "name": "pt3", 00:24:41.472 "aliases": [ 00:24:41.472 "00000000-0000-0000-0000-000000000003" 00:24:41.472 ], 00:24:41.472 "product_name": "passthru", 00:24:41.472 "block_size": 512, 00:24:41.472 "num_blocks": 65536, 00:24:41.472 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:41.472 "assigned_rate_limits": { 00:24:41.472 "rw_ios_per_sec": 0, 00:24:41.472 "rw_mbytes_per_sec": 0, 00:24:41.472 "r_mbytes_per_sec": 0, 00:24:41.472 "w_mbytes_per_sec": 0 00:24:41.472 }, 
00:24:41.472 "claimed": true, 00:24:41.472 "claim_type": "exclusive_write", 00:24:41.472 "zoned": false, 00:24:41.472 "supported_io_types": { 00:24:41.472 "read": true, 00:24:41.472 "write": true, 00:24:41.472 "unmap": true, 00:24:41.472 "flush": true, 00:24:41.472 "reset": true, 00:24:41.472 "nvme_admin": false, 00:24:41.472 "nvme_io": false, 00:24:41.472 "nvme_io_md": false, 00:24:41.472 "write_zeroes": true, 00:24:41.472 "zcopy": true, 00:24:41.473 "get_zone_info": false, 00:24:41.473 "zone_management": false, 00:24:41.473 "zone_append": false, 00:24:41.473 "compare": false, 00:24:41.473 "compare_and_write": false, 00:24:41.473 "abort": true, 00:24:41.473 "seek_hole": false, 00:24:41.473 "seek_data": false, 00:24:41.473 "copy": true, 00:24:41.473 "nvme_iov_md": false 00:24:41.473 }, 00:24:41.473 "memory_domains": [ 00:24:41.473 { 00:24:41.473 "dma_device_id": "system", 00:24:41.473 "dma_device_type": 1 00:24:41.473 }, 00:24:41.473 { 00:24:41.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:41.473 "dma_device_type": 2 00:24:41.473 } 00:24:41.473 ], 00:24:41.473 "driver_specific": { 00:24:41.473 "passthru": { 00:24:41.473 "name": "pt3", 00:24:41.473 "base_bdev_name": "malloc3" 00:24:41.473 } 00:24:41.473 } 00:24:41.473 }' 00:24:41.473 04:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:41.473 04:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:41.473 04:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:41.473 04:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:41.473 04:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:41.473 04:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:41.473 04:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:41.473 04:19:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:41.732 04:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:41.732 04:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:41.732 04:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:41.732 04:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:41.732 04:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:41.732 04:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:24:41.732 04:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:41.991 04:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:41.991 "name": "pt4", 00:24:41.991 "aliases": [ 00:24:41.991 "00000000-0000-0000-0000-000000000004" 00:24:41.991 ], 00:24:41.991 "product_name": "passthru", 00:24:41.991 "block_size": 512, 00:24:41.991 "num_blocks": 65536, 00:24:41.991 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:41.991 "assigned_rate_limits": { 00:24:41.991 "rw_ios_per_sec": 0, 00:24:41.991 "rw_mbytes_per_sec": 0, 00:24:41.991 "r_mbytes_per_sec": 0, 00:24:41.991 "w_mbytes_per_sec": 0 00:24:41.991 }, 00:24:41.991 "claimed": true, 00:24:41.991 "claim_type": "exclusive_write", 00:24:41.991 "zoned": false, 00:24:41.991 "supported_io_types": { 00:24:41.991 "read": true, 00:24:41.991 "write": true, 00:24:41.991 "unmap": true, 00:24:41.991 "flush": true, 00:24:41.991 "reset": true, 00:24:41.991 "nvme_admin": false, 00:24:41.991 "nvme_io": false, 00:24:41.991 "nvme_io_md": false, 00:24:41.991 "write_zeroes": true, 00:24:41.991 "zcopy": true, 00:24:41.991 "get_zone_info": false, 00:24:41.991 "zone_management": false, 00:24:41.991 "zone_append": false, 00:24:41.991 
"compare": false, 00:24:41.991 "compare_and_write": false, 00:24:41.991 "abort": true, 00:24:41.991 "seek_hole": false, 00:24:41.991 "seek_data": false, 00:24:41.991 "copy": true, 00:24:41.991 "nvme_iov_md": false 00:24:41.991 }, 00:24:41.991 "memory_domains": [ 00:24:41.991 { 00:24:41.991 "dma_device_id": "system", 00:24:41.991 "dma_device_type": 1 00:24:41.991 }, 00:24:41.991 { 00:24:41.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:41.991 "dma_device_type": 2 00:24:41.991 } 00:24:41.991 ], 00:24:41.991 "driver_specific": { 00:24:41.991 "passthru": { 00:24:41.991 "name": "pt4", 00:24:41.991 "base_bdev_name": "malloc4" 00:24:41.991 } 00:24:41.991 } 00:24:41.991 }' 00:24:41.991 04:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:41.991 04:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:41.991 04:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:41.991 04:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:41.991 04:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:41.991 04:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:41.991 04:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:42.250 04:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:42.250 04:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:42.250 04:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:42.250 04:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:42.250 04:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:42.250 04:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:42.250 04:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:24:42.509 [2024-07-23 04:19:51.144355] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:42.509 04:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=d257fd78-1f26-4eac-b905-132d01eec9cd 00:24:42.509 04:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z d257fd78-1f26-4eac-b905-132d01eec9cd ']' 00:24:42.509 04:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:42.767 [2024-07-23 04:19:51.372563] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:42.767 [2024-07-23 04:19:51.372595] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:42.767 [2024-07-23 04:19:51.372685] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:42.767 [2024-07-23 04:19:51.372767] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:42.767 [2024-07-23 04:19:51.372787] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042080 name raid_bdev1, state offline 00:24:42.767 04:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:42.767 04:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:24:43.026 04:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:24:43.026 04:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:24:43.027 04:19:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:24:43.027 04:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:24:43.286 04:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:24:43.286 04:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:43.286 04:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:24:43.286 04:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:24:43.545 04:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:24:43.545 04:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:24:43.804 04:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:24:43.804 04:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:24:44.063 04:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:24:44.063 04:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:44.063 04:19:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@648 -- # local es=0 00:24:44.063 04:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:44.063 04:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:44.063 04:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:44.063 04:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:44.063 04:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:44.063 04:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:44.063 04:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:44.063 04:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:44.063 04:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:24:44.063 04:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:44.322 [2024-07-23 04:19:52.948770] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:24:44.322 [2024-07-23 04:19:52.951090] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is 
claimed 00:24:44.322 [2024-07-23 04:19:52.951161] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:24:44.322 [2024-07-23 04:19:52.951211] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:24:44.322 [2024-07-23 04:19:52.951271] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:24:44.322 [2024-07-23 04:19:52.951325] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:24:44.322 [2024-07-23 04:19:52.951355] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:24:44.322 [2024-07-23 04:19:52.951385] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:24:44.322 [2024-07-23 04:19:52.951407] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:44.322 [2024-07-23 04:19:52.951424] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042680 name raid_bdev1, state configuring 00:24:44.322 request: 00:24:44.322 { 00:24:44.322 "name": "raid_bdev1", 00:24:44.322 "raid_level": "concat", 00:24:44.322 "base_bdevs": [ 00:24:44.322 "malloc1", 00:24:44.322 "malloc2", 00:24:44.322 "malloc3", 00:24:44.322 "malloc4" 00:24:44.322 ], 00:24:44.322 "strip_size_kb": 64, 00:24:44.322 "superblock": false, 00:24:44.322 "method": "bdev_raid_create", 00:24:44.322 "req_id": 1 00:24:44.322 } 00:24:44.322 Got JSON-RPC error response 00:24:44.322 response: 00:24:44.322 { 00:24:44.322 "code": -17, 00:24:44.322 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:24:44.322 } 00:24:44.322 04:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:24:44.322 04:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 
00:24:44.322 04:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:44.322 04:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:44.322 04:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:44.322 04:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:24:44.581 04:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:24:44.581 04:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:24:44.581 04:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:44.839 [2024-07-23 04:19:53.405909] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:44.839 [2024-07-23 04:19:53.405977] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:44.839 [2024-07-23 04:19:53.406001] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042c80 00:24:44.839 [2024-07-23 04:19:53.406018] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:44.839 [2024-07-23 04:19:53.408827] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:44.839 [2024-07-23 04:19:53.408866] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:44.839 [2024-07-23 04:19:53.408963] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:24:44.839 [2024-07-23 04:19:53.409042] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:44.839 pt1 00:24:44.839 04:19:53 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:24:44.839 04:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:44.839 04:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:44.839 04:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:44.839 04:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:44.839 04:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:44.839 04:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:44.839 04:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:44.839 04:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:44.839 04:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:44.839 04:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:44.839 04:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:45.098 04:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:45.098 "name": "raid_bdev1", 00:24:45.098 "uuid": "d257fd78-1f26-4eac-b905-132d01eec9cd", 00:24:45.098 "strip_size_kb": 64, 00:24:45.098 "state": "configuring", 00:24:45.098 "raid_level": "concat", 00:24:45.098 "superblock": true, 00:24:45.098 "num_base_bdevs": 4, 00:24:45.098 "num_base_bdevs_discovered": 1, 00:24:45.098 "num_base_bdevs_operational": 4, 00:24:45.098 "base_bdevs_list": [ 00:24:45.098 { 00:24:45.098 "name": "pt1", 00:24:45.098 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:45.098 
"is_configured": true, 00:24:45.098 "data_offset": 2048, 00:24:45.098 "data_size": 63488 00:24:45.098 }, 00:24:45.098 { 00:24:45.098 "name": null, 00:24:45.098 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:45.098 "is_configured": false, 00:24:45.098 "data_offset": 2048, 00:24:45.098 "data_size": 63488 00:24:45.098 }, 00:24:45.098 { 00:24:45.098 "name": null, 00:24:45.098 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:45.098 "is_configured": false, 00:24:45.098 "data_offset": 2048, 00:24:45.098 "data_size": 63488 00:24:45.098 }, 00:24:45.098 { 00:24:45.098 "name": null, 00:24:45.098 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:45.098 "is_configured": false, 00:24:45.098 "data_offset": 2048, 00:24:45.098 "data_size": 63488 00:24:45.098 } 00:24:45.098 ] 00:24:45.098 }' 00:24:45.098 04:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:45.098 04:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:45.666 04:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:24:45.666 04:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:45.666 [2024-07-23 04:19:54.448748] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:45.666 [2024-07-23 04:19:54.448815] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:45.666 [2024-07-23 04:19:54.448840] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043580 00:24:45.666 [2024-07-23 04:19:54.448858] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:45.666 [2024-07-23 04:19:54.449416] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:45.666 [2024-07-23 04:19:54.449444] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:45.666 [2024-07-23 04:19:54.449537] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:45.666 [2024-07-23 04:19:54.449571] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:45.926 pt2 00:24:45.926 04:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:45.926 [2024-07-23 04:19:54.677395] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:24:45.926 04:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:24:45.926 04:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:45.926 04:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:45.926 04:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:45.926 04:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:45.926 04:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:45.926 04:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:45.926 04:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:45.926 04:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:45.926 04:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:45.926 04:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:45.926 04:19:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:46.185 04:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:46.185 "name": "raid_bdev1", 00:24:46.185 "uuid": "d257fd78-1f26-4eac-b905-132d01eec9cd", 00:24:46.185 "strip_size_kb": 64, 00:24:46.185 "state": "configuring", 00:24:46.185 "raid_level": "concat", 00:24:46.185 "superblock": true, 00:24:46.185 "num_base_bdevs": 4, 00:24:46.185 "num_base_bdevs_discovered": 1, 00:24:46.185 "num_base_bdevs_operational": 4, 00:24:46.185 "base_bdevs_list": [ 00:24:46.185 { 00:24:46.185 "name": "pt1", 00:24:46.185 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:46.185 "is_configured": true, 00:24:46.185 "data_offset": 2048, 00:24:46.185 "data_size": 63488 00:24:46.185 }, 00:24:46.185 { 00:24:46.185 "name": null, 00:24:46.185 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:46.185 "is_configured": false, 00:24:46.185 "data_offset": 2048, 00:24:46.185 "data_size": 63488 00:24:46.185 }, 00:24:46.185 { 00:24:46.185 "name": null, 00:24:46.185 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:46.185 "is_configured": false, 00:24:46.185 "data_offset": 2048, 00:24:46.185 "data_size": 63488 00:24:46.185 }, 00:24:46.185 { 00:24:46.185 "name": null, 00:24:46.185 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:46.185 "is_configured": false, 00:24:46.185 "data_offset": 2048, 00:24:46.185 "data_size": 63488 00:24:46.185 } 00:24:46.185 ] 00:24:46.185 }' 00:24:46.185 04:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:46.185 04:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:46.772 04:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:24:46.772 04:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:24:46.772 04:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:47.031 [2024-07-23 04:19:55.704096] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:47.031 [2024-07-23 04:19:55.704164] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:47.031 [2024-07-23 04:19:55.704190] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043880 00:24:47.031 [2024-07-23 04:19:55.704205] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:47.031 [2024-07-23 04:19:55.704749] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:47.031 [2024-07-23 04:19:55.704774] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:47.031 [2024-07-23 04:19:55.704874] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:47.031 [2024-07-23 04:19:55.704903] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:47.031 pt2 00:24:47.031 04:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:24:47.031 04:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:24:47.031 04:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:47.290 [2024-07-23 04:19:55.932973] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:47.290 [2024-07-23 04:19:55.933031] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:47.290 [2024-07-23 04:19:55.933063] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043b80 00:24:47.290 
[2024-07-23 04:19:55.933078] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:47.290 [2024-07-23 04:19:55.933670] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:47.290 [2024-07-23 04:19:55.933695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:47.290 [2024-07-23 04:19:55.933787] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:24:47.290 [2024-07-23 04:19:55.933814] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:47.290 pt3 00:24:47.290 04:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:24:47.290 04:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:24:47.290 04:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:24:47.552 [2024-07-23 04:19:56.157586] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:24:47.552 [2024-07-23 04:19:56.157642] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:47.552 [2024-07-23 04:19:56.157668] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043e80 00:24:47.552 [2024-07-23 04:19:56.157683] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:47.552 [2024-07-23 04:19:56.158220] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:47.552 [2024-07-23 04:19:56.158246] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:24:47.552 [2024-07-23 04:19:56.158345] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:24:47.552 [2024-07-23 04:19:56.158374] bdev_raid.c:3288:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt4 is claimed 00:24:47.552 [2024-07-23 04:19:56.158582] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000043280 00:24:47.552 [2024-07-23 04:19:56.158597] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:47.552 [2024-07-23 04:19:56.158910] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:24:47.552 [2024-07-23 04:19:56.159125] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000043280 00:24:47.552 [2024-07-23 04:19:56.159153] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000043280 00:24:47.552 [2024-07-23 04:19:56.159333] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:47.553 pt4 00:24:47.553 04:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:24:47.553 04:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:24:47.553 04:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:24:47.553 04:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:47.553 04:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:47.553 04:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:47.553 04:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:47.553 04:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:47.553 04:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:47.553 04:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:47.553 04:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 
-- # local num_base_bdevs_discovered 00:24:47.553 04:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:47.553 04:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:47.553 04:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:47.812 04:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:47.812 "name": "raid_bdev1", 00:24:47.812 "uuid": "d257fd78-1f26-4eac-b905-132d01eec9cd", 00:24:47.812 "strip_size_kb": 64, 00:24:47.812 "state": "online", 00:24:47.812 "raid_level": "concat", 00:24:47.812 "superblock": true, 00:24:47.812 "num_base_bdevs": 4, 00:24:47.812 "num_base_bdevs_discovered": 4, 00:24:47.812 "num_base_bdevs_operational": 4, 00:24:47.812 "base_bdevs_list": [ 00:24:47.812 { 00:24:47.812 "name": "pt1", 00:24:47.812 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:47.812 "is_configured": true, 00:24:47.812 "data_offset": 2048, 00:24:47.812 "data_size": 63488 00:24:47.812 }, 00:24:47.812 { 00:24:47.812 "name": "pt2", 00:24:47.812 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:47.812 "is_configured": true, 00:24:47.812 "data_offset": 2048, 00:24:47.812 "data_size": 63488 00:24:47.812 }, 00:24:47.812 { 00:24:47.812 "name": "pt3", 00:24:47.812 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:47.812 "is_configured": true, 00:24:47.812 "data_offset": 2048, 00:24:47.812 "data_size": 63488 00:24:47.812 }, 00:24:47.812 { 00:24:47.812 "name": "pt4", 00:24:47.812 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:47.812 "is_configured": true, 00:24:47.812 "data_offset": 2048, 00:24:47.812 "data_size": 63488 00:24:47.812 } 00:24:47.812 ] 00:24:47.812 }' 00:24:47.812 04:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:47.812 04:19:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:48.378 04:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:24:48.378 04:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:24:48.378 04:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:48.378 04:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:48.378 04:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:48.378 04:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:24:48.378 04:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:48.378 04:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:48.378 [2024-07-23 04:19:57.140614] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:48.637 04:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:48.637 "name": "raid_bdev1", 00:24:48.637 "aliases": [ 00:24:48.637 "d257fd78-1f26-4eac-b905-132d01eec9cd" 00:24:48.637 ], 00:24:48.637 "product_name": "Raid Volume", 00:24:48.637 "block_size": 512, 00:24:48.637 "num_blocks": 253952, 00:24:48.637 "uuid": "d257fd78-1f26-4eac-b905-132d01eec9cd", 00:24:48.637 "assigned_rate_limits": { 00:24:48.637 "rw_ios_per_sec": 0, 00:24:48.637 "rw_mbytes_per_sec": 0, 00:24:48.637 "r_mbytes_per_sec": 0, 00:24:48.637 "w_mbytes_per_sec": 0 00:24:48.637 }, 00:24:48.637 "claimed": false, 00:24:48.637 "zoned": false, 00:24:48.637 "supported_io_types": { 00:24:48.637 "read": true, 00:24:48.637 "write": true, 00:24:48.637 "unmap": true, 00:24:48.637 "flush": true, 00:24:48.637 "reset": true, 00:24:48.637 "nvme_admin": 
false, 00:24:48.637 "nvme_io": false, 00:24:48.637 "nvme_io_md": false, 00:24:48.637 "write_zeroes": true, 00:24:48.637 "zcopy": false, 00:24:48.637 "get_zone_info": false, 00:24:48.637 "zone_management": false, 00:24:48.637 "zone_append": false, 00:24:48.637 "compare": false, 00:24:48.637 "compare_and_write": false, 00:24:48.637 "abort": false, 00:24:48.637 "seek_hole": false, 00:24:48.637 "seek_data": false, 00:24:48.637 "copy": false, 00:24:48.637 "nvme_iov_md": false 00:24:48.637 }, 00:24:48.637 "memory_domains": [ 00:24:48.637 { 00:24:48.637 "dma_device_id": "system", 00:24:48.637 "dma_device_type": 1 00:24:48.637 }, 00:24:48.637 { 00:24:48.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:48.637 "dma_device_type": 2 00:24:48.637 }, 00:24:48.637 { 00:24:48.637 "dma_device_id": "system", 00:24:48.637 "dma_device_type": 1 00:24:48.637 }, 00:24:48.637 { 00:24:48.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:48.637 "dma_device_type": 2 00:24:48.637 }, 00:24:48.637 { 00:24:48.637 "dma_device_id": "system", 00:24:48.637 "dma_device_type": 1 00:24:48.637 }, 00:24:48.637 { 00:24:48.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:48.637 "dma_device_type": 2 00:24:48.637 }, 00:24:48.637 { 00:24:48.637 "dma_device_id": "system", 00:24:48.637 "dma_device_type": 1 00:24:48.637 }, 00:24:48.637 { 00:24:48.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:48.637 "dma_device_type": 2 00:24:48.637 } 00:24:48.637 ], 00:24:48.637 "driver_specific": { 00:24:48.637 "raid": { 00:24:48.637 "uuid": "d257fd78-1f26-4eac-b905-132d01eec9cd", 00:24:48.637 "strip_size_kb": 64, 00:24:48.637 "state": "online", 00:24:48.637 "raid_level": "concat", 00:24:48.637 "superblock": true, 00:24:48.637 "num_base_bdevs": 4, 00:24:48.637 "num_base_bdevs_discovered": 4, 00:24:48.637 "num_base_bdevs_operational": 4, 00:24:48.637 "base_bdevs_list": [ 00:24:48.637 { 00:24:48.637 "name": "pt1", 00:24:48.637 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:48.637 "is_configured": true, 
00:24:48.637 "data_offset": 2048, 00:24:48.637 "data_size": 63488 00:24:48.637 }, 00:24:48.637 { 00:24:48.637 "name": "pt2", 00:24:48.637 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:48.637 "is_configured": true, 00:24:48.637 "data_offset": 2048, 00:24:48.637 "data_size": 63488 00:24:48.637 }, 00:24:48.637 { 00:24:48.637 "name": "pt3", 00:24:48.637 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:48.637 "is_configured": true, 00:24:48.637 "data_offset": 2048, 00:24:48.637 "data_size": 63488 00:24:48.637 }, 00:24:48.637 { 00:24:48.637 "name": "pt4", 00:24:48.637 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:48.637 "is_configured": true, 00:24:48.637 "data_offset": 2048, 00:24:48.637 "data_size": 63488 00:24:48.637 } 00:24:48.637 ] 00:24:48.637 } 00:24:48.637 } 00:24:48.637 }' 00:24:48.637 04:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:48.638 04:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:24:48.638 pt2 00:24:48.638 pt3 00:24:48.638 pt4' 00:24:48.638 04:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:48.638 04:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:24:48.638 04:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:48.896 04:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:48.896 "name": "pt1", 00:24:48.896 "aliases": [ 00:24:48.896 "00000000-0000-0000-0000-000000000001" 00:24:48.896 ], 00:24:48.896 "product_name": "passthru", 00:24:48.896 "block_size": 512, 00:24:48.896 "num_blocks": 65536, 00:24:48.896 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:48.896 "assigned_rate_limits": { 00:24:48.896 "rw_ios_per_sec": 0, 
00:24:48.896 "rw_mbytes_per_sec": 0, 00:24:48.896 "r_mbytes_per_sec": 0, 00:24:48.896 "w_mbytes_per_sec": 0 00:24:48.896 }, 00:24:48.896 "claimed": true, 00:24:48.896 "claim_type": "exclusive_write", 00:24:48.896 "zoned": false, 00:24:48.896 "supported_io_types": { 00:24:48.896 "read": true, 00:24:48.896 "write": true, 00:24:48.896 "unmap": true, 00:24:48.896 "flush": true, 00:24:48.896 "reset": true, 00:24:48.896 "nvme_admin": false, 00:24:48.897 "nvme_io": false, 00:24:48.897 "nvme_io_md": false, 00:24:48.897 "write_zeroes": true, 00:24:48.897 "zcopy": true, 00:24:48.897 "get_zone_info": false, 00:24:48.897 "zone_management": false, 00:24:48.897 "zone_append": false, 00:24:48.897 "compare": false, 00:24:48.897 "compare_and_write": false, 00:24:48.897 "abort": true, 00:24:48.897 "seek_hole": false, 00:24:48.897 "seek_data": false, 00:24:48.897 "copy": true, 00:24:48.897 "nvme_iov_md": false 00:24:48.897 }, 00:24:48.897 "memory_domains": [ 00:24:48.897 { 00:24:48.897 "dma_device_id": "system", 00:24:48.897 "dma_device_type": 1 00:24:48.897 }, 00:24:48.897 { 00:24:48.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:48.897 "dma_device_type": 2 00:24:48.897 } 00:24:48.897 ], 00:24:48.897 "driver_specific": { 00:24:48.897 "passthru": { 00:24:48.897 "name": "pt1", 00:24:48.897 "base_bdev_name": "malloc1" 00:24:48.897 } 00:24:48.897 } 00:24:48.897 }' 00:24:48.897 04:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:48.897 04:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:48.897 04:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:48.897 04:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:48.897 04:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:48.897 04:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:48.897 04:19:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:48.897 04:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:49.156 04:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:49.156 04:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:49.156 04:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:49.156 04:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:49.156 04:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:49.156 04:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:24:49.156 04:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:49.415 04:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:49.415 "name": "pt2", 00:24:49.415 "aliases": [ 00:24:49.415 "00000000-0000-0000-0000-000000000002" 00:24:49.415 ], 00:24:49.415 "product_name": "passthru", 00:24:49.415 "block_size": 512, 00:24:49.415 "num_blocks": 65536, 00:24:49.415 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:49.415 "assigned_rate_limits": { 00:24:49.415 "rw_ios_per_sec": 0, 00:24:49.415 "rw_mbytes_per_sec": 0, 00:24:49.415 "r_mbytes_per_sec": 0, 00:24:49.415 "w_mbytes_per_sec": 0 00:24:49.415 }, 00:24:49.415 "claimed": true, 00:24:49.415 "claim_type": "exclusive_write", 00:24:49.415 "zoned": false, 00:24:49.415 "supported_io_types": { 00:24:49.415 "read": true, 00:24:49.415 "write": true, 00:24:49.415 "unmap": true, 00:24:49.415 "flush": true, 00:24:49.415 "reset": true, 00:24:49.415 "nvme_admin": false, 00:24:49.415 "nvme_io": false, 00:24:49.415 "nvme_io_md": false, 00:24:49.415 "write_zeroes": true, 00:24:49.415 "zcopy": 
true, 00:24:49.415 "get_zone_info": false, 00:24:49.415 "zone_management": false, 00:24:49.415 "zone_append": false, 00:24:49.415 "compare": false, 00:24:49.415 "compare_and_write": false, 00:24:49.415 "abort": true, 00:24:49.415 "seek_hole": false, 00:24:49.415 "seek_data": false, 00:24:49.415 "copy": true, 00:24:49.415 "nvme_iov_md": false 00:24:49.415 }, 00:24:49.415 "memory_domains": [ 00:24:49.415 { 00:24:49.415 "dma_device_id": "system", 00:24:49.415 "dma_device_type": 1 00:24:49.415 }, 00:24:49.415 { 00:24:49.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:49.415 "dma_device_type": 2 00:24:49.415 } 00:24:49.415 ], 00:24:49.415 "driver_specific": { 00:24:49.415 "passthru": { 00:24:49.415 "name": "pt2", 00:24:49.415 "base_bdev_name": "malloc2" 00:24:49.415 } 00:24:49.415 } 00:24:49.415 }' 00:24:49.415 04:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:49.415 04:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:49.415 04:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:49.415 04:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:49.415 04:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:49.415 04:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:49.415 04:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:49.674 04:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:49.674 04:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:49.674 04:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:49.674 04:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:49.674 04:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
00:24:49.674 04:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:49.674 04:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:24:49.674 04:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:49.934 04:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:49.934 "name": "pt3", 00:24:49.934 "aliases": [ 00:24:49.934 "00000000-0000-0000-0000-000000000003" 00:24:49.934 ], 00:24:49.934 "product_name": "passthru", 00:24:49.934 "block_size": 512, 00:24:49.934 "num_blocks": 65536, 00:24:49.934 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:49.934 "assigned_rate_limits": { 00:24:49.934 "rw_ios_per_sec": 0, 00:24:49.934 "rw_mbytes_per_sec": 0, 00:24:49.934 "r_mbytes_per_sec": 0, 00:24:49.934 "w_mbytes_per_sec": 0 00:24:49.934 }, 00:24:49.934 "claimed": true, 00:24:49.934 "claim_type": "exclusive_write", 00:24:49.934 "zoned": false, 00:24:49.934 "supported_io_types": { 00:24:49.934 "read": true, 00:24:49.934 "write": true, 00:24:49.934 "unmap": true, 00:24:49.934 "flush": true, 00:24:49.934 "reset": true, 00:24:49.934 "nvme_admin": false, 00:24:49.934 "nvme_io": false, 00:24:49.934 "nvme_io_md": false, 00:24:49.934 "write_zeroes": true, 00:24:49.934 "zcopy": true, 00:24:49.934 "get_zone_info": false, 00:24:49.934 "zone_management": false, 00:24:49.934 "zone_append": false, 00:24:49.934 "compare": false, 00:24:49.934 "compare_and_write": false, 00:24:49.934 "abort": true, 00:24:49.934 "seek_hole": false, 00:24:49.934 "seek_data": false, 00:24:49.934 "copy": true, 00:24:49.934 "nvme_iov_md": false 00:24:49.934 }, 00:24:49.934 "memory_domains": [ 00:24:49.934 { 00:24:49.934 "dma_device_id": "system", 00:24:49.934 "dma_device_type": 1 00:24:49.934 }, 00:24:49.934 { 00:24:49.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:24:49.934 "dma_device_type": 2 00:24:49.934 } 00:24:49.934 ], 00:24:49.934 "driver_specific": { 00:24:49.934 "passthru": { 00:24:49.934 "name": "pt3", 00:24:49.934 "base_bdev_name": "malloc3" 00:24:49.934 } 00:24:49.934 } 00:24:49.934 }' 00:24:49.934 04:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:49.934 04:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:49.934 04:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:49.934 04:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:49.934 04:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:50.193 04:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:50.193 04:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:50.193 04:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:50.193 04:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:50.193 04:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:50.193 04:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:50.193 04:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:50.193 04:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:50.193 04:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:24:50.193 04:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:50.451 04:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:50.451 "name": "pt4", 00:24:50.451 "aliases": [ 00:24:50.451 
"00000000-0000-0000-0000-000000000004" 00:24:50.451 ], 00:24:50.451 "product_name": "passthru", 00:24:50.451 "block_size": 512, 00:24:50.451 "num_blocks": 65536, 00:24:50.451 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:50.451 "assigned_rate_limits": { 00:24:50.451 "rw_ios_per_sec": 0, 00:24:50.451 "rw_mbytes_per_sec": 0, 00:24:50.451 "r_mbytes_per_sec": 0, 00:24:50.451 "w_mbytes_per_sec": 0 00:24:50.451 }, 00:24:50.451 "claimed": true, 00:24:50.451 "claim_type": "exclusive_write", 00:24:50.451 "zoned": false, 00:24:50.451 "supported_io_types": { 00:24:50.451 "read": true, 00:24:50.451 "write": true, 00:24:50.451 "unmap": true, 00:24:50.451 "flush": true, 00:24:50.451 "reset": true, 00:24:50.451 "nvme_admin": false, 00:24:50.451 "nvme_io": false, 00:24:50.451 "nvme_io_md": false, 00:24:50.451 "write_zeroes": true, 00:24:50.451 "zcopy": true, 00:24:50.451 "get_zone_info": false, 00:24:50.451 "zone_management": false, 00:24:50.451 "zone_append": false, 00:24:50.451 "compare": false, 00:24:50.451 "compare_and_write": false, 00:24:50.451 "abort": true, 00:24:50.451 "seek_hole": false, 00:24:50.451 "seek_data": false, 00:24:50.451 "copy": true, 00:24:50.451 "nvme_iov_md": false 00:24:50.451 }, 00:24:50.451 "memory_domains": [ 00:24:50.451 { 00:24:50.451 "dma_device_id": "system", 00:24:50.451 "dma_device_type": 1 00:24:50.451 }, 00:24:50.451 { 00:24:50.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:50.451 "dma_device_type": 2 00:24:50.451 } 00:24:50.451 ], 00:24:50.451 "driver_specific": { 00:24:50.451 "passthru": { 00:24:50.451 "name": "pt4", 00:24:50.451 "base_bdev_name": "malloc4" 00:24:50.451 } 00:24:50.451 } 00:24:50.451 }' 00:24:50.451 04:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:50.451 04:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:50.452 04:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:50.452 04:19:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:50.710 04:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:50.710 04:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:50.710 04:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:50.710 04:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:50.710 04:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:50.710 04:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:50.710 04:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:50.710 04:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:50.710 04:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:50.710 04:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:24:50.969 [2024-07-23 04:19:59.679515] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:50.969 04:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' d257fd78-1f26-4eac-b905-132d01eec9cd '!=' d257fd78-1f26-4eac-b905-132d01eec9cd ']' 00:24:50.969 04:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:24:50.969 04:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:24:50.969 04:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:24:50.969 04:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2731534 00:24:50.969 04:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 2731534 ']' 00:24:50.969 04:19:59 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 2731534 00:24:50.969 04:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:24:50.969 04:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:50.969 04:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2731534 00:24:51.228 04:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:51.228 04:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:51.228 04:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2731534' 00:24:51.228 killing process with pid 2731534 00:24:51.228 04:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 2731534 00:24:51.228 [2024-07-23 04:19:59.757541] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:51.228 [2024-07-23 04:19:59.757638] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:51.228 04:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 2731534 00:24:51.228 [2024-07-23 04:19:59.757721] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:51.228 [2024-07-23 04:19:59.757737] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000043280 name raid_bdev1, state offline 00:24:51.488 [2024-07-23 04:20:00.185427] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:53.392 04:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:24:53.392 00:24:53.392 real 0m17.612s 00:24:53.392 user 0m29.782s 00:24:53.392 sys 0m3.020s 00:24:53.392 04:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:53.392 04:20:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:53.392 ************************************ 00:24:53.392 END TEST raid_superblock_test 00:24:53.392 ************************************ 00:24:53.392 04:20:01 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:24:53.392 04:20:01 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:24:53.392 04:20:01 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:24:53.392 04:20:01 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:53.392 04:20:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:53.392 ************************************ 00:24:53.392 START TEST raid_read_error_test 00:24:53.392 ************************************ 00:24:53.392 04:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 4 read 00:24:53.392 04:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:24:53.392 04:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:24:53.392 04:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:24:53.392 04:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:24:53.392 04:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:53.392 04:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:24:53.392 04:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:53.392 04:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:53.392 04:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:24:53.392 04:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:53.392 04:20:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:53.392 04:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:24:53.392 04:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:53.392 04:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:53.392 04:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:24:53.392 04:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:53.392 04:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:53.392 04:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:53.392 04:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:24:53.392 04:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:24:53.392 04:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:24:53.392 04:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:24:53.392 04:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:24:53.392 04:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:24:53.392 04:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:24:53.392 04:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:24:53.392 04:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:24:53.392 04:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:24:53.392 04:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.L5l42BPnN2 00:24:53.392 
04:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2734773 00:24:53.392 04:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2734773 /var/tmp/spdk-raid.sock 00:24:53.392 04:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:24:53.392 04:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 2734773 ']' 00:24:53.392 04:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:53.393 04:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:53.393 04:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:53.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:53.393 04:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:53.393 04:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:53.393 [2024-07-23 04:20:01.992683] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:24:53.393 [2024-07-23 04:20:01.992806] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2734773 ] 00:24:53.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:53.393 EAL: Requested device 0000:3d:01.0 cannot be used 00:24:53.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:53.393 EAL: Requested device 0000:3d:01.1 cannot be used 00:24:53.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:53.393 EAL: Requested device 0000:3d:01.2 cannot be used 00:24:53.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:53.393 EAL: Requested device 0000:3d:01.3 cannot be used 00:24:53.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:53.393 EAL: Requested device 0000:3d:01.4 cannot be used 00:24:53.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:53.393 EAL: Requested device 0000:3d:01.5 cannot be used 00:24:53.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:53.393 EAL: Requested device 0000:3d:01.6 cannot be used 00:24:53.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:53.393 EAL: Requested device 0000:3d:01.7 cannot be used 00:24:53.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:53.393 EAL: Requested device 0000:3d:02.0 cannot be used 00:24:53.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:53.393 EAL: Requested device 0000:3d:02.1 cannot be used 00:24:53.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:53.393 EAL: Requested device 0000:3d:02.2 cannot be used 00:24:53.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:53.393 EAL: Requested device 0000:3d:02.3 cannot be used 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:53.393 EAL: Requested device 0000:3f:02.2 cannot be used 00:24:53.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:53.393 EAL: Requested device 0000:3f:02.3 cannot be used 00:24:53.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:53.393 EAL: Requested device 0000:3f:02.4 cannot be used 00:24:53.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:53.393 EAL: Requested device 0000:3f:02.5 cannot be used 00:24:53.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:53.393 EAL: Requested device 0000:3f:02.6 cannot be used 00:24:53.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:53.393 EAL: Requested device 0000:3f:02.7 cannot be used 00:24:53.652 [2024-07-23 04:20:02.220822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:53.910 [2024-07-23 04:20:02.504633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:54.168 [2024-07-23 04:20:02.858463] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:54.168 [2024-07-23 04:20:02.858498] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:54.426 04:20:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:54.426 04:20:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:24:54.426 04:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:54.426 04:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:54.685 BaseBdev1_malloc 00:24:54.685 04:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:24:54.943 true 00:24:54.943 04:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:24:55.201 [2024-07-23 04:20:03.752246] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:24:55.201 [2024-07-23 04:20:03.752309] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:55.201 [2024-07-23 04:20:03.752337] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:24:55.201 [2024-07-23 04:20:03.752359] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:55.201 [2024-07-23 04:20:03.755112] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:55.201 [2024-07-23 04:20:03.755159] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:55.201 BaseBdev1 00:24:55.201 04:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:55.201 04:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:55.459 BaseBdev2_malloc 00:24:55.460 04:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:24:55.718 true 00:24:55.718 04:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:24:55.718 [2024-07-23 04:20:04.492653] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:24:55.718 [2024-07-23 04:20:04.492715] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:55.718 [2024-07-23 04:20:04.492742] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:24:55.718 [2024-07-23 04:20:04.492763] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:55.718 [2024-07-23 04:20:04.495552] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:55.718 [2024-07-23 04:20:04.495593] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:55.718 BaseBdev2 00:24:55.976 04:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:55.976 04:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:56.234 BaseBdev3_malloc 00:24:56.234 04:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:24:56.234 true 00:24:56.492 04:20:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:24:56.492 [2024-07-23 04:20:05.221505] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:24:56.492 [2024-07-23 04:20:05.221562] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:56.492 [2024-07-23 04:20:05.221590] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041780 00:24:56.492 [2024-07-23 04:20:05.221607] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:56.492 [2024-07-23 
04:20:05.224404] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:56.492 [2024-07-23 04:20:05.224447] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:56.492 BaseBdev3 00:24:56.492 04:20:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:56.492 04:20:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:56.750 BaseBdev4_malloc 00:24:56.750 04:20:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:24:57.007 true 00:24:57.007 04:20:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:24:57.263 [2024-07-23 04:20:05.941956] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:24:57.263 [2024-07-23 04:20:05.942017] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:57.264 [2024-07-23 04:20:05.942046] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042680 00:24:57.264 [2024-07-23 04:20:05.942063] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:57.264 [2024-07-23 04:20:05.944844] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:57.264 [2024-07-23 04:20:05.944882] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:57.264 BaseBdev4 00:24:57.264 04:20:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:24:57.521 [2024-07-23 04:20:06.154583] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:57.521 [2024-07-23 04:20:06.156925] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:57.521 [2024-07-23 04:20:06.157021] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:57.521 [2024-07-23 04:20:06.157102] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:57.521 [2024-07-23 04:20:06.157401] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042c80 00:24:57.521 [2024-07-23 04:20:06.157422] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:57.521 [2024-07-23 04:20:06.157769] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:24:57.521 [2024-07-23 04:20:06.158038] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042c80 00:24:57.521 [2024-07-23 04:20:06.158053] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000042c80 00:24:57.521 [2024-07-23 04:20:06.158258] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:57.522 04:20:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:24:57.522 04:20:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:57.522 04:20:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:57.522 04:20:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:57.522 04:20:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:57.522 04:20:06 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:57.522 04:20:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:57.522 04:20:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:57.522 04:20:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:57.522 04:20:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:57.522 04:20:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:57.522 04:20:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:57.780 04:20:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:57.780 "name": "raid_bdev1", 00:24:57.780 "uuid": "170d6804-2185-43e6-be13-4702b4798d87", 00:24:57.780 "strip_size_kb": 64, 00:24:57.780 "state": "online", 00:24:57.780 "raid_level": "concat", 00:24:57.780 "superblock": true, 00:24:57.780 "num_base_bdevs": 4, 00:24:57.780 "num_base_bdevs_discovered": 4, 00:24:57.780 "num_base_bdevs_operational": 4, 00:24:57.780 "base_bdevs_list": [ 00:24:57.780 { 00:24:57.780 "name": "BaseBdev1", 00:24:57.780 "uuid": "4d1e8b9a-08f1-5a9e-a1ca-64bc641b20ec", 00:24:57.780 "is_configured": true, 00:24:57.780 "data_offset": 2048, 00:24:57.780 "data_size": 63488 00:24:57.780 }, 00:24:57.780 { 00:24:57.780 "name": "BaseBdev2", 00:24:57.780 "uuid": "4f760b8a-38c7-5cf7-9dc0-286999f0a2fd", 00:24:57.780 "is_configured": true, 00:24:57.780 "data_offset": 2048, 00:24:57.780 "data_size": 63488 00:24:57.780 }, 00:24:57.780 { 00:24:57.780 "name": "BaseBdev3", 00:24:57.780 "uuid": "9850ce48-1c4f-5d0b-9e67-f8d65c1c5e04", 00:24:57.780 "is_configured": true, 00:24:57.780 "data_offset": 2048, 00:24:57.780 "data_size": 63488 
00:24:57.780 }, 00:24:57.780 { 00:24:57.780 "name": "BaseBdev4", 00:24:57.780 "uuid": "a28da518-97fb-547b-9322-1287b238d9d3", 00:24:57.780 "is_configured": true, 00:24:57.780 "data_offset": 2048, 00:24:57.780 "data_size": 63488 00:24:57.780 } 00:24:57.780 ] 00:24:57.780 }' 00:24:57.780 04:20:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:57.780 04:20:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:58.346 04:20:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:24:58.346 04:20:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:24:58.346 [2024-07-23 04:20:07.066768] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000108b0 00:24:59.280 04:20:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:24:59.538 04:20:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:24:59.538 04:20:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:24:59.538 04:20:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:24:59.538 04:20:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:24:59.538 04:20:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:59.538 04:20:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:59.538 04:20:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:59.538 04:20:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 
-- # local strip_size=64 00:24:59.538 04:20:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:59.538 04:20:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:59.538 04:20:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:59.538 04:20:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:59.538 04:20:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:59.538 04:20:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:59.538 04:20:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:59.795 04:20:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:59.795 "name": "raid_bdev1", 00:24:59.795 "uuid": "170d6804-2185-43e6-be13-4702b4798d87", 00:24:59.795 "strip_size_kb": 64, 00:24:59.795 "state": "online", 00:24:59.795 "raid_level": "concat", 00:24:59.795 "superblock": true, 00:24:59.795 "num_base_bdevs": 4, 00:24:59.795 "num_base_bdevs_discovered": 4, 00:24:59.795 "num_base_bdevs_operational": 4, 00:24:59.795 "base_bdevs_list": [ 00:24:59.795 { 00:24:59.795 "name": "BaseBdev1", 00:24:59.795 "uuid": "4d1e8b9a-08f1-5a9e-a1ca-64bc641b20ec", 00:24:59.795 "is_configured": true, 00:24:59.795 "data_offset": 2048, 00:24:59.795 "data_size": 63488 00:24:59.795 }, 00:24:59.795 { 00:24:59.795 "name": "BaseBdev2", 00:24:59.795 "uuid": "4f760b8a-38c7-5cf7-9dc0-286999f0a2fd", 00:24:59.795 "is_configured": true, 00:24:59.795 "data_offset": 2048, 00:24:59.795 "data_size": 63488 00:24:59.795 }, 00:24:59.795 { 00:24:59.795 "name": "BaseBdev3", 00:24:59.795 "uuid": "9850ce48-1c4f-5d0b-9e67-f8d65c1c5e04", 00:24:59.795 "is_configured": true, 00:24:59.795 
"data_offset": 2048, 00:24:59.795 "data_size": 63488 00:24:59.795 }, 00:24:59.795 { 00:24:59.795 "name": "BaseBdev4", 00:24:59.795 "uuid": "a28da518-97fb-547b-9322-1287b238d9d3", 00:24:59.795 "is_configured": true, 00:24:59.795 "data_offset": 2048, 00:24:59.795 "data_size": 63488 00:24:59.795 } 00:24:59.795 ] 00:24:59.795 }' 00:24:59.795 04:20:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:59.795 04:20:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:00.360 04:20:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:00.619 [2024-07-23 04:20:09.214473] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:00.619 [2024-07-23 04:20:09.214513] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:00.619 [2024-07-23 04:20:09.217774] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:00.619 [2024-07-23 04:20:09.217834] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:00.619 [2024-07-23 04:20:09.217887] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:00.619 [2024-07-23 04:20:09.217915] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042c80 name raid_bdev1, state offline 00:25:00.619 0 00:25:00.619 04:20:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2734773 00:25:00.619 04:20:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 2734773 ']' 00:25:00.619 04:20:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 2734773 00:25:00.619 04:20:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:25:00.619 04:20:09 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:00.619 04:20:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2734773 00:25:00.619 04:20:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:00.619 04:20:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:00.619 04:20:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2734773' 00:25:00.619 killing process with pid 2734773 00:25:00.619 04:20:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 2734773 00:25:00.619 [2024-07-23 04:20:09.290608] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:00.619 04:20:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 2734773 00:25:00.877 [2024-07-23 04:20:09.645306] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:02.778 04:20:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.L5l42BPnN2 00:25:02.778 04:20:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:25:02.778 04:20:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:25:02.778 04:20:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.47 00:25:02.778 04:20:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:25:02.778 04:20:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:02.778 04:20:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:25:02.778 04:20:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.47 != \0\.\0\0 ]] 00:25:02.778 00:25:02.778 real 0m9.519s 00:25:02.778 user 0m13.646s 00:25:02.778 sys 0m1.428s 00:25:02.778 04:20:11 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:25:02.778 04:20:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:02.778 ************************************ 00:25:02.778 END TEST raid_read_error_test 00:25:02.778 ************************************ 00:25:02.778 04:20:11 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:25:02.778 04:20:11 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:25:02.778 04:20:11 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:25:02.778 04:20:11 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:02.778 04:20:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:02.778 ************************************ 00:25:02.778 START TEST raid_write_error_test 00:25:02.778 ************************************ 00:25:02.778 04:20:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 4 write 00:25:02.778 04:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:25:02.778 04:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:25:02.778 04:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:25:02.778 04:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:25:02.778 04:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:02.778 04:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:25:02.778 04:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:25:02.778 04:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:02.778 04:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:25:02.778 04:20:11 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:25:02.778 04:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:02.778 04:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:25:02.778 04:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:25:02.778 04:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:02.778 04:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:25:02.778 04:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:25:02.778 04:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:02.778 04:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:02.778 04:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:25:02.778 04:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:25:02.778 04:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:25:02.778 04:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:25:02.778 04:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:25:02.778 04:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:25:02.778 04:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:25:02.778 04:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:25:02.778 04:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:25:02.778 04:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:25:02.778 04:20:11 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.6Znc5iAQYd 00:25:02.778 04:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2736455 00:25:02.778 04:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2736455 /var/tmp/spdk-raid.sock 00:25:02.778 04:20:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 2736455 ']' 00:25:02.778 04:20:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:02.778 04:20:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:02.778 04:20:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:02.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:02.778 04:20:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:02.778 04:20:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:02.778 04:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:25:03.037 [2024-07-23 04:20:11.590243] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:25:03.037 [2024-07-23 04:20:11.590365] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2736455 ] 00:25:03.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.037 EAL: Requested device 0000:3d:01.0 cannot be used 00:25:03.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.037 EAL: Requested device 0000:3d:01.1 cannot be used 00:25:03.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.037 EAL: Requested device 0000:3d:01.2 cannot be used 00:25:03.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.037 EAL: Requested device 0000:3d:01.3 cannot be used 00:25:03.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.037 EAL: Requested device 0000:3d:01.4 cannot be used 00:25:03.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.037 EAL: Requested device 0000:3d:01.5 cannot be used 00:25:03.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.037 EAL: Requested device 0000:3d:01.6 cannot be used 00:25:03.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.037 EAL: Requested device 0000:3d:01.7 cannot be used 00:25:03.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.037 EAL: Requested device 0000:3d:02.0 cannot be used 00:25:03.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.037 EAL: Requested device 0000:3d:02.1 cannot be used 00:25:03.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.037 EAL: Requested device 0000:3d:02.2 cannot be used 00:25:03.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.037 EAL: Requested device 0000:3d:02.3 cannot be used 
00:25:03.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.037 EAL: Requested device 0000:3d:02.4 cannot be used 00:25:03.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.037 EAL: Requested device 0000:3d:02.5 cannot be used 00:25:03.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.037 EAL: Requested device 0000:3d:02.6 cannot be used 00:25:03.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.037 EAL: Requested device 0000:3d:02.7 cannot be used 00:25:03.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.037 EAL: Requested device 0000:3f:01.0 cannot be used 00:25:03.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.037 EAL: Requested device 0000:3f:01.1 cannot be used 00:25:03.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.037 EAL: Requested device 0000:3f:01.2 cannot be used 00:25:03.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.037 EAL: Requested device 0000:3f:01.3 cannot be used 00:25:03.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.037 EAL: Requested device 0000:3f:01.4 cannot be used 00:25:03.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.037 EAL: Requested device 0000:3f:01.5 cannot be used 00:25:03.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.037 EAL: Requested device 0000:3f:01.6 cannot be used 00:25:03.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.037 EAL: Requested device 0000:3f:01.7 cannot be used 00:25:03.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.037 EAL: Requested device 0000:3f:02.0 cannot be used 00:25:03.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.037 EAL: Requested device 0000:3f:02.1 cannot be used 00:25:03.037 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.037 EAL: Requested device 0000:3f:02.2 cannot be used 00:25:03.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.037 EAL: Requested device 0000:3f:02.3 cannot be used 00:25:03.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.037 EAL: Requested device 0000:3f:02.4 cannot be used 00:25:03.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.037 EAL: Requested device 0000:3f:02.5 cannot be used 00:25:03.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.037 EAL: Requested device 0000:3f:02.6 cannot be used 00:25:03.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.037 EAL: Requested device 0000:3f:02.7 cannot be used 00:25:03.037 [2024-07-23 04:20:11.815835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:03.602 [2024-07-23 04:20:12.099285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:03.859 [2024-07-23 04:20:12.453434] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:03.859 [2024-07-23 04:20:12.453470] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:03.859 04:20:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:03.859 04:20:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:25:03.860 04:20:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:25:03.860 04:20:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:04.117 BaseBdev1_malloc 00:25:04.117 04:20:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:25:04.374 true 00:25:04.374 04:20:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:25:04.631 [2024-07-23 04:20:13.265033] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:25:04.631 [2024-07-23 04:20:13.265090] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:04.631 [2024-07-23 04:20:13.265117] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:25:04.631 [2024-07-23 04:20:13.265147] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:04.631 [2024-07-23 04:20:13.267901] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:04.632 [2024-07-23 04:20:13.267940] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:04.632 BaseBdev1 00:25:04.632 04:20:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:25:04.632 04:20:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:04.889 BaseBdev2_malloc 00:25:04.889 04:20:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:25:04.889 true 00:25:04.889 04:20:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:25:05.146 [2024-07-23 04:20:13.789391] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on EE_BaseBdev2_malloc 00:25:05.146 [2024-07-23 04:20:13.789445] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:05.146 [2024-07-23 04:20:13.789470] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:25:05.146 [2024-07-23 04:20:13.789491] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:05.146 [2024-07-23 04:20:13.792247] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:05.146 [2024-07-23 04:20:13.792283] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:05.146 BaseBdev2 00:25:05.146 04:20:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:25:05.146 04:20:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:05.403 BaseBdev3_malloc 00:25:05.403 04:20:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:25:05.662 true 00:25:05.662 04:20:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:25:05.662 [2024-07-23 04:20:14.374401] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:25:05.662 [2024-07-23 04:20:14.374458] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:05.662 [2024-07-23 04:20:14.374486] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041780 00:25:05.662 [2024-07-23 04:20:14.374503] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:05.662 
[2024-07-23 04:20:14.377309] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:05.662 [2024-07-23 04:20:14.377346] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:05.662 BaseBdev3 00:25:05.662 04:20:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:25:05.662 04:20:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:25:05.919 BaseBdev4_malloc 00:25:05.919 04:20:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:25:06.177 true 00:25:06.177 04:20:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:25:06.434 [2024-07-23 04:20:15.026842] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:25:06.434 [2024-07-23 04:20:15.026901] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:06.434 [2024-07-23 04:20:15.026927] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042680 00:25:06.434 [2024-07-23 04:20:15.026944] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:06.434 [2024-07-23 04:20:15.029671] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:06.434 [2024-07-23 04:20:15.029706] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:25:06.434 BaseBdev4 00:25:06.434 04:20:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:25:06.692 [2024-07-23 04:20:15.243484] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:06.692 [2024-07-23 04:20:15.245834] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:06.692 [2024-07-23 04:20:15.245927] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:06.692 [2024-07-23 04:20:15.246007] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:06.692 [2024-07-23 04:20:15.246312] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042c80 00:25:06.692 [2024-07-23 04:20:15.246359] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:25:06.692 [2024-07-23 04:20:15.246704] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:25:06.692 [2024-07-23 04:20:15.246976] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042c80 00:25:06.692 [2024-07-23 04:20:15.246991] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000042c80 00:25:06.692 [2024-07-23 04:20:15.247194] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:06.692 04:20:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:25:06.692 04:20:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:06.692 04:20:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:06.692 04:20:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:06.692 04:20:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:06.692 04:20:15 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:06.692 04:20:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:06.692 04:20:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:06.692 04:20:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:06.692 04:20:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:06.692 04:20:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:06.692 04:20:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:06.950 04:20:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:06.950 "name": "raid_bdev1", 00:25:06.950 "uuid": "32eceb00-e171-4edb-a333-62db87427bc3", 00:25:06.950 "strip_size_kb": 64, 00:25:06.950 "state": "online", 00:25:06.950 "raid_level": "concat", 00:25:06.950 "superblock": true, 00:25:06.950 "num_base_bdevs": 4, 00:25:06.950 "num_base_bdevs_discovered": 4, 00:25:06.951 "num_base_bdevs_operational": 4, 00:25:06.951 "base_bdevs_list": [ 00:25:06.951 { 00:25:06.951 "name": "BaseBdev1", 00:25:06.951 "uuid": "59af152c-8a40-5fa6-8942-5982ca0a11ee", 00:25:06.951 "is_configured": true, 00:25:06.951 "data_offset": 2048, 00:25:06.951 "data_size": 63488 00:25:06.951 }, 00:25:06.951 { 00:25:06.951 "name": "BaseBdev2", 00:25:06.951 "uuid": "b123421a-64f1-550d-a916-32e7177d759d", 00:25:06.951 "is_configured": true, 00:25:06.951 "data_offset": 2048, 00:25:06.951 "data_size": 63488 00:25:06.951 }, 00:25:06.951 { 00:25:06.951 "name": "BaseBdev3", 00:25:06.951 "uuid": "f915050a-0cdd-57d1-9aba-66bc6752321b", 00:25:06.951 "is_configured": true, 00:25:06.951 "data_offset": 2048, 00:25:06.951 "data_size": 
63488 00:25:06.951 }, 00:25:06.951 { 00:25:06.951 "name": "BaseBdev4", 00:25:06.951 "uuid": "3fb82ebf-be8a-53a5-842b-13dbf115ca02", 00:25:06.951 "is_configured": true, 00:25:06.951 "data_offset": 2048, 00:25:06.951 "data_size": 63488 00:25:06.951 } 00:25:06.951 ] 00:25:06.951 }' 00:25:06.951 04:20:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:06.951 04:20:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:07.516 04:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:25:07.516 04:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:25:07.516 [2024-07-23 04:20:16.164070] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000108b0 00:25:08.448 04:20:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:25:08.706 04:20:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:25:08.706 04:20:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:25:08.706 04:20:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:25:08.706 04:20:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:25:08.706 04:20:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:08.706 04:20:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:08.706 04:20:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:08.706 04:20:17 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:08.706 04:20:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:08.706 04:20:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:08.706 04:20:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:08.706 04:20:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:08.706 04:20:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:08.706 04:20:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:08.706 04:20:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:09.271 04:20:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:09.271 "name": "raid_bdev1", 00:25:09.271 "uuid": "32eceb00-e171-4edb-a333-62db87427bc3", 00:25:09.271 "strip_size_kb": 64, 00:25:09.271 "state": "online", 00:25:09.271 "raid_level": "concat", 00:25:09.271 "superblock": true, 00:25:09.271 "num_base_bdevs": 4, 00:25:09.271 "num_base_bdevs_discovered": 4, 00:25:09.271 "num_base_bdevs_operational": 4, 00:25:09.271 "base_bdevs_list": [ 00:25:09.271 { 00:25:09.271 "name": "BaseBdev1", 00:25:09.271 "uuid": "59af152c-8a40-5fa6-8942-5982ca0a11ee", 00:25:09.271 "is_configured": true, 00:25:09.271 "data_offset": 2048, 00:25:09.271 "data_size": 63488 00:25:09.271 }, 00:25:09.271 { 00:25:09.271 "name": "BaseBdev2", 00:25:09.271 "uuid": "b123421a-64f1-550d-a916-32e7177d759d", 00:25:09.271 "is_configured": true, 00:25:09.271 "data_offset": 2048, 00:25:09.272 "data_size": 63488 00:25:09.272 }, 00:25:09.272 { 00:25:09.272 "name": "BaseBdev3", 00:25:09.272 "uuid": "f915050a-0cdd-57d1-9aba-66bc6752321b", 00:25:09.272 
"is_configured": true, 00:25:09.272 "data_offset": 2048, 00:25:09.272 "data_size": 63488 00:25:09.272 }, 00:25:09.272 { 00:25:09.272 "name": "BaseBdev4", 00:25:09.272 "uuid": "3fb82ebf-be8a-53a5-842b-13dbf115ca02", 00:25:09.272 "is_configured": true, 00:25:09.272 "data_offset": 2048, 00:25:09.272 "data_size": 63488 00:25:09.272 } 00:25:09.272 ] 00:25:09.272 }' 00:25:09.272 04:20:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:09.272 04:20:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:09.838 04:20:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:09.838 [2024-07-23 04:20:18.591709] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:09.838 [2024-07-23 04:20:18.591765] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:09.838 [2024-07-23 04:20:18.595011] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:09.838 [2024-07-23 04:20:18.595073] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:09.838 [2024-07-23 04:20:18.595127] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:09.838 [2024-07-23 04:20:18.595165] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042c80 name raid_bdev1, state offline 00:25:09.838 0 00:25:09.838 04:20:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2736455 00:25:09.838 04:20:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 2736455 ']' 00:25:09.838 04:20:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 2736455 00:25:09.838 04:20:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:25:09.838 
04:20:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:10.096 04:20:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2736455 00:25:10.096 04:20:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:10.096 04:20:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:10.096 04:20:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2736455' 00:25:10.096 killing process with pid 2736455 00:25:10.096 04:20:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 2736455 00:25:10.096 [2024-07-23 04:20:18.670091] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:10.096 04:20:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 2736455 00:25:10.354 [2024-07-23 04:20:19.028767] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:12.261 04:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:25:12.261 04:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.6Znc5iAQYd 00:25:12.261 04:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:25:12.261 04:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.41 00:25:12.261 04:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:25:12.261 04:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:12.261 04:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:25:12.261 04:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.41 != \0\.\0\0 ]] 00:25:12.261 00:25:12.261 real 0m9.324s 00:25:12.261 user 0m13.273s 00:25:12.261 sys 0m1.366s 00:25:12.261 04:20:20 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:12.261 04:20:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:12.261 ************************************ 00:25:12.261 END TEST raid_write_error_test 00:25:12.261 ************************************ 00:25:12.261 04:20:20 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:25:12.261 04:20:20 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:25:12.261 04:20:20 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:25:12.261 04:20:20 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:25:12.261 04:20:20 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:12.261 04:20:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:12.261 ************************************ 00:25:12.261 START TEST raid_state_function_test 00:25:12.261 ************************************ 00:25:12.261 04:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 4 false 00:25:12.261 04:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:25:12.261 04:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:25:12.261 04:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:25:12.261 04:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:25:12.261 04:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:25:12.261 04:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:12.261 04:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:25:12.261 04:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 
-- # (( i++ )) 00:25:12.261 04:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:12.261 04:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:25:12.261 04:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:12.261 04:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:12.261 04:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:25:12.261 04:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:12.261 04:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:12.261 04:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:25:12.261 04:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:12.261 04:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:12.261 04:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:12.261 04:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:25:12.261 04:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:25:12.261 04:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:25:12.261 04:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:25:12.261 04:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:25:12.261 04:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:25:12.261 04:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:25:12.261 
04:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:25:12.261 04:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:25:12.261 04:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2738133 00:25:12.261 04:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2738133' 00:25:12.261 Process raid pid: 2738133 00:25:12.261 04:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:25:12.261 04:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2738133 /var/tmp/spdk-raid.sock 00:25:12.261 04:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 2738133 ']' 00:25:12.261 04:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:12.261 04:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:12.261 04:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:12.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:12.261 04:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:12.261 04:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:12.261 [2024-07-23 04:20:20.998750] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:25:12.261 [2024-07-23 04:20:20.998869] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:12.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:12.520 EAL: Requested device 0000:3d:01.0 cannot be used 00:25:12.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:12.520 EAL: Requested device 0000:3d:01.1 cannot be used 00:25:12.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:12.520 EAL: Requested device 0000:3d:01.2 cannot be used 00:25:12.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:12.520 EAL: Requested device 0000:3d:01.3 cannot be used 00:25:12.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:12.520 EAL: Requested device 0000:3d:01.4 cannot be used 00:25:12.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:12.520 EAL: Requested device 0000:3d:01.5 cannot be used 00:25:12.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:12.520 EAL: Requested device 0000:3d:01.6 cannot be used 00:25:12.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:12.520 EAL: Requested device 0000:3d:01.7 cannot be used 00:25:12.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:12.520 EAL: Requested device 0000:3d:02.0 cannot be used 00:25:12.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:12.520 EAL: Requested device 0000:3d:02.1 cannot be used 00:25:12.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:12.520 EAL: Requested device 0000:3d:02.2 cannot be used 00:25:12.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:12.520 EAL: Requested device 0000:3d:02.3 cannot be used 00:25:12.520 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:12.520 EAL: Requested device 0000:3d:02.4 cannot be used 00:25:12.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:12.520 EAL: Requested device 0000:3d:02.5 cannot be used 00:25:12.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:12.520 EAL: Requested device 0000:3d:02.6 cannot be used 00:25:12.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:12.520 EAL: Requested device 0000:3d:02.7 cannot be used 00:25:12.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:12.520 EAL: Requested device 0000:3f:01.0 cannot be used 00:25:12.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:12.520 EAL: Requested device 0000:3f:01.1 cannot be used 00:25:12.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:12.520 EAL: Requested device 0000:3f:01.2 cannot be used 00:25:12.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:12.520 EAL: Requested device 0000:3f:01.3 cannot be used 00:25:12.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:12.520 EAL: Requested device 0000:3f:01.4 cannot be used 00:25:12.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:12.520 EAL: Requested device 0000:3f:01.5 cannot be used 00:25:12.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:12.520 EAL: Requested device 0000:3f:01.6 cannot be used 00:25:12.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:12.520 EAL: Requested device 0000:3f:01.7 cannot be used 00:25:12.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:12.520 EAL: Requested device 0000:3f:02.0 cannot be used 00:25:12.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:12.520 EAL: Requested device 0000:3f:02.1 cannot be used 00:25:12.520 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:12.520 EAL: Requested device 0000:3f:02.2 cannot be used 00:25:12.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:12.520 EAL: Requested device 0000:3f:02.3 cannot be used 00:25:12.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:12.520 EAL: Requested device 0000:3f:02.4 cannot be used 00:25:12.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:12.520 EAL: Requested device 0000:3f:02.5 cannot be used 00:25:12.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:12.520 EAL: Requested device 0000:3f:02.6 cannot be used 00:25:12.520 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:12.520 EAL: Requested device 0000:3f:02.7 cannot be used 00:25:12.520 [2024-07-23 04:20:21.227991] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:12.779 [2024-07-23 04:20:21.513048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:13.345 [2024-07-23 04:20:21.869832] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:13.345 [2024-07-23 04:20:21.869872] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:13.345 04:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:13.345 04:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:25:13.345 04:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:13.603 [2024-07-23 04:20:22.269967] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:13.603 [2024-07-23 04:20:22.270025] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 
doesn't exist now 00:25:13.603 [2024-07-23 04:20:22.270040] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:13.603 [2024-07-23 04:20:22.270057] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:13.603 [2024-07-23 04:20:22.270068] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:13.603 [2024-07-23 04:20:22.270084] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:13.603 [2024-07-23 04:20:22.270095] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:13.603 [2024-07-23 04:20:22.270113] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:13.603 04:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:13.603 04:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:13.603 04:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:13.603 04:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:13.603 04:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:13.603 04:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:13.603 04:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:13.603 04:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:13.603 04:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:13.603 04:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:13.603 04:20:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:13.603 04:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:13.861 04:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:13.861 "name": "Existed_Raid", 00:25:13.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:13.861 "strip_size_kb": 0, 00:25:13.861 "state": "configuring", 00:25:13.861 "raid_level": "raid1", 00:25:13.861 "superblock": false, 00:25:13.861 "num_base_bdevs": 4, 00:25:13.861 "num_base_bdevs_discovered": 0, 00:25:13.861 "num_base_bdevs_operational": 4, 00:25:13.861 "base_bdevs_list": [ 00:25:13.861 { 00:25:13.861 "name": "BaseBdev1", 00:25:13.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:13.861 "is_configured": false, 00:25:13.861 "data_offset": 0, 00:25:13.861 "data_size": 0 00:25:13.861 }, 00:25:13.861 { 00:25:13.861 "name": "BaseBdev2", 00:25:13.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:13.861 "is_configured": false, 00:25:13.861 "data_offset": 0, 00:25:13.861 "data_size": 0 00:25:13.861 }, 00:25:13.861 { 00:25:13.861 "name": "BaseBdev3", 00:25:13.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:13.861 "is_configured": false, 00:25:13.861 "data_offset": 0, 00:25:13.861 "data_size": 0 00:25:13.861 }, 00:25:13.861 { 00:25:13.861 "name": "BaseBdev4", 00:25:13.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:13.861 "is_configured": false, 00:25:13.861 "data_offset": 0, 00:25:13.861 "data_size": 0 00:25:13.861 } 00:25:13.861 ] 00:25:13.861 }' 00:25:13.861 04:20:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:13.861 04:20:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:14.427 04:20:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:14.689 [2024-07-23 04:20:23.296576] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:14.689 [2024-07-23 04:20:23.296623] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:25:14.689 04:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:15.256 [2024-07-23 04:20:23.793961] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:15.256 [2024-07-23 04:20:23.794016] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:15.256 [2024-07-23 04:20:23.794030] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:15.256 [2024-07-23 04:20:23.794054] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:15.256 [2024-07-23 04:20:23.794066] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:15.256 [2024-07-23 04:20:23.794082] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:15.256 [2024-07-23 04:20:23.794093] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:15.256 [2024-07-23 04:20:23.794110] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:15.256 04:20:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:15.514 [2024-07-23 04:20:24.089200] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:15.514 BaseBdev1 00:25:15.514 04:20:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:25:15.514 04:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:25:15.514 04:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:15.514 04:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:25:15.514 04:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:15.514 04:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:15.514 04:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:16.079 04:20:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:16.337 [ 00:25:16.337 { 00:25:16.337 "name": "BaseBdev1", 00:25:16.337 "aliases": [ 00:25:16.337 "90e2e3a3-ea34-485b-bac3-ac07a0ac8e14" 00:25:16.337 ], 00:25:16.337 "product_name": "Malloc disk", 00:25:16.337 "block_size": 512, 00:25:16.337 "num_blocks": 65536, 00:25:16.338 "uuid": "90e2e3a3-ea34-485b-bac3-ac07a0ac8e14", 00:25:16.338 "assigned_rate_limits": { 00:25:16.338 "rw_ios_per_sec": 0, 00:25:16.338 "rw_mbytes_per_sec": 0, 00:25:16.338 "r_mbytes_per_sec": 0, 00:25:16.338 "w_mbytes_per_sec": 0 00:25:16.338 }, 00:25:16.338 "claimed": true, 00:25:16.338 "claim_type": "exclusive_write", 00:25:16.338 "zoned": false, 00:25:16.338 "supported_io_types": { 00:25:16.338 "read": true, 00:25:16.338 "write": true, 00:25:16.338 "unmap": true, 00:25:16.338 "flush": true, 00:25:16.338 
"reset": true, 00:25:16.338 "nvme_admin": false, 00:25:16.338 "nvme_io": false, 00:25:16.338 "nvme_io_md": false, 00:25:16.338 "write_zeroes": true, 00:25:16.338 "zcopy": true, 00:25:16.338 "get_zone_info": false, 00:25:16.338 "zone_management": false, 00:25:16.338 "zone_append": false, 00:25:16.338 "compare": false, 00:25:16.338 "compare_and_write": false, 00:25:16.338 "abort": true, 00:25:16.338 "seek_hole": false, 00:25:16.338 "seek_data": false, 00:25:16.338 "copy": true, 00:25:16.338 "nvme_iov_md": false 00:25:16.338 }, 00:25:16.338 "memory_domains": [ 00:25:16.338 { 00:25:16.338 "dma_device_id": "system", 00:25:16.338 "dma_device_type": 1 00:25:16.338 }, 00:25:16.338 { 00:25:16.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:16.338 "dma_device_type": 2 00:25:16.338 } 00:25:16.338 ], 00:25:16.338 "driver_specific": {} 00:25:16.338 } 00:25:16.338 ] 00:25:16.596 04:20:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:25:16.596 04:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:16.596 04:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:16.596 04:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:16.596 04:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:16.596 04:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:16.596 04:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:16.596 04:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:16.596 04:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:16.596 04:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs_discovered 00:25:16.596 04:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:16.596 04:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:16.596 04:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:16.596 04:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:16.596 "name": "Existed_Raid", 00:25:16.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:16.596 "strip_size_kb": 0, 00:25:16.596 "state": "configuring", 00:25:16.596 "raid_level": "raid1", 00:25:16.596 "superblock": false, 00:25:16.596 "num_base_bdevs": 4, 00:25:16.596 "num_base_bdevs_discovered": 1, 00:25:16.596 "num_base_bdevs_operational": 4, 00:25:16.596 "base_bdevs_list": [ 00:25:16.596 { 00:25:16.596 "name": "BaseBdev1", 00:25:16.596 "uuid": "90e2e3a3-ea34-485b-bac3-ac07a0ac8e14", 00:25:16.596 "is_configured": true, 00:25:16.596 "data_offset": 0, 00:25:16.596 "data_size": 65536 00:25:16.596 }, 00:25:16.596 { 00:25:16.596 "name": "BaseBdev2", 00:25:16.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:16.596 "is_configured": false, 00:25:16.596 "data_offset": 0, 00:25:16.596 "data_size": 0 00:25:16.596 }, 00:25:16.596 { 00:25:16.596 "name": "BaseBdev3", 00:25:16.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:16.596 "is_configured": false, 00:25:16.596 "data_offset": 0, 00:25:16.596 "data_size": 0 00:25:16.596 }, 00:25:16.596 { 00:25:16.596 "name": "BaseBdev4", 00:25:16.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:16.596 "is_configured": false, 00:25:16.596 "data_offset": 0, 00:25:16.596 "data_size": 0 00:25:16.596 } 00:25:16.596 ] 00:25:16.596 }' 00:25:16.596 04:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
00:25:16.596 04:20:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:17.161 04:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:17.419 [2024-07-23 04:20:26.134752] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:17.419 [2024-07-23 04:20:26.134812] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:25:17.419 04:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:17.676 [2024-07-23 04:20:26.359453] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:17.676 [2024-07-23 04:20:26.361792] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:17.676 [2024-07-23 04:20:26.361833] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:17.676 [2024-07-23 04:20:26.361848] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:17.676 [2024-07-23 04:20:26.361864] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:17.676 [2024-07-23 04:20:26.361876] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:17.676 [2024-07-23 04:20:26.361894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:17.676 04:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:25:17.676 04:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:17.676 04:20:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:17.676 04:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:17.676 04:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:17.676 04:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:17.676 04:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:17.676 04:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:17.676 04:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:17.676 04:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:17.676 04:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:17.676 04:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:17.676 04:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:17.676 04:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:17.935 04:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:17.935 "name": "Existed_Raid", 00:25:17.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:17.935 "strip_size_kb": 0, 00:25:17.935 "state": "configuring", 00:25:17.935 "raid_level": "raid1", 00:25:17.935 "superblock": false, 00:25:17.935 "num_base_bdevs": 4, 00:25:17.935 "num_base_bdevs_discovered": 1, 00:25:17.935 "num_base_bdevs_operational": 4, 00:25:17.935 "base_bdevs_list": [ 00:25:17.935 { 00:25:17.935 
"name": "BaseBdev1", 00:25:17.935 "uuid": "90e2e3a3-ea34-485b-bac3-ac07a0ac8e14", 00:25:17.935 "is_configured": true, 00:25:17.935 "data_offset": 0, 00:25:17.935 "data_size": 65536 00:25:17.935 }, 00:25:17.935 { 00:25:17.935 "name": "BaseBdev2", 00:25:17.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:17.935 "is_configured": false, 00:25:17.935 "data_offset": 0, 00:25:17.935 "data_size": 0 00:25:17.935 }, 00:25:17.935 { 00:25:17.935 "name": "BaseBdev3", 00:25:17.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:17.935 "is_configured": false, 00:25:17.935 "data_offset": 0, 00:25:17.935 "data_size": 0 00:25:17.935 }, 00:25:17.935 { 00:25:17.935 "name": "BaseBdev4", 00:25:17.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:17.935 "is_configured": false, 00:25:17.935 "data_offset": 0, 00:25:17.935 "data_size": 0 00:25:17.935 } 00:25:17.935 ] 00:25:17.935 }' 00:25:17.935 04:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:17.935 04:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.499 04:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:18.758 [2024-07-23 04:20:27.450046] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:18.758 BaseBdev2 00:25:18.758 04:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:25:18.758 04:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:25:18.758 04:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:18.758 04:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:25:18.758 04:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 
-- # [[ -z '' ]] 00:25:18.758 04:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:18.758 04:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:19.016 04:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:19.275 [ 00:25:19.275 { 00:25:19.275 "name": "BaseBdev2", 00:25:19.275 "aliases": [ 00:25:19.275 "e69815bd-35ae-4fec-9d3a-2bda71c36adf" 00:25:19.275 ], 00:25:19.275 "product_name": "Malloc disk", 00:25:19.275 "block_size": 512, 00:25:19.275 "num_blocks": 65536, 00:25:19.275 "uuid": "e69815bd-35ae-4fec-9d3a-2bda71c36adf", 00:25:19.275 "assigned_rate_limits": { 00:25:19.275 "rw_ios_per_sec": 0, 00:25:19.275 "rw_mbytes_per_sec": 0, 00:25:19.275 "r_mbytes_per_sec": 0, 00:25:19.275 "w_mbytes_per_sec": 0 00:25:19.275 }, 00:25:19.275 "claimed": true, 00:25:19.275 "claim_type": "exclusive_write", 00:25:19.275 "zoned": false, 00:25:19.275 "supported_io_types": { 00:25:19.275 "read": true, 00:25:19.275 "write": true, 00:25:19.275 "unmap": true, 00:25:19.275 "flush": true, 00:25:19.275 "reset": true, 00:25:19.275 "nvme_admin": false, 00:25:19.275 "nvme_io": false, 00:25:19.275 "nvme_io_md": false, 00:25:19.275 "write_zeroes": true, 00:25:19.275 "zcopy": true, 00:25:19.275 "get_zone_info": false, 00:25:19.275 "zone_management": false, 00:25:19.275 "zone_append": false, 00:25:19.275 "compare": false, 00:25:19.275 "compare_and_write": false, 00:25:19.275 "abort": true, 00:25:19.275 "seek_hole": false, 00:25:19.275 "seek_data": false, 00:25:19.275 "copy": true, 00:25:19.275 "nvme_iov_md": false 00:25:19.275 }, 00:25:19.275 "memory_domains": [ 00:25:19.275 { 00:25:19.275 "dma_device_id": "system", 00:25:19.275 
"dma_device_type": 1 00:25:19.275 }, 00:25:19.275 { 00:25:19.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:19.275 "dma_device_type": 2 00:25:19.275 } 00:25:19.275 ], 00:25:19.275 "driver_specific": {} 00:25:19.275 } 00:25:19.275 ] 00:25:19.275 04:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:25:19.275 04:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:19.275 04:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:19.275 04:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:19.275 04:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:19.275 04:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:19.275 04:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:19.275 04:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:19.275 04:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:19.275 04:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:19.275 04:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:19.275 04:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:19.275 04:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:19.275 04:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:19.275 04:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:25:19.533 04:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:19.533 "name": "Existed_Raid", 00:25:19.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:19.533 "strip_size_kb": 0, 00:25:19.533 "state": "configuring", 00:25:19.533 "raid_level": "raid1", 00:25:19.533 "superblock": false, 00:25:19.533 "num_base_bdevs": 4, 00:25:19.533 "num_base_bdevs_discovered": 2, 00:25:19.533 "num_base_bdevs_operational": 4, 00:25:19.533 "base_bdevs_list": [ 00:25:19.533 { 00:25:19.533 "name": "BaseBdev1", 00:25:19.533 "uuid": "90e2e3a3-ea34-485b-bac3-ac07a0ac8e14", 00:25:19.533 "is_configured": true, 00:25:19.533 "data_offset": 0, 00:25:19.533 "data_size": 65536 00:25:19.533 }, 00:25:19.533 { 00:25:19.533 "name": "BaseBdev2", 00:25:19.533 "uuid": "e69815bd-35ae-4fec-9d3a-2bda71c36adf", 00:25:19.533 "is_configured": true, 00:25:19.533 "data_offset": 0, 00:25:19.533 "data_size": 65536 00:25:19.533 }, 00:25:19.533 { 00:25:19.533 "name": "BaseBdev3", 00:25:19.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:19.533 "is_configured": false, 00:25:19.533 "data_offset": 0, 00:25:19.533 "data_size": 0 00:25:19.533 }, 00:25:19.533 { 00:25:19.533 "name": "BaseBdev4", 00:25:19.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:19.533 "is_configured": false, 00:25:19.533 "data_offset": 0, 00:25:19.533 "data_size": 0 00:25:19.533 } 00:25:19.533 ] 00:25:19.533 }' 00:25:19.533 04:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:19.533 04:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.100 04:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:20.358 [2024-07-23 04:20:28.996072] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 
is claimed 00:25:20.358 BaseBdev3 00:25:20.358 04:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:25:20.358 04:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:25:20.358 04:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:20.358 04:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:25:20.358 04:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:20.358 04:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:20.358 04:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:20.616 04:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:20.874 [ 00:25:20.874 { 00:25:20.874 "name": "BaseBdev3", 00:25:20.874 "aliases": [ 00:25:20.874 "5d81ccdb-1255-4050-acce-f36925823de3" 00:25:20.874 ], 00:25:20.874 "product_name": "Malloc disk", 00:25:20.874 "block_size": 512, 00:25:20.874 "num_blocks": 65536, 00:25:20.874 "uuid": "5d81ccdb-1255-4050-acce-f36925823de3", 00:25:20.874 "assigned_rate_limits": { 00:25:20.874 "rw_ios_per_sec": 0, 00:25:20.874 "rw_mbytes_per_sec": 0, 00:25:20.874 "r_mbytes_per_sec": 0, 00:25:20.874 "w_mbytes_per_sec": 0 00:25:20.874 }, 00:25:20.874 "claimed": true, 00:25:20.874 "claim_type": "exclusive_write", 00:25:20.874 "zoned": false, 00:25:20.874 "supported_io_types": { 00:25:20.874 "read": true, 00:25:20.874 "write": true, 00:25:20.874 "unmap": true, 00:25:20.874 "flush": true, 00:25:20.874 "reset": true, 00:25:20.874 "nvme_admin": false, 00:25:20.874 "nvme_io": 
false, 00:25:20.874 "nvme_io_md": false, 00:25:20.874 "write_zeroes": true, 00:25:20.874 "zcopy": true, 00:25:20.874 "get_zone_info": false, 00:25:20.874 "zone_management": false, 00:25:20.874 "zone_append": false, 00:25:20.874 "compare": false, 00:25:20.874 "compare_and_write": false, 00:25:20.874 "abort": true, 00:25:20.874 "seek_hole": false, 00:25:20.874 "seek_data": false, 00:25:20.874 "copy": true, 00:25:20.874 "nvme_iov_md": false 00:25:20.874 }, 00:25:20.874 "memory_domains": [ 00:25:20.874 { 00:25:20.874 "dma_device_id": "system", 00:25:20.874 "dma_device_type": 1 00:25:20.874 }, 00:25:20.874 { 00:25:20.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:20.874 "dma_device_type": 2 00:25:20.874 } 00:25:20.874 ], 00:25:20.874 "driver_specific": {} 00:25:20.874 } 00:25:20.874 ] 00:25:20.874 04:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:25:20.874 04:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:20.874 04:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:20.874 04:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:20.874 04:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:20.874 04:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:20.874 04:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:20.874 04:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:20.875 04:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:20.875 04:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:20.875 04:20:29 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:20.875 04:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:20.875 04:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:20.875 04:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:20.875 04:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:20.875 04:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:20.875 "name": "Existed_Raid", 00:25:20.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:20.875 "strip_size_kb": 0, 00:25:20.875 "state": "configuring", 00:25:20.875 "raid_level": "raid1", 00:25:20.875 "superblock": false, 00:25:20.875 "num_base_bdevs": 4, 00:25:20.875 "num_base_bdevs_discovered": 3, 00:25:20.875 "num_base_bdevs_operational": 4, 00:25:20.875 "base_bdevs_list": [ 00:25:20.875 { 00:25:20.875 "name": "BaseBdev1", 00:25:20.875 "uuid": "90e2e3a3-ea34-485b-bac3-ac07a0ac8e14", 00:25:20.875 "is_configured": true, 00:25:20.875 "data_offset": 0, 00:25:20.875 "data_size": 65536 00:25:20.875 }, 00:25:20.875 { 00:25:20.875 "name": "BaseBdev2", 00:25:20.875 "uuid": "e69815bd-35ae-4fec-9d3a-2bda71c36adf", 00:25:20.875 "is_configured": true, 00:25:20.875 "data_offset": 0, 00:25:20.875 "data_size": 65536 00:25:20.875 }, 00:25:20.875 { 00:25:20.875 "name": "BaseBdev3", 00:25:20.875 "uuid": "5d81ccdb-1255-4050-acce-f36925823de3", 00:25:20.875 "is_configured": true, 00:25:20.875 "data_offset": 0, 00:25:20.875 "data_size": 65536 00:25:20.875 }, 00:25:20.875 { 00:25:20.875 "name": "BaseBdev4", 00:25:20.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:20.875 "is_configured": false, 00:25:20.875 "data_offset": 0, 00:25:20.875 "data_size": 0 00:25:20.875 } 
00:25:20.875 ] 00:25:20.875 }' 00:25:20.875 04:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:20.875 04:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.441 04:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:22.008 [2024-07-23 04:20:30.489761] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:22.008 [2024-07-23 04:20:30.489825] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:25:22.008 [2024-07-23 04:20:30.489842] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:25:22.008 [2024-07-23 04:20:30.490200] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:25:22.008 [2024-07-23 04:20:30.490466] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:25:22.008 [2024-07-23 04:20:30.490487] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:25:22.008 [2024-07-23 04:20:30.490814] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:22.008 BaseBdev4 00:25:22.008 04:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:25:22.008 04:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:25:22.008 04:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:22.008 04:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:25:22.008 04:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:22.008 04:20:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:22.008 04:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:22.008 04:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:22.266 [ 00:25:22.266 { 00:25:22.266 "name": "BaseBdev4", 00:25:22.266 "aliases": [ 00:25:22.266 "acc5d165-df6c-4f5c-9717-cf827bd88508" 00:25:22.266 ], 00:25:22.266 "product_name": "Malloc disk", 00:25:22.266 "block_size": 512, 00:25:22.266 "num_blocks": 65536, 00:25:22.266 "uuid": "acc5d165-df6c-4f5c-9717-cf827bd88508", 00:25:22.266 "assigned_rate_limits": { 00:25:22.266 "rw_ios_per_sec": 0, 00:25:22.266 "rw_mbytes_per_sec": 0, 00:25:22.266 "r_mbytes_per_sec": 0, 00:25:22.266 "w_mbytes_per_sec": 0 00:25:22.266 }, 00:25:22.266 "claimed": true, 00:25:22.266 "claim_type": "exclusive_write", 00:25:22.266 "zoned": false, 00:25:22.266 "supported_io_types": { 00:25:22.267 "read": true, 00:25:22.267 "write": true, 00:25:22.267 "unmap": true, 00:25:22.267 "flush": true, 00:25:22.267 "reset": true, 00:25:22.267 "nvme_admin": false, 00:25:22.267 "nvme_io": false, 00:25:22.267 "nvme_io_md": false, 00:25:22.267 "write_zeroes": true, 00:25:22.267 "zcopy": true, 00:25:22.267 "get_zone_info": false, 00:25:22.267 "zone_management": false, 00:25:22.267 "zone_append": false, 00:25:22.267 "compare": false, 00:25:22.267 "compare_and_write": false, 00:25:22.267 "abort": true, 00:25:22.267 "seek_hole": false, 00:25:22.267 "seek_data": false, 00:25:22.267 "copy": true, 00:25:22.267 "nvme_iov_md": false 00:25:22.267 }, 00:25:22.267 "memory_domains": [ 00:25:22.267 { 00:25:22.267 "dma_device_id": "system", 00:25:22.267 "dma_device_type": 1 00:25:22.267 }, 00:25:22.267 { 00:25:22.267 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:25:22.267 "dma_device_type": 2 00:25:22.267 } 00:25:22.267 ], 00:25:22.267 "driver_specific": {} 00:25:22.267 } 00:25:22.267 ] 00:25:22.267 04:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:25:22.267 04:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:22.267 04:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:22.267 04:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:25:22.267 04:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:22.267 04:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:22.267 04:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:22.267 04:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:22.267 04:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:22.267 04:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:22.267 04:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:22.267 04:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:22.267 04:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:22.267 04:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:22.267 04:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:22.526 04:20:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:22.526 "name": "Existed_Raid", 00:25:22.526 "uuid": "749b4e4a-b5c4-492a-acef-45771ff4f69b", 00:25:22.526 "strip_size_kb": 0, 00:25:22.526 "state": "online", 00:25:22.526 "raid_level": "raid1", 00:25:22.526 "superblock": false, 00:25:22.526 "num_base_bdevs": 4, 00:25:22.526 "num_base_bdevs_discovered": 4, 00:25:22.526 "num_base_bdevs_operational": 4, 00:25:22.526 "base_bdevs_list": [ 00:25:22.526 { 00:25:22.526 "name": "BaseBdev1", 00:25:22.526 "uuid": "90e2e3a3-ea34-485b-bac3-ac07a0ac8e14", 00:25:22.526 "is_configured": true, 00:25:22.526 "data_offset": 0, 00:25:22.526 "data_size": 65536 00:25:22.526 }, 00:25:22.526 { 00:25:22.526 "name": "BaseBdev2", 00:25:22.526 "uuid": "e69815bd-35ae-4fec-9d3a-2bda71c36adf", 00:25:22.526 "is_configured": true, 00:25:22.526 "data_offset": 0, 00:25:22.526 "data_size": 65536 00:25:22.526 }, 00:25:22.526 { 00:25:22.526 "name": "BaseBdev3", 00:25:22.526 "uuid": "5d81ccdb-1255-4050-acce-f36925823de3", 00:25:22.526 "is_configured": true, 00:25:22.526 "data_offset": 0, 00:25:22.526 "data_size": 65536 00:25:22.526 }, 00:25:22.526 { 00:25:22.526 "name": "BaseBdev4", 00:25:22.526 "uuid": "acc5d165-df6c-4f5c-9717-cf827bd88508", 00:25:22.526 "is_configured": true, 00:25:22.526 "data_offset": 0, 00:25:22.526 "data_size": 65536 00:25:22.526 } 00:25:22.526 ] 00:25:22.526 }' 00:25:22.526 04:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:22.526 04:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:23.092 04:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:25:23.092 04:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:25:23.092 04:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:23.092 04:20:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:23.092 04:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:23.092 04:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:25:23.092 04:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:23.092 04:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:25:23.350 [2024-07-23 04:20:31.910254] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:23.350 04:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:23.350 "name": "Existed_Raid", 00:25:23.350 "aliases": [ 00:25:23.350 "749b4e4a-b5c4-492a-acef-45771ff4f69b" 00:25:23.350 ], 00:25:23.350 "product_name": "Raid Volume", 00:25:23.350 "block_size": 512, 00:25:23.350 "num_blocks": 65536, 00:25:23.350 "uuid": "749b4e4a-b5c4-492a-acef-45771ff4f69b", 00:25:23.350 "assigned_rate_limits": { 00:25:23.350 "rw_ios_per_sec": 0, 00:25:23.350 "rw_mbytes_per_sec": 0, 00:25:23.350 "r_mbytes_per_sec": 0, 00:25:23.350 "w_mbytes_per_sec": 0 00:25:23.350 }, 00:25:23.350 "claimed": false, 00:25:23.350 "zoned": false, 00:25:23.350 "supported_io_types": { 00:25:23.350 "read": true, 00:25:23.350 "write": true, 00:25:23.350 "unmap": false, 00:25:23.350 "flush": false, 00:25:23.350 "reset": true, 00:25:23.350 "nvme_admin": false, 00:25:23.350 "nvme_io": false, 00:25:23.350 "nvme_io_md": false, 00:25:23.350 "write_zeroes": true, 00:25:23.350 "zcopy": false, 00:25:23.350 "get_zone_info": false, 00:25:23.350 "zone_management": false, 00:25:23.350 "zone_append": false, 00:25:23.350 "compare": false, 00:25:23.350 "compare_and_write": false, 00:25:23.350 "abort": false, 00:25:23.350 "seek_hole": false, 00:25:23.350 "seek_data": 
false, 00:25:23.350 "copy": false, 00:25:23.350 "nvme_iov_md": false 00:25:23.350 }, 00:25:23.350 "memory_domains": [ 00:25:23.350 { 00:25:23.350 "dma_device_id": "system", 00:25:23.350 "dma_device_type": 1 00:25:23.350 }, 00:25:23.350 { 00:25:23.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:23.350 "dma_device_type": 2 00:25:23.350 }, 00:25:23.350 { 00:25:23.350 "dma_device_id": "system", 00:25:23.350 "dma_device_type": 1 00:25:23.350 }, 00:25:23.350 { 00:25:23.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:23.351 "dma_device_type": 2 00:25:23.351 }, 00:25:23.351 { 00:25:23.351 "dma_device_id": "system", 00:25:23.351 "dma_device_type": 1 00:25:23.351 }, 00:25:23.351 { 00:25:23.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:23.351 "dma_device_type": 2 00:25:23.351 }, 00:25:23.351 { 00:25:23.351 "dma_device_id": "system", 00:25:23.351 "dma_device_type": 1 00:25:23.351 }, 00:25:23.351 { 00:25:23.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:23.351 "dma_device_type": 2 00:25:23.351 } 00:25:23.351 ], 00:25:23.351 "driver_specific": { 00:25:23.351 "raid": { 00:25:23.351 "uuid": "749b4e4a-b5c4-492a-acef-45771ff4f69b", 00:25:23.351 "strip_size_kb": 0, 00:25:23.351 "state": "online", 00:25:23.351 "raid_level": "raid1", 00:25:23.351 "superblock": false, 00:25:23.351 "num_base_bdevs": 4, 00:25:23.351 "num_base_bdevs_discovered": 4, 00:25:23.351 "num_base_bdevs_operational": 4, 00:25:23.351 "base_bdevs_list": [ 00:25:23.351 { 00:25:23.351 "name": "BaseBdev1", 00:25:23.351 "uuid": "90e2e3a3-ea34-485b-bac3-ac07a0ac8e14", 00:25:23.351 "is_configured": true, 00:25:23.351 "data_offset": 0, 00:25:23.351 "data_size": 65536 00:25:23.351 }, 00:25:23.351 { 00:25:23.351 "name": "BaseBdev2", 00:25:23.351 "uuid": "e69815bd-35ae-4fec-9d3a-2bda71c36adf", 00:25:23.351 "is_configured": true, 00:25:23.351 "data_offset": 0, 00:25:23.351 "data_size": 65536 00:25:23.351 }, 00:25:23.351 { 00:25:23.351 "name": "BaseBdev3", 00:25:23.351 "uuid": 
"5d81ccdb-1255-4050-acce-f36925823de3", 00:25:23.351 "is_configured": true, 00:25:23.351 "data_offset": 0, 00:25:23.351 "data_size": 65536 00:25:23.351 }, 00:25:23.351 { 00:25:23.351 "name": "BaseBdev4", 00:25:23.351 "uuid": "acc5d165-df6c-4f5c-9717-cf827bd88508", 00:25:23.351 "is_configured": true, 00:25:23.351 "data_offset": 0, 00:25:23.351 "data_size": 65536 00:25:23.351 } 00:25:23.351 ] 00:25:23.351 } 00:25:23.351 } 00:25:23.351 }' 00:25:23.351 04:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:23.351 04:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:25:23.351 BaseBdev2 00:25:23.351 BaseBdev3 00:25:23.351 BaseBdev4' 00:25:23.351 04:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:23.351 04:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:25:23.351 04:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:23.609 04:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:23.609 "name": "BaseBdev1", 00:25:23.609 "aliases": [ 00:25:23.609 "90e2e3a3-ea34-485b-bac3-ac07a0ac8e14" 00:25:23.609 ], 00:25:23.609 "product_name": "Malloc disk", 00:25:23.609 "block_size": 512, 00:25:23.609 "num_blocks": 65536, 00:25:23.609 "uuid": "90e2e3a3-ea34-485b-bac3-ac07a0ac8e14", 00:25:23.609 "assigned_rate_limits": { 00:25:23.609 "rw_ios_per_sec": 0, 00:25:23.609 "rw_mbytes_per_sec": 0, 00:25:23.609 "r_mbytes_per_sec": 0, 00:25:23.609 "w_mbytes_per_sec": 0 00:25:23.609 }, 00:25:23.609 "claimed": true, 00:25:23.609 "claim_type": "exclusive_write", 00:25:23.609 "zoned": false, 00:25:23.609 "supported_io_types": { 00:25:23.609 "read": true, 00:25:23.609 
"write": true, 00:25:23.609 "unmap": true, 00:25:23.609 "flush": true, 00:25:23.609 "reset": true, 00:25:23.609 "nvme_admin": false, 00:25:23.609 "nvme_io": false, 00:25:23.609 "nvme_io_md": false, 00:25:23.609 "write_zeroes": true, 00:25:23.609 "zcopy": true, 00:25:23.609 "get_zone_info": false, 00:25:23.609 "zone_management": false, 00:25:23.609 "zone_append": false, 00:25:23.609 "compare": false, 00:25:23.609 "compare_and_write": false, 00:25:23.609 "abort": true, 00:25:23.609 "seek_hole": false, 00:25:23.609 "seek_data": false, 00:25:23.609 "copy": true, 00:25:23.609 "nvme_iov_md": false 00:25:23.609 }, 00:25:23.609 "memory_domains": [ 00:25:23.609 { 00:25:23.609 "dma_device_id": "system", 00:25:23.609 "dma_device_type": 1 00:25:23.609 }, 00:25:23.609 { 00:25:23.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:23.609 "dma_device_type": 2 00:25:23.609 } 00:25:23.609 ], 00:25:23.609 "driver_specific": {} 00:25:23.609 }' 00:25:23.609 04:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:23.609 04:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:23.609 04:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:23.609 04:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:23.609 04:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:23.609 04:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:23.609 04:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:23.867 04:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:23.867 04:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:23.867 04:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:23.867 04:20:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:23.867 04:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:23.867 04:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:23.867 04:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:25:23.867 04:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:24.133 04:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:24.133 "name": "BaseBdev2", 00:25:24.133 "aliases": [ 00:25:24.133 "e69815bd-35ae-4fec-9d3a-2bda71c36adf" 00:25:24.133 ], 00:25:24.133 "product_name": "Malloc disk", 00:25:24.133 "block_size": 512, 00:25:24.133 "num_blocks": 65536, 00:25:24.133 "uuid": "e69815bd-35ae-4fec-9d3a-2bda71c36adf", 00:25:24.133 "assigned_rate_limits": { 00:25:24.133 "rw_ios_per_sec": 0, 00:25:24.133 "rw_mbytes_per_sec": 0, 00:25:24.133 "r_mbytes_per_sec": 0, 00:25:24.133 "w_mbytes_per_sec": 0 00:25:24.133 }, 00:25:24.133 "claimed": true, 00:25:24.133 "claim_type": "exclusive_write", 00:25:24.133 "zoned": false, 00:25:24.133 "supported_io_types": { 00:25:24.133 "read": true, 00:25:24.133 "write": true, 00:25:24.133 "unmap": true, 00:25:24.133 "flush": true, 00:25:24.133 "reset": true, 00:25:24.133 "nvme_admin": false, 00:25:24.133 "nvme_io": false, 00:25:24.133 "nvme_io_md": false, 00:25:24.133 "write_zeroes": true, 00:25:24.133 "zcopy": true, 00:25:24.133 "get_zone_info": false, 00:25:24.133 "zone_management": false, 00:25:24.133 "zone_append": false, 00:25:24.133 "compare": false, 00:25:24.133 "compare_and_write": false, 00:25:24.133 "abort": true, 00:25:24.133 "seek_hole": false, 00:25:24.133 "seek_data": false, 00:25:24.133 "copy": true, 00:25:24.133 "nvme_iov_md": false 00:25:24.133 }, 
00:25:24.133 "memory_domains": [ 00:25:24.133 { 00:25:24.133 "dma_device_id": "system", 00:25:24.133 "dma_device_type": 1 00:25:24.133 }, 00:25:24.133 { 00:25:24.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:24.133 "dma_device_type": 2 00:25:24.133 } 00:25:24.133 ], 00:25:24.133 "driver_specific": {} 00:25:24.133 }' 00:25:24.133 04:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:24.133 04:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:24.133 04:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:24.133 04:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:24.133 04:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:24.392 04:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:24.392 04:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:24.392 04:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:24.392 04:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:24.392 04:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:24.392 04:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:24.392 04:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:24.392 04:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:24.392 04:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:25:24.392 04:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:24.656 04:20:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:24.656 "name": "BaseBdev3", 00:25:24.656 "aliases": [ 00:25:24.656 "5d81ccdb-1255-4050-acce-f36925823de3" 00:25:24.656 ], 00:25:24.656 "product_name": "Malloc disk", 00:25:24.656 "block_size": 512, 00:25:24.656 "num_blocks": 65536, 00:25:24.656 "uuid": "5d81ccdb-1255-4050-acce-f36925823de3", 00:25:24.656 "assigned_rate_limits": { 00:25:24.656 "rw_ios_per_sec": 0, 00:25:24.656 "rw_mbytes_per_sec": 0, 00:25:24.656 "r_mbytes_per_sec": 0, 00:25:24.656 "w_mbytes_per_sec": 0 00:25:24.656 }, 00:25:24.656 "claimed": true, 00:25:24.656 "claim_type": "exclusive_write", 00:25:24.656 "zoned": false, 00:25:24.656 "supported_io_types": { 00:25:24.656 "read": true, 00:25:24.656 "write": true, 00:25:24.656 "unmap": true, 00:25:24.656 "flush": true, 00:25:24.656 "reset": true, 00:25:24.656 "nvme_admin": false, 00:25:24.656 "nvme_io": false, 00:25:24.656 "nvme_io_md": false, 00:25:24.656 "write_zeroes": true, 00:25:24.656 "zcopy": true, 00:25:24.656 "get_zone_info": false, 00:25:24.656 "zone_management": false, 00:25:24.656 "zone_append": false, 00:25:24.656 "compare": false, 00:25:24.656 "compare_and_write": false, 00:25:24.656 "abort": true, 00:25:24.656 "seek_hole": false, 00:25:24.656 "seek_data": false, 00:25:24.656 "copy": true, 00:25:24.656 "nvme_iov_md": false 00:25:24.656 }, 00:25:24.656 "memory_domains": [ 00:25:24.656 { 00:25:24.656 "dma_device_id": "system", 00:25:24.656 "dma_device_type": 1 00:25:24.656 }, 00:25:24.656 { 00:25:24.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:24.656 "dma_device_type": 2 00:25:24.656 } 00:25:24.656 ], 00:25:24.656 "driver_specific": {} 00:25:24.656 }' 00:25:24.656 04:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:24.656 04:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:24.656 04:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 
== 512 ]] 00:25:24.656 04:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:24.915 04:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:24.915 04:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:24.915 04:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:24.915 04:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:24.915 04:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:24.915 04:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:24.915 04:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:24.915 04:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:24.915 04:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:24.915 04:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:25:24.915 04:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:25.173 04:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:25.173 "name": "BaseBdev4", 00:25:25.173 "aliases": [ 00:25:25.173 "acc5d165-df6c-4f5c-9717-cf827bd88508" 00:25:25.173 ], 00:25:25.173 "product_name": "Malloc disk", 00:25:25.173 "block_size": 512, 00:25:25.173 "num_blocks": 65536, 00:25:25.173 "uuid": "acc5d165-df6c-4f5c-9717-cf827bd88508", 00:25:25.173 "assigned_rate_limits": { 00:25:25.173 "rw_ios_per_sec": 0, 00:25:25.173 "rw_mbytes_per_sec": 0, 00:25:25.173 "r_mbytes_per_sec": 0, 00:25:25.173 "w_mbytes_per_sec": 0 00:25:25.173 }, 00:25:25.173 "claimed": true, 00:25:25.173 
"claim_type": "exclusive_write", 00:25:25.173 "zoned": false, 00:25:25.173 "supported_io_types": { 00:25:25.173 "read": true, 00:25:25.173 "write": true, 00:25:25.173 "unmap": true, 00:25:25.173 "flush": true, 00:25:25.173 "reset": true, 00:25:25.173 "nvme_admin": false, 00:25:25.173 "nvme_io": false, 00:25:25.173 "nvme_io_md": false, 00:25:25.173 "write_zeroes": true, 00:25:25.173 "zcopy": true, 00:25:25.173 "get_zone_info": false, 00:25:25.173 "zone_management": false, 00:25:25.173 "zone_append": false, 00:25:25.173 "compare": false, 00:25:25.173 "compare_and_write": false, 00:25:25.173 "abort": true, 00:25:25.173 "seek_hole": false, 00:25:25.173 "seek_data": false, 00:25:25.173 "copy": true, 00:25:25.173 "nvme_iov_md": false 00:25:25.173 }, 00:25:25.173 "memory_domains": [ 00:25:25.173 { 00:25:25.173 "dma_device_id": "system", 00:25:25.173 "dma_device_type": 1 00:25:25.173 }, 00:25:25.173 { 00:25:25.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:25.173 "dma_device_type": 2 00:25:25.173 } 00:25:25.173 ], 00:25:25.173 "driver_specific": {} 00:25:25.173 }' 00:25:25.173 04:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:25.173 04:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:25.431 04:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:25.431 04:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:25.431 04:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:25.431 04:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:25.431 04:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:25.431 04:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:25.431 04:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == 
null ]] 00:25:25.431 04:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:25.431 04:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:25.690 04:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:25.690 04:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:25.690 [2024-07-23 04:20:34.360526] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:25.690 04:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:25:25.690 04:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:25:25.690 04:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:25.690 04:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:25:25.690 04:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:25:25.690 04:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:25:25.690 04:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:25.690 04:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:25.690 04:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:25.690 04:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:25.690 04:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:25.690 04:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:25.690 04:20:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:25.690 04:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:25.690 04:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:25.690 04:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:25.690 04:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:25.948 04:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:25.948 "name": "Existed_Raid", 00:25:25.948 "uuid": "749b4e4a-b5c4-492a-acef-45771ff4f69b", 00:25:25.948 "strip_size_kb": 0, 00:25:25.948 "state": "online", 00:25:25.948 "raid_level": "raid1", 00:25:25.948 "superblock": false, 00:25:25.948 "num_base_bdevs": 4, 00:25:25.948 "num_base_bdevs_discovered": 3, 00:25:25.948 "num_base_bdevs_operational": 3, 00:25:25.948 "base_bdevs_list": [ 00:25:25.948 { 00:25:25.948 "name": null, 00:25:25.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:25.948 "is_configured": false, 00:25:25.948 "data_offset": 0, 00:25:25.948 "data_size": 65536 00:25:25.948 }, 00:25:25.948 { 00:25:25.948 "name": "BaseBdev2", 00:25:25.948 "uuid": "e69815bd-35ae-4fec-9d3a-2bda71c36adf", 00:25:25.948 "is_configured": true, 00:25:25.948 "data_offset": 0, 00:25:25.948 "data_size": 65536 00:25:25.948 }, 00:25:25.948 { 00:25:25.948 "name": "BaseBdev3", 00:25:25.948 "uuid": "5d81ccdb-1255-4050-acce-f36925823de3", 00:25:25.948 "is_configured": true, 00:25:25.948 "data_offset": 0, 00:25:25.948 "data_size": 65536 00:25:25.948 }, 00:25:25.948 { 00:25:25.948 "name": "BaseBdev4", 00:25:25.948 "uuid": "acc5d165-df6c-4f5c-9717-cf827bd88508", 00:25:25.948 "is_configured": true, 00:25:25.948 "data_offset": 0, 00:25:25.948 
"data_size": 65536 00:25:25.948 } 00:25:25.948 ] 00:25:25.948 }' 00:25:25.948 04:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:25.948 04:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.515 04:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:25:26.515 04:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:26.515 04:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:26.515 04:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:26.773 04:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:26.773 04:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:26.773 04:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:25:27.032 [2024-07-23 04:20:35.666492] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:27.290 04:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:27.290 04:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:27.290 04:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:27.290 04:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:27.290 04:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:27.290 04:20:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:27.290 04:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:25:27.548 [2024-07-23 04:20:36.252755] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:27.806 04:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:27.806 04:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:27.806 04:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:27.806 04:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:28.065 04:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:28.065 04:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:28.065 04:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:25:28.065 [2024-07-23 04:20:36.842247] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:25:28.065 [2024-07-23 04:20:36.842366] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:28.323 [2024-07-23 04:20:36.975083] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:28.323 [2024-07-23 04:20:36.975154] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:28.323 [2024-07-23 04:20:36.975175] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x61600003ff80 name Existed_Raid, state offline 00:25:28.323 04:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:28.323 04:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:28.323 04:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:28.323 04:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:25:28.582 04:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:25:28.582 04:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:25:28.582 04:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:25:28.582 04:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:25:28.582 04:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:28.582 04:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:28.840 BaseBdev2 00:25:28.840 04:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:25:28.840 04:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:25:28.840 04:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:28.840 04:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:25:28.840 04:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:28.840 04:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 
00:25:28.840 04:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:29.099 04:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:29.358 [ 00:25:29.358 { 00:25:29.358 "name": "BaseBdev2", 00:25:29.358 "aliases": [ 00:25:29.358 "22e7507a-b760-4773-aef2-c4681e696993" 00:25:29.358 ], 00:25:29.358 "product_name": "Malloc disk", 00:25:29.358 "block_size": 512, 00:25:29.358 "num_blocks": 65536, 00:25:29.358 "uuid": "22e7507a-b760-4773-aef2-c4681e696993", 00:25:29.358 "assigned_rate_limits": { 00:25:29.358 "rw_ios_per_sec": 0, 00:25:29.358 "rw_mbytes_per_sec": 0, 00:25:29.358 "r_mbytes_per_sec": 0, 00:25:29.358 "w_mbytes_per_sec": 0 00:25:29.358 }, 00:25:29.358 "claimed": false, 00:25:29.358 "zoned": false, 00:25:29.358 "supported_io_types": { 00:25:29.358 "read": true, 00:25:29.358 "write": true, 00:25:29.358 "unmap": true, 00:25:29.358 "flush": true, 00:25:29.358 "reset": true, 00:25:29.358 "nvme_admin": false, 00:25:29.358 "nvme_io": false, 00:25:29.358 "nvme_io_md": false, 00:25:29.358 "write_zeroes": true, 00:25:29.358 "zcopy": true, 00:25:29.358 "get_zone_info": false, 00:25:29.358 "zone_management": false, 00:25:29.358 "zone_append": false, 00:25:29.358 "compare": false, 00:25:29.358 "compare_and_write": false, 00:25:29.358 "abort": true, 00:25:29.358 "seek_hole": false, 00:25:29.358 "seek_data": false, 00:25:29.358 "copy": true, 00:25:29.358 "nvme_iov_md": false 00:25:29.358 }, 00:25:29.358 "memory_domains": [ 00:25:29.358 { 00:25:29.358 "dma_device_id": "system", 00:25:29.358 "dma_device_type": 1 00:25:29.358 }, 00:25:29.358 { 00:25:29.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:29.358 "dma_device_type": 2 00:25:29.358 } 00:25:29.358 ], 00:25:29.358 
"driver_specific": {} 00:25:29.358 } 00:25:29.358 ] 00:25:29.358 04:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:25:29.358 04:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:29.358 04:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:29.358 04:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:29.616 BaseBdev3 00:25:29.616 04:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:25:29.616 04:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:25:29.616 04:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:29.616 04:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:25:29.616 04:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:29.616 04:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:29.616 04:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:29.875 04:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:30.134 [ 00:25:30.134 { 00:25:30.134 "name": "BaseBdev3", 00:25:30.134 "aliases": [ 00:25:30.134 "cd382e66-3266-4903-a0dc-62879c11cb03" 00:25:30.134 ], 00:25:30.134 "product_name": "Malloc disk", 00:25:30.134 "block_size": 512, 00:25:30.134 "num_blocks": 65536, 00:25:30.134 "uuid": 
"cd382e66-3266-4903-a0dc-62879c11cb03", 00:25:30.134 "assigned_rate_limits": { 00:25:30.134 "rw_ios_per_sec": 0, 00:25:30.134 "rw_mbytes_per_sec": 0, 00:25:30.134 "r_mbytes_per_sec": 0, 00:25:30.134 "w_mbytes_per_sec": 0 00:25:30.134 }, 00:25:30.134 "claimed": false, 00:25:30.134 "zoned": false, 00:25:30.134 "supported_io_types": { 00:25:30.134 "read": true, 00:25:30.134 "write": true, 00:25:30.134 "unmap": true, 00:25:30.134 "flush": true, 00:25:30.134 "reset": true, 00:25:30.134 "nvme_admin": false, 00:25:30.134 "nvme_io": false, 00:25:30.134 "nvme_io_md": false, 00:25:30.134 "write_zeroes": true, 00:25:30.134 "zcopy": true, 00:25:30.134 "get_zone_info": false, 00:25:30.134 "zone_management": false, 00:25:30.134 "zone_append": false, 00:25:30.134 "compare": false, 00:25:30.134 "compare_and_write": false, 00:25:30.134 "abort": true, 00:25:30.134 "seek_hole": false, 00:25:30.134 "seek_data": false, 00:25:30.134 "copy": true, 00:25:30.134 "nvme_iov_md": false 00:25:30.134 }, 00:25:30.134 "memory_domains": [ 00:25:30.134 { 00:25:30.134 "dma_device_id": "system", 00:25:30.134 "dma_device_type": 1 00:25:30.134 }, 00:25:30.134 { 00:25:30.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:30.134 "dma_device_type": 2 00:25:30.134 } 00:25:30.134 ], 00:25:30.134 "driver_specific": {} 00:25:30.134 } 00:25:30.134 ] 00:25:30.134 04:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:25:30.134 04:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:30.134 04:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:30.134 04:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:30.392 BaseBdev4 00:25:30.392 04:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 
00:25:30.392 04:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:25:30.392 04:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:30.392 04:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:25:30.392 04:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:30.393 04:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:30.393 04:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:30.393 04:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:30.651 [ 00:25:30.651 { 00:25:30.651 "name": "BaseBdev4", 00:25:30.651 "aliases": [ 00:25:30.651 "942ecde8-ceac-4833-864f-039b6cba7148" 00:25:30.651 ], 00:25:30.651 "product_name": "Malloc disk", 00:25:30.651 "block_size": 512, 00:25:30.651 "num_blocks": 65536, 00:25:30.651 "uuid": "942ecde8-ceac-4833-864f-039b6cba7148", 00:25:30.651 "assigned_rate_limits": { 00:25:30.651 "rw_ios_per_sec": 0, 00:25:30.651 "rw_mbytes_per_sec": 0, 00:25:30.651 "r_mbytes_per_sec": 0, 00:25:30.651 "w_mbytes_per_sec": 0 00:25:30.651 }, 00:25:30.651 "claimed": false, 00:25:30.651 "zoned": false, 00:25:30.651 "supported_io_types": { 00:25:30.651 "read": true, 00:25:30.651 "write": true, 00:25:30.651 "unmap": true, 00:25:30.651 "flush": true, 00:25:30.651 "reset": true, 00:25:30.651 "nvme_admin": false, 00:25:30.651 "nvme_io": false, 00:25:30.651 "nvme_io_md": false, 00:25:30.651 "write_zeroes": true, 00:25:30.651 "zcopy": true, 00:25:30.651 "get_zone_info": false, 00:25:30.651 "zone_management": false, 00:25:30.651 
"zone_append": false, 00:25:30.651 "compare": false, 00:25:30.651 "compare_and_write": false, 00:25:30.651 "abort": true, 00:25:30.651 "seek_hole": false, 00:25:30.651 "seek_data": false, 00:25:30.651 "copy": true, 00:25:30.651 "nvme_iov_md": false 00:25:30.651 }, 00:25:30.651 "memory_domains": [ 00:25:30.651 { 00:25:30.651 "dma_device_id": "system", 00:25:30.651 "dma_device_type": 1 00:25:30.651 }, 00:25:30.651 { 00:25:30.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:30.651 "dma_device_type": 2 00:25:30.651 } 00:25:30.651 ], 00:25:30.651 "driver_specific": {} 00:25:30.651 } 00:25:30.651 ] 00:25:30.651 04:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:25:30.651 04:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:30.651 04:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:30.651 04:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:30.910 [2024-07-23 04:20:39.594641] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:30.910 [2024-07-23 04:20:39.594699] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:30.910 [2024-07-23 04:20:39.594731] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:30.910 [2024-07-23 04:20:39.597154] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:30.910 [2024-07-23 04:20:39.597218] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:30.910 04:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:30.910 04:20:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:30.910 04:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:30.910 04:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:30.910 04:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:30.910 04:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:30.910 04:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:30.910 04:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:30.910 04:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:30.910 04:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:30.910 04:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:30.910 04:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:31.169 04:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:31.169 "name": "Existed_Raid", 00:25:31.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:31.169 "strip_size_kb": 0, 00:25:31.169 "state": "configuring", 00:25:31.169 "raid_level": "raid1", 00:25:31.169 "superblock": false, 00:25:31.169 "num_base_bdevs": 4, 00:25:31.169 "num_base_bdevs_discovered": 3, 00:25:31.169 "num_base_bdevs_operational": 4, 00:25:31.169 "base_bdevs_list": [ 00:25:31.169 { 00:25:31.169 "name": "BaseBdev1", 00:25:31.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:31.169 "is_configured": false, 00:25:31.169 "data_offset": 
0, 00:25:31.169 "data_size": 0 00:25:31.169 }, 00:25:31.169 { 00:25:31.169 "name": "BaseBdev2", 00:25:31.169 "uuid": "22e7507a-b760-4773-aef2-c4681e696993", 00:25:31.169 "is_configured": true, 00:25:31.169 "data_offset": 0, 00:25:31.169 "data_size": 65536 00:25:31.169 }, 00:25:31.169 { 00:25:31.169 "name": "BaseBdev3", 00:25:31.169 "uuid": "cd382e66-3266-4903-a0dc-62879c11cb03", 00:25:31.169 "is_configured": true, 00:25:31.169 "data_offset": 0, 00:25:31.169 "data_size": 65536 00:25:31.169 }, 00:25:31.169 { 00:25:31.169 "name": "BaseBdev4", 00:25:31.169 "uuid": "942ecde8-ceac-4833-864f-039b6cba7148", 00:25:31.169 "is_configured": true, 00:25:31.169 "data_offset": 0, 00:25:31.169 "data_size": 65536 00:25:31.169 } 00:25:31.169 ] 00:25:31.169 }' 00:25:31.169 04:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:31.169 04:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:31.736 04:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:25:31.994 [2024-07-23 04:20:40.585289] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:31.994 04:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:31.994 04:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:31.994 04:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:31.994 04:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:31.994 04:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:31.994 04:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 
00:25:31.994 04:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:31.994 04:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:31.994 04:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:31.994 04:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:31.994 04:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:31.994 04:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:32.253 04:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:32.253 "name": "Existed_Raid", 00:25:32.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:32.253 "strip_size_kb": 0, 00:25:32.253 "state": "configuring", 00:25:32.253 "raid_level": "raid1", 00:25:32.253 "superblock": false, 00:25:32.253 "num_base_bdevs": 4, 00:25:32.253 "num_base_bdevs_discovered": 2, 00:25:32.253 "num_base_bdevs_operational": 4, 00:25:32.253 "base_bdevs_list": [ 00:25:32.253 { 00:25:32.253 "name": "BaseBdev1", 00:25:32.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:32.253 "is_configured": false, 00:25:32.253 "data_offset": 0, 00:25:32.253 "data_size": 0 00:25:32.253 }, 00:25:32.253 { 00:25:32.253 "name": null, 00:25:32.253 "uuid": "22e7507a-b760-4773-aef2-c4681e696993", 00:25:32.253 "is_configured": false, 00:25:32.253 "data_offset": 0, 00:25:32.253 "data_size": 65536 00:25:32.253 }, 00:25:32.253 { 00:25:32.253 "name": "BaseBdev3", 00:25:32.253 "uuid": "cd382e66-3266-4903-a0dc-62879c11cb03", 00:25:32.253 "is_configured": true, 00:25:32.253 "data_offset": 0, 00:25:32.253 "data_size": 65536 00:25:32.253 }, 00:25:32.253 { 00:25:32.253 "name": "BaseBdev4", 00:25:32.253 
"uuid": "942ecde8-ceac-4833-864f-039b6cba7148", 00:25:32.253 "is_configured": true, 00:25:32.253 "data_offset": 0, 00:25:32.253 "data_size": 65536 00:25:32.253 } 00:25:32.253 ] 00:25:32.253 }' 00:25:32.253 04:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:32.253 04:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:32.824 04:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:32.824 04:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:33.084 04:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:25:33.084 04:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:33.342 [2024-07-23 04:20:41.867686] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:33.342 BaseBdev1 00:25:33.342 04:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:25:33.342 04:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:25:33.342 04:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:33.342 04:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:25:33.342 04:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:33.342 04:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:33.342 04:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:33.342 04:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:33.600 [ 00:25:33.600 { 00:25:33.600 "name": "BaseBdev1", 00:25:33.600 "aliases": [ 00:25:33.600 "65efd9c7-4112-4a42-9fd1-a248bdfae58b" 00:25:33.600 ], 00:25:33.600 "product_name": "Malloc disk", 00:25:33.600 "block_size": 512, 00:25:33.600 "num_blocks": 65536, 00:25:33.600 "uuid": "65efd9c7-4112-4a42-9fd1-a248bdfae58b", 00:25:33.600 "assigned_rate_limits": { 00:25:33.600 "rw_ios_per_sec": 0, 00:25:33.600 "rw_mbytes_per_sec": 0, 00:25:33.600 "r_mbytes_per_sec": 0, 00:25:33.600 "w_mbytes_per_sec": 0 00:25:33.600 }, 00:25:33.600 "claimed": true, 00:25:33.600 "claim_type": "exclusive_write", 00:25:33.600 "zoned": false, 00:25:33.600 "supported_io_types": { 00:25:33.600 "read": true, 00:25:33.600 "write": true, 00:25:33.600 "unmap": true, 00:25:33.600 "flush": true, 00:25:33.600 "reset": true, 00:25:33.600 "nvme_admin": false, 00:25:33.600 "nvme_io": false, 00:25:33.600 "nvme_io_md": false, 00:25:33.600 "write_zeroes": true, 00:25:33.600 "zcopy": true, 00:25:33.600 "get_zone_info": false, 00:25:33.600 "zone_management": false, 00:25:33.600 "zone_append": false, 00:25:33.600 "compare": false, 00:25:33.600 "compare_and_write": false, 00:25:33.600 "abort": true, 00:25:33.600 "seek_hole": false, 00:25:33.600 "seek_data": false, 00:25:33.600 "copy": true, 00:25:33.600 "nvme_iov_md": false 00:25:33.600 }, 00:25:33.600 "memory_domains": [ 00:25:33.600 { 00:25:33.600 "dma_device_id": "system", 00:25:33.600 "dma_device_type": 1 00:25:33.600 }, 00:25:33.600 { 00:25:33.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:33.600 "dma_device_type": 2 00:25:33.600 } 00:25:33.600 ], 00:25:33.600 "driver_specific": {} 00:25:33.600 } 00:25:33.600 ] 
00:25:33.600 04:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:25:33.600 04:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:33.600 04:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:33.600 04:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:33.600 04:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:33.600 04:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:33.600 04:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:33.600 04:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:33.600 04:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:33.600 04:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:33.600 04:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:33.600 04:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:33.600 04:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:33.859 04:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:33.859 "name": "Existed_Raid", 00:25:33.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:33.859 "strip_size_kb": 0, 00:25:33.859 "state": "configuring", 00:25:33.859 "raid_level": "raid1", 00:25:33.859 "superblock": false, 00:25:33.859 "num_base_bdevs": 4, 00:25:33.859 
"num_base_bdevs_discovered": 3, 00:25:33.859 "num_base_bdevs_operational": 4, 00:25:33.859 "base_bdevs_list": [ 00:25:33.859 { 00:25:33.859 "name": "BaseBdev1", 00:25:33.859 "uuid": "65efd9c7-4112-4a42-9fd1-a248bdfae58b", 00:25:33.859 "is_configured": true, 00:25:33.859 "data_offset": 0, 00:25:33.859 "data_size": 65536 00:25:33.859 }, 00:25:33.859 { 00:25:33.859 "name": null, 00:25:33.859 "uuid": "22e7507a-b760-4773-aef2-c4681e696993", 00:25:33.859 "is_configured": false, 00:25:33.859 "data_offset": 0, 00:25:33.859 "data_size": 65536 00:25:33.859 }, 00:25:33.859 { 00:25:33.859 "name": "BaseBdev3", 00:25:33.859 "uuid": "cd382e66-3266-4903-a0dc-62879c11cb03", 00:25:33.859 "is_configured": true, 00:25:33.859 "data_offset": 0, 00:25:33.859 "data_size": 65536 00:25:33.859 }, 00:25:33.859 { 00:25:33.859 "name": "BaseBdev4", 00:25:33.859 "uuid": "942ecde8-ceac-4833-864f-039b6cba7148", 00:25:33.859 "is_configured": true, 00:25:33.859 "data_offset": 0, 00:25:33.859 "data_size": 65536 00:25:33.859 } 00:25:33.859 ] 00:25:33.859 }' 00:25:33.859 04:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:33.859 04:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:34.425 04:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:34.425 04:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:34.683 04:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:25:34.683 04:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:25:34.941 [2024-07-23 04:20:43.552328] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev3 00:25:34.941 04:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:34.941 04:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:34.941 04:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:34.941 04:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:34.941 04:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:34.941 04:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:34.941 04:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:34.941 04:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:34.941 04:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:34.941 04:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:34.941 04:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:34.941 04:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:35.201 04:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:35.201 "name": "Existed_Raid", 00:25:35.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:35.201 "strip_size_kb": 0, 00:25:35.202 "state": "configuring", 00:25:35.202 "raid_level": "raid1", 00:25:35.202 "superblock": false, 00:25:35.202 "num_base_bdevs": 4, 00:25:35.202 "num_base_bdevs_discovered": 2, 00:25:35.202 "num_base_bdevs_operational": 4, 00:25:35.202 "base_bdevs_list": 
[ 00:25:35.202 { 00:25:35.202 "name": "BaseBdev1", 00:25:35.202 "uuid": "65efd9c7-4112-4a42-9fd1-a248bdfae58b", 00:25:35.202 "is_configured": true, 00:25:35.202 "data_offset": 0, 00:25:35.202 "data_size": 65536 00:25:35.202 }, 00:25:35.202 { 00:25:35.202 "name": null, 00:25:35.202 "uuid": "22e7507a-b760-4773-aef2-c4681e696993", 00:25:35.202 "is_configured": false, 00:25:35.202 "data_offset": 0, 00:25:35.202 "data_size": 65536 00:25:35.202 }, 00:25:35.202 { 00:25:35.202 "name": null, 00:25:35.202 "uuid": "cd382e66-3266-4903-a0dc-62879c11cb03", 00:25:35.202 "is_configured": false, 00:25:35.202 "data_offset": 0, 00:25:35.202 "data_size": 65536 00:25:35.202 }, 00:25:35.202 { 00:25:35.202 "name": "BaseBdev4", 00:25:35.202 "uuid": "942ecde8-ceac-4833-864f-039b6cba7148", 00:25:35.202 "is_configured": true, 00:25:35.202 "data_offset": 0, 00:25:35.202 "data_size": 65536 00:25:35.202 } 00:25:35.202 ] 00:25:35.202 }' 00:25:35.202 04:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:35.202 04:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:35.794 04:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:35.794 04:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:35.794 04:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:25:35.794 04:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:25:36.061 [2024-07-23 04:20:44.759608] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:36.061 04:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 
-- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:36.061 04:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:36.061 04:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:36.061 04:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:36.061 04:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:36.061 04:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:36.061 04:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:36.061 04:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:36.061 04:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:36.061 04:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:36.061 04:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:36.061 04:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:36.320 04:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:36.320 "name": "Existed_Raid", 00:25:36.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:36.320 "strip_size_kb": 0, 00:25:36.320 "state": "configuring", 00:25:36.320 "raid_level": "raid1", 00:25:36.320 "superblock": false, 00:25:36.320 "num_base_bdevs": 4, 00:25:36.320 "num_base_bdevs_discovered": 3, 00:25:36.320 "num_base_bdevs_operational": 4, 00:25:36.320 "base_bdevs_list": [ 00:25:36.320 { 00:25:36.320 "name": "BaseBdev1", 00:25:36.320 "uuid": 
"65efd9c7-4112-4a42-9fd1-a248bdfae58b", 00:25:36.320 "is_configured": true, 00:25:36.320 "data_offset": 0, 00:25:36.320 "data_size": 65536 00:25:36.320 }, 00:25:36.320 { 00:25:36.320 "name": null, 00:25:36.320 "uuid": "22e7507a-b760-4773-aef2-c4681e696993", 00:25:36.320 "is_configured": false, 00:25:36.320 "data_offset": 0, 00:25:36.320 "data_size": 65536 00:25:36.320 }, 00:25:36.320 { 00:25:36.320 "name": "BaseBdev3", 00:25:36.320 "uuid": "cd382e66-3266-4903-a0dc-62879c11cb03", 00:25:36.320 "is_configured": true, 00:25:36.320 "data_offset": 0, 00:25:36.320 "data_size": 65536 00:25:36.320 }, 00:25:36.320 { 00:25:36.320 "name": "BaseBdev4", 00:25:36.320 "uuid": "942ecde8-ceac-4833-864f-039b6cba7148", 00:25:36.320 "is_configured": true, 00:25:36.320 "data_offset": 0, 00:25:36.320 "data_size": 65536 00:25:36.320 } 00:25:36.320 ] 00:25:36.320 }' 00:25:36.320 04:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:36.320 04:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.886 04:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:36.886 04:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:37.145 04:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:25:37.145 04:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:37.403 [2024-07-23 04:20:46.035329] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:37.661 04:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:37.661 04:20:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:37.661 04:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:37.661 04:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:37.661 04:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:37.661 04:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:37.661 04:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:37.661 04:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:37.661 04:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:37.661 04:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:37.661 04:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:37.661 04:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:37.661 04:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:37.661 "name": "Existed_Raid", 00:25:37.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:37.661 "strip_size_kb": 0, 00:25:37.661 "state": "configuring", 00:25:37.661 "raid_level": "raid1", 00:25:37.661 "superblock": false, 00:25:37.661 "num_base_bdevs": 4, 00:25:37.661 "num_base_bdevs_discovered": 2, 00:25:37.661 "num_base_bdevs_operational": 4, 00:25:37.661 "base_bdevs_list": [ 00:25:37.661 { 00:25:37.661 "name": null, 00:25:37.661 "uuid": "65efd9c7-4112-4a42-9fd1-a248bdfae58b", 00:25:37.661 "is_configured": false, 00:25:37.661 "data_offset": 0, 
00:25:37.661 "data_size": 65536 00:25:37.661 }, 00:25:37.661 { 00:25:37.661 "name": null, 00:25:37.661 "uuid": "22e7507a-b760-4773-aef2-c4681e696993", 00:25:37.661 "is_configured": false, 00:25:37.661 "data_offset": 0, 00:25:37.661 "data_size": 65536 00:25:37.661 }, 00:25:37.661 { 00:25:37.661 "name": "BaseBdev3", 00:25:37.661 "uuid": "cd382e66-3266-4903-a0dc-62879c11cb03", 00:25:37.661 "is_configured": true, 00:25:37.661 "data_offset": 0, 00:25:37.661 "data_size": 65536 00:25:37.661 }, 00:25:37.661 { 00:25:37.661 "name": "BaseBdev4", 00:25:37.661 "uuid": "942ecde8-ceac-4833-864f-039b6cba7148", 00:25:37.661 "is_configured": true, 00:25:37.661 "data_offset": 0, 00:25:37.661 "data_size": 65536 00:25:37.661 } 00:25:37.661 ] 00:25:37.661 }' 00:25:37.661 04:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:37.661 04:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.227 04:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:38.227 04:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:38.484 04:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:25:38.484 04:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:25:39.049 [2024-07-23 04:20:47.715198] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:39.049 04:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:39.050 04:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:25:39.050 04:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:39.050 04:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:39.050 04:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:39.050 04:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:39.050 04:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:39.050 04:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:39.050 04:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:39.050 04:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:39.050 04:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:39.050 04:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:39.307 04:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:39.307 "name": "Existed_Raid", 00:25:39.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:39.307 "strip_size_kb": 0, 00:25:39.307 "state": "configuring", 00:25:39.307 "raid_level": "raid1", 00:25:39.307 "superblock": false, 00:25:39.307 "num_base_bdevs": 4, 00:25:39.307 "num_base_bdevs_discovered": 3, 00:25:39.307 "num_base_bdevs_operational": 4, 00:25:39.307 "base_bdevs_list": [ 00:25:39.307 { 00:25:39.307 "name": null, 00:25:39.307 "uuid": "65efd9c7-4112-4a42-9fd1-a248bdfae58b", 00:25:39.307 "is_configured": false, 00:25:39.307 "data_offset": 0, 00:25:39.307 "data_size": 65536 00:25:39.307 }, 00:25:39.307 { 
00:25:39.307 "name": "BaseBdev2", 00:25:39.307 "uuid": "22e7507a-b760-4773-aef2-c4681e696993", 00:25:39.307 "is_configured": true, 00:25:39.307 "data_offset": 0, 00:25:39.307 "data_size": 65536 00:25:39.307 }, 00:25:39.307 { 00:25:39.307 "name": "BaseBdev3", 00:25:39.307 "uuid": "cd382e66-3266-4903-a0dc-62879c11cb03", 00:25:39.307 "is_configured": true, 00:25:39.307 "data_offset": 0, 00:25:39.307 "data_size": 65536 00:25:39.307 }, 00:25:39.307 { 00:25:39.307 "name": "BaseBdev4", 00:25:39.307 "uuid": "942ecde8-ceac-4833-864f-039b6cba7148", 00:25:39.307 "is_configured": true, 00:25:39.307 "data_offset": 0, 00:25:39.307 "data_size": 65536 00:25:39.307 } 00:25:39.307 ] 00:25:39.307 }' 00:25:39.307 04:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:39.307 04:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:39.873 04:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:39.873 04:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:40.131 04:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:25:40.131 04:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:40.131 04:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:25:40.389 04:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 65efd9c7-4112-4a42-9fd1-a248bdfae58b 00:25:40.647 [2024-07-23 04:20:49.257006] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:25:40.647 [2024-07-23 04:20:49.257056] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042080 00:25:40.647 [2024-07-23 04:20:49.257075] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:25:40.647 [2024-07-23 04:20:49.257421] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010b20 00:25:40.647 [2024-07-23 04:20:49.257655] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042080 00:25:40.647 [2024-07-23 04:20:49.257669] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000042080 00:25:40.647 [2024-07-23 04:20:49.257966] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:40.647 NewBaseBdev 00:25:40.647 04:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:25:40.647 04:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:25:40.647 04:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:40.647 04:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:25:40.647 04:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:40.648 04:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:40.648 04:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:40.906 04:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 
00:25:41.164 [ 00:25:41.164 { 00:25:41.164 "name": "NewBaseBdev", 00:25:41.164 "aliases": [ 00:25:41.164 "65efd9c7-4112-4a42-9fd1-a248bdfae58b" 00:25:41.164 ], 00:25:41.164 "product_name": "Malloc disk", 00:25:41.164 "block_size": 512, 00:25:41.164 "num_blocks": 65536, 00:25:41.164 "uuid": "65efd9c7-4112-4a42-9fd1-a248bdfae58b", 00:25:41.164 "assigned_rate_limits": { 00:25:41.164 "rw_ios_per_sec": 0, 00:25:41.164 "rw_mbytes_per_sec": 0, 00:25:41.164 "r_mbytes_per_sec": 0, 00:25:41.164 "w_mbytes_per_sec": 0 00:25:41.164 }, 00:25:41.164 "claimed": true, 00:25:41.164 "claim_type": "exclusive_write", 00:25:41.164 "zoned": false, 00:25:41.164 "supported_io_types": { 00:25:41.164 "read": true, 00:25:41.164 "write": true, 00:25:41.164 "unmap": true, 00:25:41.164 "flush": true, 00:25:41.164 "reset": true, 00:25:41.164 "nvme_admin": false, 00:25:41.164 "nvme_io": false, 00:25:41.164 "nvme_io_md": false, 00:25:41.164 "write_zeroes": true, 00:25:41.164 "zcopy": true, 00:25:41.164 "get_zone_info": false, 00:25:41.164 "zone_management": false, 00:25:41.164 "zone_append": false, 00:25:41.164 "compare": false, 00:25:41.164 "compare_and_write": false, 00:25:41.164 "abort": true, 00:25:41.164 "seek_hole": false, 00:25:41.164 "seek_data": false, 00:25:41.164 "copy": true, 00:25:41.164 "nvme_iov_md": false 00:25:41.164 }, 00:25:41.164 "memory_domains": [ 00:25:41.164 { 00:25:41.164 "dma_device_id": "system", 00:25:41.164 "dma_device_type": 1 00:25:41.164 }, 00:25:41.164 { 00:25:41.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:41.164 "dma_device_type": 2 00:25:41.164 } 00:25:41.164 ], 00:25:41.164 "driver_specific": {} 00:25:41.164 } 00:25:41.164 ] 00:25:41.164 04:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:25:41.164 04:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:25:41.164 04:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:25:41.164 04:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:41.164 04:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:41.164 04:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:41.164 04:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:41.164 04:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:41.164 04:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:41.164 04:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:41.164 04:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:41.164 04:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:41.164 04:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:41.423 04:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:41.423 "name": "Existed_Raid", 00:25:41.423 "uuid": "fa29cfcc-2437-4954-bc4a-52924a42707b", 00:25:41.423 "strip_size_kb": 0, 00:25:41.423 "state": "online", 00:25:41.423 "raid_level": "raid1", 00:25:41.423 "superblock": false, 00:25:41.423 "num_base_bdevs": 4, 00:25:41.423 "num_base_bdevs_discovered": 4, 00:25:41.423 "num_base_bdevs_operational": 4, 00:25:41.423 "base_bdevs_list": [ 00:25:41.423 { 00:25:41.423 "name": "NewBaseBdev", 00:25:41.423 "uuid": "65efd9c7-4112-4a42-9fd1-a248bdfae58b", 00:25:41.423 "is_configured": true, 00:25:41.423 "data_offset": 0, 00:25:41.423 "data_size": 65536 00:25:41.423 }, 00:25:41.423 { 00:25:41.423 
"name": "BaseBdev2", 00:25:41.423 "uuid": "22e7507a-b760-4773-aef2-c4681e696993", 00:25:41.423 "is_configured": true, 00:25:41.423 "data_offset": 0, 00:25:41.423 "data_size": 65536 00:25:41.423 }, 00:25:41.423 { 00:25:41.423 "name": "BaseBdev3", 00:25:41.423 "uuid": "cd382e66-3266-4903-a0dc-62879c11cb03", 00:25:41.423 "is_configured": true, 00:25:41.423 "data_offset": 0, 00:25:41.423 "data_size": 65536 00:25:41.423 }, 00:25:41.423 { 00:25:41.423 "name": "BaseBdev4", 00:25:41.423 "uuid": "942ecde8-ceac-4833-864f-039b6cba7148", 00:25:41.423 "is_configured": true, 00:25:41.423 "data_offset": 0, 00:25:41.423 "data_size": 65536 00:25:41.423 } 00:25:41.423 ] 00:25:41.423 }' 00:25:41.423 04:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:41.423 04:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:41.990 04:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:25:41.990 04:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:25:41.990 04:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:41.990 04:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:41.990 04:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:41.990 04:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:25:41.990 04:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:41.990 04:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:25:41.990 [2024-07-23 04:20:50.741536] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:41.990 04:20:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:41.990 "name": "Existed_Raid", 00:25:41.990 "aliases": [ 00:25:41.990 "fa29cfcc-2437-4954-bc4a-52924a42707b" 00:25:41.990 ], 00:25:41.990 "product_name": "Raid Volume", 00:25:41.990 "block_size": 512, 00:25:41.990 "num_blocks": 65536, 00:25:41.990 "uuid": "fa29cfcc-2437-4954-bc4a-52924a42707b", 00:25:41.990 "assigned_rate_limits": { 00:25:41.990 "rw_ios_per_sec": 0, 00:25:41.990 "rw_mbytes_per_sec": 0, 00:25:41.990 "r_mbytes_per_sec": 0, 00:25:41.990 "w_mbytes_per_sec": 0 00:25:41.990 }, 00:25:41.990 "claimed": false, 00:25:41.990 "zoned": false, 00:25:41.990 "supported_io_types": { 00:25:41.990 "read": true, 00:25:41.990 "write": true, 00:25:41.990 "unmap": false, 00:25:41.990 "flush": false, 00:25:41.990 "reset": true, 00:25:41.990 "nvme_admin": false, 00:25:41.990 "nvme_io": false, 00:25:41.990 "nvme_io_md": false, 00:25:41.990 "write_zeroes": true, 00:25:41.990 "zcopy": false, 00:25:41.990 "get_zone_info": false, 00:25:41.990 "zone_management": false, 00:25:41.990 "zone_append": false, 00:25:41.990 "compare": false, 00:25:41.990 "compare_and_write": false, 00:25:41.990 "abort": false, 00:25:41.990 "seek_hole": false, 00:25:41.990 "seek_data": false, 00:25:41.990 "copy": false, 00:25:41.990 "nvme_iov_md": false 00:25:41.990 }, 00:25:41.990 "memory_domains": [ 00:25:41.990 { 00:25:41.990 "dma_device_id": "system", 00:25:41.990 "dma_device_type": 1 00:25:41.990 }, 00:25:41.990 { 00:25:41.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:41.990 "dma_device_type": 2 00:25:41.990 }, 00:25:41.990 { 00:25:41.990 "dma_device_id": "system", 00:25:41.990 "dma_device_type": 1 00:25:41.990 }, 00:25:41.990 { 00:25:41.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:41.990 "dma_device_type": 2 00:25:41.990 }, 00:25:41.990 { 00:25:41.990 "dma_device_id": "system", 00:25:41.990 "dma_device_type": 1 00:25:41.990 }, 00:25:41.990 { 00:25:41.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:25:41.990 "dma_device_type": 2 00:25:41.990 }, 00:25:41.990 { 00:25:41.990 "dma_device_id": "system", 00:25:41.990 "dma_device_type": 1 00:25:41.990 }, 00:25:41.990 { 00:25:41.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:41.990 "dma_device_type": 2 00:25:41.990 } 00:25:41.990 ], 00:25:41.990 "driver_specific": { 00:25:41.990 "raid": { 00:25:41.990 "uuid": "fa29cfcc-2437-4954-bc4a-52924a42707b", 00:25:41.990 "strip_size_kb": 0, 00:25:41.990 "state": "online", 00:25:41.990 "raid_level": "raid1", 00:25:41.990 "superblock": false, 00:25:41.990 "num_base_bdevs": 4, 00:25:41.990 "num_base_bdevs_discovered": 4, 00:25:41.990 "num_base_bdevs_operational": 4, 00:25:41.990 "base_bdevs_list": [ 00:25:41.990 { 00:25:41.990 "name": "NewBaseBdev", 00:25:41.990 "uuid": "65efd9c7-4112-4a42-9fd1-a248bdfae58b", 00:25:41.990 "is_configured": true, 00:25:41.990 "data_offset": 0, 00:25:41.990 "data_size": 65536 00:25:41.990 }, 00:25:41.990 { 00:25:41.990 "name": "BaseBdev2", 00:25:41.990 "uuid": "22e7507a-b760-4773-aef2-c4681e696993", 00:25:41.990 "is_configured": true, 00:25:41.990 "data_offset": 0, 00:25:41.990 "data_size": 65536 00:25:41.990 }, 00:25:41.990 { 00:25:41.990 "name": "BaseBdev3", 00:25:41.990 "uuid": "cd382e66-3266-4903-a0dc-62879c11cb03", 00:25:41.990 "is_configured": true, 00:25:41.990 "data_offset": 0, 00:25:41.990 "data_size": 65536 00:25:41.990 }, 00:25:41.990 { 00:25:41.990 "name": "BaseBdev4", 00:25:41.990 "uuid": "942ecde8-ceac-4833-864f-039b6cba7148", 00:25:41.990 "is_configured": true, 00:25:41.990 "data_offset": 0, 00:25:41.990 "data_size": 65536 00:25:41.990 } 00:25:41.990 ] 00:25:41.990 } 00:25:41.990 } 00:25:41.990 }' 00:25:41.990 04:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:42.248 04:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:25:42.249 BaseBdev2 00:25:42.249 BaseBdev3 
00:25:42.249 BaseBdev4' 00:25:42.249 04:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:42.249 04:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:25:42.249 04:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:42.507 04:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:42.507 "name": "NewBaseBdev", 00:25:42.507 "aliases": [ 00:25:42.507 "65efd9c7-4112-4a42-9fd1-a248bdfae58b" 00:25:42.507 ], 00:25:42.507 "product_name": "Malloc disk", 00:25:42.507 "block_size": 512, 00:25:42.507 "num_blocks": 65536, 00:25:42.507 "uuid": "65efd9c7-4112-4a42-9fd1-a248bdfae58b", 00:25:42.507 "assigned_rate_limits": { 00:25:42.507 "rw_ios_per_sec": 0, 00:25:42.507 "rw_mbytes_per_sec": 0, 00:25:42.507 "r_mbytes_per_sec": 0, 00:25:42.507 "w_mbytes_per_sec": 0 00:25:42.507 }, 00:25:42.507 "claimed": true, 00:25:42.507 "claim_type": "exclusive_write", 00:25:42.507 "zoned": false, 00:25:42.507 "supported_io_types": { 00:25:42.507 "read": true, 00:25:42.507 "write": true, 00:25:42.507 "unmap": true, 00:25:42.507 "flush": true, 00:25:42.507 "reset": true, 00:25:42.507 "nvme_admin": false, 00:25:42.507 "nvme_io": false, 00:25:42.507 "nvme_io_md": false, 00:25:42.507 "write_zeroes": true, 00:25:42.507 "zcopy": true, 00:25:42.507 "get_zone_info": false, 00:25:42.507 "zone_management": false, 00:25:42.507 "zone_append": false, 00:25:42.507 "compare": false, 00:25:42.507 "compare_and_write": false, 00:25:42.507 "abort": true, 00:25:42.507 "seek_hole": false, 00:25:42.507 "seek_data": false, 00:25:42.507 "copy": true, 00:25:42.507 "nvme_iov_md": false 00:25:42.507 }, 00:25:42.507 "memory_domains": [ 00:25:42.507 { 00:25:42.507 "dma_device_id": "system", 00:25:42.507 "dma_device_type": 1 00:25:42.507 }, 00:25:42.507 { 
00:25:42.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:42.507 "dma_device_type": 2 00:25:42.507 } 00:25:42.507 ], 00:25:42.507 "driver_specific": {} 00:25:42.507 }' 00:25:42.507 04:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:42.507 04:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:42.507 04:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:42.507 04:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:42.507 04:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:42.507 04:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:42.507 04:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:42.507 04:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:42.507 04:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:42.507 04:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:42.765 04:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:42.765 04:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:42.765 04:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:42.765 04:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:25:42.765 04:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:43.023 04:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:43.023 "name": "BaseBdev2", 00:25:43.023 "aliases": [ 00:25:43.023 
"22e7507a-b760-4773-aef2-c4681e696993" 00:25:43.023 ], 00:25:43.023 "product_name": "Malloc disk", 00:25:43.023 "block_size": 512, 00:25:43.023 "num_blocks": 65536, 00:25:43.023 "uuid": "22e7507a-b760-4773-aef2-c4681e696993", 00:25:43.023 "assigned_rate_limits": { 00:25:43.024 "rw_ios_per_sec": 0, 00:25:43.024 "rw_mbytes_per_sec": 0, 00:25:43.024 "r_mbytes_per_sec": 0, 00:25:43.024 "w_mbytes_per_sec": 0 00:25:43.024 }, 00:25:43.024 "claimed": true, 00:25:43.024 "claim_type": "exclusive_write", 00:25:43.024 "zoned": false, 00:25:43.024 "supported_io_types": { 00:25:43.024 "read": true, 00:25:43.024 "write": true, 00:25:43.024 "unmap": true, 00:25:43.024 "flush": true, 00:25:43.024 "reset": true, 00:25:43.024 "nvme_admin": false, 00:25:43.024 "nvme_io": false, 00:25:43.024 "nvme_io_md": false, 00:25:43.024 "write_zeroes": true, 00:25:43.024 "zcopy": true, 00:25:43.024 "get_zone_info": false, 00:25:43.024 "zone_management": false, 00:25:43.024 "zone_append": false, 00:25:43.024 "compare": false, 00:25:43.024 "compare_and_write": false, 00:25:43.024 "abort": true, 00:25:43.024 "seek_hole": false, 00:25:43.024 "seek_data": false, 00:25:43.024 "copy": true, 00:25:43.024 "nvme_iov_md": false 00:25:43.024 }, 00:25:43.024 "memory_domains": [ 00:25:43.024 { 00:25:43.024 "dma_device_id": "system", 00:25:43.024 "dma_device_type": 1 00:25:43.024 }, 00:25:43.024 { 00:25:43.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:43.024 "dma_device_type": 2 00:25:43.024 } 00:25:43.024 ], 00:25:43.024 "driver_specific": {} 00:25:43.024 }' 00:25:43.024 04:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:43.024 04:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:43.024 04:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:43.024 04:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:43.024 04:20:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:43.024 04:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:43.024 04:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:43.024 04:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:43.282 04:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:43.282 04:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:43.282 04:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:43.282 04:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:43.282 04:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:43.282 04:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:25:43.282 04:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:43.540 04:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:43.540 "name": "BaseBdev3", 00:25:43.540 "aliases": [ 00:25:43.540 "cd382e66-3266-4903-a0dc-62879c11cb03" 00:25:43.540 ], 00:25:43.540 "product_name": "Malloc disk", 00:25:43.540 "block_size": 512, 00:25:43.540 "num_blocks": 65536, 00:25:43.540 "uuid": "cd382e66-3266-4903-a0dc-62879c11cb03", 00:25:43.540 "assigned_rate_limits": { 00:25:43.540 "rw_ios_per_sec": 0, 00:25:43.540 "rw_mbytes_per_sec": 0, 00:25:43.540 "r_mbytes_per_sec": 0, 00:25:43.540 "w_mbytes_per_sec": 0 00:25:43.540 }, 00:25:43.540 "claimed": true, 00:25:43.540 "claim_type": "exclusive_write", 00:25:43.540 "zoned": false, 00:25:43.540 "supported_io_types": { 00:25:43.540 "read": true, 
00:25:43.540 "write": true, 00:25:43.540 "unmap": true, 00:25:43.540 "flush": true, 00:25:43.540 "reset": true, 00:25:43.540 "nvme_admin": false, 00:25:43.540 "nvme_io": false, 00:25:43.540 "nvme_io_md": false, 00:25:43.540 "write_zeroes": true, 00:25:43.540 "zcopy": true, 00:25:43.540 "get_zone_info": false, 00:25:43.540 "zone_management": false, 00:25:43.540 "zone_append": false, 00:25:43.540 "compare": false, 00:25:43.540 "compare_and_write": false, 00:25:43.540 "abort": true, 00:25:43.540 "seek_hole": false, 00:25:43.540 "seek_data": false, 00:25:43.540 "copy": true, 00:25:43.540 "nvme_iov_md": false 00:25:43.540 }, 00:25:43.540 "memory_domains": [ 00:25:43.540 { 00:25:43.540 "dma_device_id": "system", 00:25:43.540 "dma_device_type": 1 00:25:43.540 }, 00:25:43.540 { 00:25:43.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:43.540 "dma_device_type": 2 00:25:43.540 } 00:25:43.540 ], 00:25:43.540 "driver_specific": {} 00:25:43.540 }' 00:25:43.540 04:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:43.540 04:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:43.540 04:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:43.540 04:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:43.540 04:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:43.540 04:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:43.540 04:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:43.799 04:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:43.799 04:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:43.799 04:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:43.799 
04:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:43.799 04:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:43.799 04:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:43.799 04:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:25:43.799 04:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:44.057 04:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:44.057 "name": "BaseBdev4", 00:25:44.057 "aliases": [ 00:25:44.057 "942ecde8-ceac-4833-864f-039b6cba7148" 00:25:44.057 ], 00:25:44.057 "product_name": "Malloc disk", 00:25:44.057 "block_size": 512, 00:25:44.057 "num_blocks": 65536, 00:25:44.057 "uuid": "942ecde8-ceac-4833-864f-039b6cba7148", 00:25:44.057 "assigned_rate_limits": { 00:25:44.057 "rw_ios_per_sec": 0, 00:25:44.057 "rw_mbytes_per_sec": 0, 00:25:44.057 "r_mbytes_per_sec": 0, 00:25:44.057 "w_mbytes_per_sec": 0 00:25:44.057 }, 00:25:44.057 "claimed": true, 00:25:44.057 "claim_type": "exclusive_write", 00:25:44.057 "zoned": false, 00:25:44.057 "supported_io_types": { 00:25:44.057 "read": true, 00:25:44.057 "write": true, 00:25:44.057 "unmap": true, 00:25:44.057 "flush": true, 00:25:44.057 "reset": true, 00:25:44.057 "nvme_admin": false, 00:25:44.057 "nvme_io": false, 00:25:44.057 "nvme_io_md": false, 00:25:44.057 "write_zeroes": true, 00:25:44.057 "zcopy": true, 00:25:44.057 "get_zone_info": false, 00:25:44.057 "zone_management": false, 00:25:44.057 "zone_append": false, 00:25:44.057 "compare": false, 00:25:44.057 "compare_and_write": false, 00:25:44.057 "abort": true, 00:25:44.057 "seek_hole": false, 00:25:44.057 "seek_data": false, 00:25:44.057 "copy": true, 00:25:44.057 "nvme_iov_md": false 
00:25:44.057 }, 00:25:44.057 "memory_domains": [ 00:25:44.057 { 00:25:44.057 "dma_device_id": "system", 00:25:44.057 "dma_device_type": 1 00:25:44.057 }, 00:25:44.057 { 00:25:44.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:44.057 "dma_device_type": 2 00:25:44.057 } 00:25:44.057 ], 00:25:44.057 "driver_specific": {} 00:25:44.057 }' 00:25:44.057 04:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:44.057 04:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:44.057 04:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:44.057 04:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:44.057 04:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:44.316 04:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:44.316 04:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:44.316 04:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:44.316 04:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:44.316 04:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:44.316 04:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:44.316 04:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:44.316 04:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:44.574 [2024-07-23 04:20:53.235837] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:44.574 [2024-07-23 04:20:53.235872] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:25:44.574 [2024-07-23 04:20:53.235969] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:44.574 [2024-07-23 04:20:53.236326] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:44.574 [2024-07-23 04:20:53.236349] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042080 name Existed_Raid, state offline 00:25:44.574 04:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 2738133 00:25:44.574 04:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 2738133 ']' 00:25:44.574 04:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 2738133 00:25:44.574 04:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:25:44.574 04:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:44.574 04:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2738133 00:25:44.574 04:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:44.575 04:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:44.575 04:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2738133' 00:25:44.575 killing process with pid 2738133 00:25:44.575 04:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 2738133 00:25:44.575 [2024-07-23 04:20:53.311485] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:44.575 04:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 2738133 00:25:45.140 [2024-07-23 04:20:53.789734] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:47.042 
04:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:25:47.042 00:25:47.042 real 0m34.604s 00:25:47.042 user 1m0.658s 00:25:47.042 sys 0m5.868s 00:25:47.042 04:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:47.042 04:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:47.042 ************************************ 00:25:47.042 END TEST raid_state_function_test 00:25:47.042 ************************************ 00:25:47.042 04:20:55 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:25:47.042 04:20:55 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:25:47.042 04:20:55 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:25:47.042 04:20:55 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:47.042 04:20:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:47.042 ************************************ 00:25:47.042 START TEST raid_state_function_test_sb 00:25:47.042 ************************************ 00:25:47.042 04:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 4 true 00:25:47.042 04:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:25:47.042 04:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:25:47.042 04:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:25:47.042 04:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:25:47.042 04:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:25:47.042 04:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:47.042 04:20:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:25:47.042 04:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:47.042 04:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:47.042 04:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:25:47.042 04:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:47.042 04:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:47.042 04:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:25:47.042 04:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:47.042 04:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:47.042 04:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:25:47.042 04:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:47.042 04:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:47.042 04:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:47.042 04:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:25:47.042 04:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:25:47.042 04:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:25:47.042 04:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:25:47.042 04:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:25:47.042 
04:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:25:47.042 04:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:25:47.042 04:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:25:47.042 04:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:25:47.042 04:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2744404 00:25:47.042 04:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2744404' 00:25:47.042 Process raid pid: 2744404 00:25:47.042 04:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:25:47.042 04:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2744404 /var/tmp/spdk-raid.sock 00:25:47.042 04:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 2744404 ']' 00:25:47.042 04:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:47.042 04:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:47.042 04:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:47.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:25:47.042 04:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:47.042 04:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:47.042 [2024-07-23 04:20:55.686409] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:25:47.042 [2024-07-23 04:20:55.686524] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:47.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.042 EAL: Requested device 0000:3d:01.0 cannot be used 00:25:47.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.042 EAL: Requested device 0000:3d:01.1 cannot be used 00:25:47.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.042 EAL: Requested device 0000:3d:01.2 cannot be used 00:25:47.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.042 EAL: Requested device 0000:3d:01.3 cannot be used 00:25:47.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.043 EAL: Requested device 0000:3d:01.4 cannot be used 00:25:47.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.043 EAL: Requested device 0000:3d:01.5 cannot be used 00:25:47.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.043 EAL: Requested device 0000:3d:01.6 cannot be used 00:25:47.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.043 EAL: Requested device 0000:3d:01.7 cannot be used 00:25:47.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.043 EAL: Requested device 0000:3d:02.0 cannot be used 00:25:47.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.043 EAL: Requested device 
0000:3d:02.1 cannot be used 00:25:47.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.043 EAL: Requested device 0000:3d:02.2 cannot be used 00:25:47.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.043 EAL: Requested device 0000:3d:02.3 cannot be used 00:25:47.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.043 EAL: Requested device 0000:3d:02.4 cannot be used 00:25:47.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.043 EAL: Requested device 0000:3d:02.5 cannot be used 00:25:47.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.043 EAL: Requested device 0000:3d:02.6 cannot be used 00:25:47.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.043 EAL: Requested device 0000:3d:02.7 cannot be used 00:25:47.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.043 EAL: Requested device 0000:3f:01.0 cannot be used 00:25:47.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.043 EAL: Requested device 0000:3f:01.1 cannot be used 00:25:47.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.043 EAL: Requested device 0000:3f:01.2 cannot be used 00:25:47.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.043 EAL: Requested device 0000:3f:01.3 cannot be used 00:25:47.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.043 EAL: Requested device 0000:3f:01.4 cannot be used 00:25:47.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.043 EAL: Requested device 0000:3f:01.5 cannot be used 00:25:47.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.043 EAL: Requested device 0000:3f:01.6 cannot be used 00:25:47.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.043 EAL: Requested device 0000:3f:01.7 cannot be used
00:25:47.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.043 EAL: Requested device 0000:3f:02.0 cannot be used 00:25:47.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.043 EAL: Requested device 0000:3f:02.1 cannot be used 00:25:47.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.043 EAL: Requested device 0000:3f:02.2 cannot be used 00:25:47.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.043 EAL: Requested device 0000:3f:02.3 cannot be used 00:25:47.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.043 EAL: Requested device 0000:3f:02.4 cannot be used 00:25:47.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.043 EAL: Requested device 0000:3f:02.5 cannot be used 00:25:47.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.043 EAL: Requested device 0000:3f:02.6 cannot be used 00:25:47.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.043 EAL: Requested device 0000:3f:02.7 cannot be used 00:25:47.302 [2024-07-23 04:20:55.902033] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:47.561 [2024-07-23 04:20:56.195485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:47.820 [2024-07-23 04:20:56.542916] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:47.820 [2024-07-23 04:20:56.542952] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:48.086 04:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:48.086 04:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:25:48.086 04:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 
-b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:48.364 [2024-07-23 04:20:56.940063] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:48.364 [2024-07-23 04:20:56.940119] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:48.364 [2024-07-23 04:20:56.940134] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:48.364 [2024-07-23 04:20:56.940159] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:48.364 [2024-07-23 04:20:56.940170] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:48.364 [2024-07-23 04:20:56.940187] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:48.364 [2024-07-23 04:20:56.940198] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:48.364 [2024-07-23 04:20:56.940214] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:48.364 04:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:48.364 04:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:48.364 04:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:48.364 04:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:48.364 04:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:48.364 04:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:48.364 04:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:48.364 04:20:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:48.364 04:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:48.364 04:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:48.364 04:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:48.364 04:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:48.622 04:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:48.622 "name": "Existed_Raid", 00:25:48.622 "uuid": "c43fa44f-dc46-48ec-a608-4ce0b3528864", 00:25:48.622 "strip_size_kb": 0, 00:25:48.622 "state": "configuring", 00:25:48.622 "raid_level": "raid1", 00:25:48.622 "superblock": true, 00:25:48.622 "num_base_bdevs": 4, 00:25:48.622 "num_base_bdevs_discovered": 0, 00:25:48.622 "num_base_bdevs_operational": 4, 00:25:48.622 "base_bdevs_list": [ 00:25:48.622 { 00:25:48.622 "name": "BaseBdev1", 00:25:48.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:48.622 "is_configured": false, 00:25:48.622 "data_offset": 0, 00:25:48.622 "data_size": 0 00:25:48.622 }, 00:25:48.622 { 00:25:48.622 "name": "BaseBdev2", 00:25:48.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:48.622 "is_configured": false, 00:25:48.622 "data_offset": 0, 00:25:48.622 "data_size": 0 00:25:48.622 }, 00:25:48.622 { 00:25:48.622 "name": "BaseBdev3", 00:25:48.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:48.622 "is_configured": false, 00:25:48.622 "data_offset": 0, 00:25:48.622 "data_size": 0 00:25:48.622 }, 00:25:48.622 { 00:25:48.622 "name": "BaseBdev4", 00:25:48.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:48.622 "is_configured": false, 00:25:48.622 "data_offset": 0, 
00:25:48.622 "data_size": 0 00:25:48.622 } 00:25:48.622 ] 00:25:48.622 }' 00:25:48.622 04:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:48.622 04:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:49.188 04:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:49.188 [2024-07-23 04:20:57.958755] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:49.188 [2024-07-23 04:20:57.958798] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:25:49.447 04:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:49.447 [2024-07-23 04:20:58.135309] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:49.447 [2024-07-23 04:20:58.135355] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:49.447 [2024-07-23 04:20:58.135369] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:49.447 [2024-07-23 04:20:58.135392] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:49.447 [2024-07-23 04:20:58.135404] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:49.447 [2024-07-23 04:20:58.135420] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:49.447 [2024-07-23 04:20:58.135431] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:49.447 [2024-07-23 04:20:58.135446] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:49.447 04:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:49.706 [2024-07-23 04:20:58.422011] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:49.706 BaseBdev1 00:25:49.706 04:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:25:49.706 04:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:25:49.706 04:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:49.706 04:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:25:49.706 04:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:49.706 04:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:49.706 04:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:49.965 04:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:50.224 [ 00:25:50.224 { 00:25:50.224 "name": "BaseBdev1", 00:25:50.224 "aliases": [ 00:25:50.224 "01656b5c-ab12-45b4-9928-ef19f1aaf89c" 00:25:50.224 ], 00:25:50.224 "product_name": "Malloc disk", 00:25:50.224 "block_size": 512, 00:25:50.224 "num_blocks": 65536, 00:25:50.224 "uuid": "01656b5c-ab12-45b4-9928-ef19f1aaf89c", 00:25:50.224 "assigned_rate_limits": { 00:25:50.224 "rw_ios_per_sec": 0, 00:25:50.224 
"rw_mbytes_per_sec": 0, 00:25:50.224 "r_mbytes_per_sec": 0, 00:25:50.224 "w_mbytes_per_sec": 0 00:25:50.224 }, 00:25:50.224 "claimed": true, 00:25:50.224 "claim_type": "exclusive_write", 00:25:50.224 "zoned": false, 00:25:50.224 "supported_io_types": { 00:25:50.224 "read": true, 00:25:50.224 "write": true, 00:25:50.224 "unmap": true, 00:25:50.224 "flush": true, 00:25:50.224 "reset": true, 00:25:50.224 "nvme_admin": false, 00:25:50.224 "nvme_io": false, 00:25:50.224 "nvme_io_md": false, 00:25:50.224 "write_zeroes": true, 00:25:50.224 "zcopy": true, 00:25:50.224 "get_zone_info": false, 00:25:50.224 "zone_management": false, 00:25:50.224 "zone_append": false, 00:25:50.224 "compare": false, 00:25:50.224 "compare_and_write": false, 00:25:50.224 "abort": true, 00:25:50.224 "seek_hole": false, 00:25:50.224 "seek_data": false, 00:25:50.224 "copy": true, 00:25:50.224 "nvme_iov_md": false 00:25:50.224 }, 00:25:50.224 "memory_domains": [ 00:25:50.224 { 00:25:50.224 "dma_device_id": "system", 00:25:50.224 "dma_device_type": 1 00:25:50.224 }, 00:25:50.224 { 00:25:50.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:50.224 "dma_device_type": 2 00:25:50.224 } 00:25:50.224 ], 00:25:50.224 "driver_specific": {} 00:25:50.224 } 00:25:50.224 ] 00:25:50.224 04:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:25:50.224 04:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:50.224 04:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:50.224 04:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:50.224 04:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:50.224 04:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:50.224 04:20:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:50.224 04:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:50.224 04:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:50.224 04:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:50.224 04:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:50.224 04:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:50.224 04:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:50.484 04:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:50.484 "name": "Existed_Raid", 00:25:50.484 "uuid": "3f384b36-8b78-4993-8ba9-9e6542d2020d", 00:25:50.484 "strip_size_kb": 0, 00:25:50.484 "state": "configuring", 00:25:50.484 "raid_level": "raid1", 00:25:50.484 "superblock": true, 00:25:50.484 "num_base_bdevs": 4, 00:25:50.484 "num_base_bdevs_discovered": 1, 00:25:50.484 "num_base_bdevs_operational": 4, 00:25:50.484 "base_bdevs_list": [ 00:25:50.484 { 00:25:50.484 "name": "BaseBdev1", 00:25:50.484 "uuid": "01656b5c-ab12-45b4-9928-ef19f1aaf89c", 00:25:50.484 "is_configured": true, 00:25:50.484 "data_offset": 2048, 00:25:50.484 "data_size": 63488 00:25:50.484 }, 00:25:50.484 { 00:25:50.484 "name": "BaseBdev2", 00:25:50.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:50.484 "is_configured": false, 00:25:50.484 "data_offset": 0, 00:25:50.484 "data_size": 0 00:25:50.484 }, 00:25:50.484 { 00:25:50.484 "name": "BaseBdev3", 00:25:50.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:50.484 "is_configured": false, 
00:25:50.484 "data_offset": 0, 00:25:50.484 "data_size": 0 00:25:50.484 }, 00:25:50.484 { 00:25:50.484 "name": "BaseBdev4", 00:25:50.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:50.484 "is_configured": false, 00:25:50.484 "data_offset": 0, 00:25:50.484 "data_size": 0 00:25:50.484 } 00:25:50.484 ] 00:25:50.484 }' 00:25:50.484 04:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:50.484 04:20:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:51.052 04:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:51.052 [2024-07-23 04:20:59.817843] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:51.052 [2024-07-23 04:20:59.817900] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:25:51.052 04:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:51.311 [2024-07-23 04:21:00.050569] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:51.311 [2024-07-23 04:21:00.052880] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:51.311 [2024-07-23 04:21:00.052925] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:51.311 [2024-07-23 04:21:00.052939] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:51.311 [2024-07-23 04:21:00.052955] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:51.311 [2024-07-23 04:21:00.052967] 
bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:51.311 [2024-07-23 04:21:00.052986] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:51.311 04:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:25:51.311 04:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:51.311 04:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:51.311 04:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:51.311 04:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:51.311 04:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:51.311 04:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:51.311 04:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:51.311 04:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:51.311 04:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:51.311 04:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:51.311 04:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:51.312 04:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:51.312 04:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:51.571 04:21:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:51.571 "name": "Existed_Raid", 00:25:51.571 "uuid": "dea455fa-c943-4246-a25b-4c4a8051c4d2", 00:25:51.571 "strip_size_kb": 0, 00:25:51.571 "state": "configuring", 00:25:51.571 "raid_level": "raid1", 00:25:51.571 "superblock": true, 00:25:51.571 "num_base_bdevs": 4, 00:25:51.571 "num_base_bdevs_discovered": 1, 00:25:51.571 "num_base_bdevs_operational": 4, 00:25:51.571 "base_bdevs_list": [ 00:25:51.571 { 00:25:51.571 "name": "BaseBdev1", 00:25:51.571 "uuid": "01656b5c-ab12-45b4-9928-ef19f1aaf89c", 00:25:51.571 "is_configured": true, 00:25:51.571 "data_offset": 2048, 00:25:51.571 "data_size": 63488 00:25:51.571 }, 00:25:51.571 { 00:25:51.571 "name": "BaseBdev2", 00:25:51.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:51.571 "is_configured": false, 00:25:51.571 "data_offset": 0, 00:25:51.571 "data_size": 0 00:25:51.571 }, 00:25:51.571 { 00:25:51.571 "name": "BaseBdev3", 00:25:51.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:51.571 "is_configured": false, 00:25:51.571 "data_offset": 0, 00:25:51.571 "data_size": 0 00:25:51.571 }, 00:25:51.571 { 00:25:51.571 "name": "BaseBdev4", 00:25:51.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:51.571 "is_configured": false, 00:25:51.571 "data_offset": 0, 00:25:51.571 "data_size": 0 00:25:51.571 } 00:25:51.571 ] 00:25:51.571 }' 00:25:51.571 04:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:51.571 04:21:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:52.139 04:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:52.399 [2024-07-23 04:21:01.074756] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:52.399 BaseBdev2 00:25:52.399 
04:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:25:52.399 04:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:25:52.399 04:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:52.399 04:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:25:52.399 04:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:52.399 04:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:52.399 04:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:52.658 04:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:52.917 [ 00:25:52.917 { 00:25:52.917 "name": "BaseBdev2", 00:25:52.917 "aliases": [ 00:25:52.917 "d474be1a-a8a3-4a40-88c2-2d319be36978" 00:25:52.917 ], 00:25:52.917 "product_name": "Malloc disk", 00:25:52.917 "block_size": 512, 00:25:52.917 "num_blocks": 65536, 00:25:52.917 "uuid": "d474be1a-a8a3-4a40-88c2-2d319be36978", 00:25:52.917 "assigned_rate_limits": { 00:25:52.917 "rw_ios_per_sec": 0, 00:25:52.917 "rw_mbytes_per_sec": 0, 00:25:52.917 "r_mbytes_per_sec": 0, 00:25:52.917 "w_mbytes_per_sec": 0 00:25:52.917 }, 00:25:52.917 "claimed": true, 00:25:52.917 "claim_type": "exclusive_write", 00:25:52.917 "zoned": false, 00:25:52.917 "supported_io_types": { 00:25:52.917 "read": true, 00:25:52.917 "write": true, 00:25:52.917 "unmap": true, 00:25:52.917 "flush": true, 00:25:52.917 "reset": true, 00:25:52.917 "nvme_admin": false, 00:25:52.917 "nvme_io": false, 00:25:52.917 
"nvme_io_md": false, 00:25:52.917 "write_zeroes": true, 00:25:52.917 "zcopy": true, 00:25:52.917 "get_zone_info": false, 00:25:52.917 "zone_management": false, 00:25:52.917 "zone_append": false, 00:25:52.917 "compare": false, 00:25:52.917 "compare_and_write": false, 00:25:52.917 "abort": true, 00:25:52.917 "seek_hole": false, 00:25:52.917 "seek_data": false, 00:25:52.917 "copy": true, 00:25:52.917 "nvme_iov_md": false 00:25:52.917 }, 00:25:52.917 "memory_domains": [ 00:25:52.917 { 00:25:52.917 "dma_device_id": "system", 00:25:52.917 "dma_device_type": 1 00:25:52.917 }, 00:25:52.917 { 00:25:52.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:52.917 "dma_device_type": 2 00:25:52.917 } 00:25:52.917 ], 00:25:52.917 "driver_specific": {} 00:25:52.917 } 00:25:52.917 ] 00:25:52.917 04:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:25:52.917 04:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:52.917 04:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:52.917 04:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:52.917 04:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:52.918 04:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:52.918 04:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:52.918 04:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:52.918 04:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:52.918 04:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:52.918 04:21:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:52.918 04:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:52.918 04:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:52.918 04:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:52.918 04:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:53.177 04:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:53.177 "name": "Existed_Raid", 00:25:53.177 "uuid": "dea455fa-c943-4246-a25b-4c4a8051c4d2", 00:25:53.177 "strip_size_kb": 0, 00:25:53.177 "state": "configuring", 00:25:53.177 "raid_level": "raid1", 00:25:53.177 "superblock": true, 00:25:53.177 "num_base_bdevs": 4, 00:25:53.177 "num_base_bdevs_discovered": 2, 00:25:53.177 "num_base_bdevs_operational": 4, 00:25:53.177 "base_bdevs_list": [ 00:25:53.177 { 00:25:53.177 "name": "BaseBdev1", 00:25:53.177 "uuid": "01656b5c-ab12-45b4-9928-ef19f1aaf89c", 00:25:53.177 "is_configured": true, 00:25:53.177 "data_offset": 2048, 00:25:53.177 "data_size": 63488 00:25:53.177 }, 00:25:53.177 { 00:25:53.177 "name": "BaseBdev2", 00:25:53.177 "uuid": "d474be1a-a8a3-4a40-88c2-2d319be36978", 00:25:53.177 "is_configured": true, 00:25:53.177 "data_offset": 2048, 00:25:53.177 "data_size": 63488 00:25:53.177 }, 00:25:53.177 { 00:25:53.177 "name": "BaseBdev3", 00:25:53.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:53.177 "is_configured": false, 00:25:53.177 "data_offset": 0, 00:25:53.177 "data_size": 0 00:25:53.177 }, 00:25:53.177 { 00:25:53.177 "name": "BaseBdev4", 00:25:53.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:53.177 "is_configured": false, 00:25:53.177 
"data_offset": 0, 00:25:53.177 "data_size": 0 00:25:53.177 } 00:25:53.177 ] 00:25:53.177 }' 00:25:53.177 04:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:53.177 04:21:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:53.746 04:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:54.005 [2024-07-23 04:21:02.609953] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:54.005 BaseBdev3 00:25:54.005 04:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:25:54.005 04:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:25:54.005 04:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:54.005 04:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:25:54.005 04:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:54.005 04:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:54.005 04:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:54.264 04:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:54.523 [ 00:25:54.523 { 00:25:54.523 "name": "BaseBdev3", 00:25:54.523 "aliases": [ 00:25:54.523 "7ca3d8ad-9489-4102-a4b0-7d95a3c451ff" 00:25:54.523 ], 00:25:54.523 "product_name": "Malloc disk", 
00:25:54.523 "block_size": 512, 00:25:54.523 "num_blocks": 65536, 00:25:54.523 "uuid": "7ca3d8ad-9489-4102-a4b0-7d95a3c451ff", 00:25:54.523 "assigned_rate_limits": { 00:25:54.523 "rw_ios_per_sec": 0, 00:25:54.523 "rw_mbytes_per_sec": 0, 00:25:54.523 "r_mbytes_per_sec": 0, 00:25:54.523 "w_mbytes_per_sec": 0 00:25:54.523 }, 00:25:54.523 "claimed": true, 00:25:54.523 "claim_type": "exclusive_write", 00:25:54.523 "zoned": false, 00:25:54.523 "supported_io_types": { 00:25:54.523 "read": true, 00:25:54.523 "write": true, 00:25:54.523 "unmap": true, 00:25:54.523 "flush": true, 00:25:54.523 "reset": true, 00:25:54.523 "nvme_admin": false, 00:25:54.523 "nvme_io": false, 00:25:54.523 "nvme_io_md": false, 00:25:54.523 "write_zeroes": true, 00:25:54.523 "zcopy": true, 00:25:54.523 "get_zone_info": false, 00:25:54.523 "zone_management": false, 00:25:54.523 "zone_append": false, 00:25:54.523 "compare": false, 00:25:54.523 "compare_and_write": false, 00:25:54.523 "abort": true, 00:25:54.523 "seek_hole": false, 00:25:54.523 "seek_data": false, 00:25:54.523 "copy": true, 00:25:54.523 "nvme_iov_md": false 00:25:54.523 }, 00:25:54.523 "memory_domains": [ 00:25:54.523 { 00:25:54.523 "dma_device_id": "system", 00:25:54.523 "dma_device_type": 1 00:25:54.523 }, 00:25:54.523 { 00:25:54.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:54.523 "dma_device_type": 2 00:25:54.523 } 00:25:54.523 ], 00:25:54.523 "driver_specific": {} 00:25:54.523 } 00:25:54.523 ] 00:25:54.523 04:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:25:54.523 04:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:54.523 04:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:54.523 04:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:54.523 04:21:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:54.523 04:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:54.523 04:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:54.523 04:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:54.523 04:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:54.523 04:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:54.523 04:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:54.523 04:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:54.523 04:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:54.523 04:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:54.523 04:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:54.523 04:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:54.523 "name": "Existed_Raid", 00:25:54.523 "uuid": "dea455fa-c943-4246-a25b-4c4a8051c4d2", 00:25:54.523 "strip_size_kb": 0, 00:25:54.523 "state": "configuring", 00:25:54.523 "raid_level": "raid1", 00:25:54.523 "superblock": true, 00:25:54.523 "num_base_bdevs": 4, 00:25:54.523 "num_base_bdevs_discovered": 3, 00:25:54.523 "num_base_bdevs_operational": 4, 00:25:54.523 "base_bdevs_list": [ 00:25:54.523 { 00:25:54.523 "name": "BaseBdev1", 00:25:54.523 "uuid": "01656b5c-ab12-45b4-9928-ef19f1aaf89c", 00:25:54.523 "is_configured": true, 00:25:54.523 "data_offset": 2048, 
00:25:54.523 "data_size": 63488 00:25:54.523 }, 00:25:54.523 { 00:25:54.523 "name": "BaseBdev2", 00:25:54.523 "uuid": "d474be1a-a8a3-4a40-88c2-2d319be36978", 00:25:54.523 "is_configured": true, 00:25:54.523 "data_offset": 2048, 00:25:54.523 "data_size": 63488 00:25:54.523 }, 00:25:54.523 { 00:25:54.523 "name": "BaseBdev3", 00:25:54.523 "uuid": "7ca3d8ad-9489-4102-a4b0-7d95a3c451ff", 00:25:54.523 "is_configured": true, 00:25:54.523 "data_offset": 2048, 00:25:54.523 "data_size": 63488 00:25:54.523 }, 00:25:54.523 { 00:25:54.523 "name": "BaseBdev4", 00:25:54.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:54.523 "is_configured": false, 00:25:54.523 "data_offset": 0, 00:25:54.523 "data_size": 0 00:25:54.523 } 00:25:54.523 ] 00:25:54.523 }' 00:25:54.523 04:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:54.523 04:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:55.461 04:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:55.461 [2024-07-23 04:21:04.133904] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:55.461 [2024-07-23 04:21:04.134197] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:25:55.461 [2024-07-23 04:21:04.134222] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:55.461 [2024-07-23 04:21:04.134548] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:25:55.461 [2024-07-23 04:21:04.134796] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:25:55.461 [2024-07-23 04:21:04.134815] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:25:55.461 BaseBdev4 
00:25:55.461 [2024-07-23 04:21:04.134996] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:55.461 04:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:25:55.461 04:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:25:55.461 04:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:55.461 04:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:25:55.461 04:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:55.461 04:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:55.461 04:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:55.721 04:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:55.980 [ 00:25:55.980 { 00:25:55.980 "name": "BaseBdev4", 00:25:55.980 "aliases": [ 00:25:55.980 "ab75695b-5830-4c14-bc1a-47f4bb66427b" 00:25:55.980 ], 00:25:55.980 "product_name": "Malloc disk", 00:25:55.980 "block_size": 512, 00:25:55.980 "num_blocks": 65536, 00:25:55.980 "uuid": "ab75695b-5830-4c14-bc1a-47f4bb66427b", 00:25:55.980 "assigned_rate_limits": { 00:25:55.980 "rw_ios_per_sec": 0, 00:25:55.980 "rw_mbytes_per_sec": 0, 00:25:55.980 "r_mbytes_per_sec": 0, 00:25:55.980 "w_mbytes_per_sec": 0 00:25:55.980 }, 00:25:55.980 "claimed": true, 00:25:55.980 "claim_type": "exclusive_write", 00:25:55.980 "zoned": false, 00:25:55.980 "supported_io_types": { 00:25:55.980 "read": true, 00:25:55.980 "write": true, 00:25:55.980 "unmap": true, 00:25:55.980 
"flush": true, 00:25:55.980 "reset": true, 00:25:55.980 "nvme_admin": false, 00:25:55.980 "nvme_io": false, 00:25:55.980 "nvme_io_md": false, 00:25:55.980 "write_zeroes": true, 00:25:55.980 "zcopy": true, 00:25:55.980 "get_zone_info": false, 00:25:55.980 "zone_management": false, 00:25:55.980 "zone_append": false, 00:25:55.980 "compare": false, 00:25:55.980 "compare_and_write": false, 00:25:55.980 "abort": true, 00:25:55.980 "seek_hole": false, 00:25:55.980 "seek_data": false, 00:25:55.980 "copy": true, 00:25:55.980 "nvme_iov_md": false 00:25:55.980 }, 00:25:55.980 "memory_domains": [ 00:25:55.980 { 00:25:55.980 "dma_device_id": "system", 00:25:55.980 "dma_device_type": 1 00:25:55.980 }, 00:25:55.980 { 00:25:55.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:55.980 "dma_device_type": 2 00:25:55.980 } 00:25:55.980 ], 00:25:55.980 "driver_specific": {} 00:25:55.980 } 00:25:55.980 ] 00:25:55.980 04:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:25:55.980 04:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:55.980 04:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:55.980 04:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:25:55.980 04:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:55.980 04:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:55.980 04:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:55.980 04:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:55.980 04:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:55.980 04:21:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:55.980 04:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:55.980 04:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:55.980 04:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:55.980 04:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:55.980 04:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:56.240 04:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:56.240 "name": "Existed_Raid", 00:25:56.240 "uuid": "dea455fa-c943-4246-a25b-4c4a8051c4d2", 00:25:56.240 "strip_size_kb": 0, 00:25:56.240 "state": "online", 00:25:56.240 "raid_level": "raid1", 00:25:56.240 "superblock": true, 00:25:56.240 "num_base_bdevs": 4, 00:25:56.240 "num_base_bdevs_discovered": 4, 00:25:56.240 "num_base_bdevs_operational": 4, 00:25:56.240 "base_bdevs_list": [ 00:25:56.240 { 00:25:56.240 "name": "BaseBdev1", 00:25:56.240 "uuid": "01656b5c-ab12-45b4-9928-ef19f1aaf89c", 00:25:56.240 "is_configured": true, 00:25:56.240 "data_offset": 2048, 00:25:56.240 "data_size": 63488 00:25:56.240 }, 00:25:56.240 { 00:25:56.240 "name": "BaseBdev2", 00:25:56.240 "uuid": "d474be1a-a8a3-4a40-88c2-2d319be36978", 00:25:56.240 "is_configured": true, 00:25:56.240 "data_offset": 2048, 00:25:56.240 "data_size": 63488 00:25:56.240 }, 00:25:56.240 { 00:25:56.240 "name": "BaseBdev3", 00:25:56.240 "uuid": "7ca3d8ad-9489-4102-a4b0-7d95a3c451ff", 00:25:56.240 "is_configured": true, 00:25:56.240 "data_offset": 2048, 00:25:56.240 "data_size": 63488 00:25:56.240 }, 00:25:56.240 { 00:25:56.240 "name": "BaseBdev4", 
00:25:56.240 "uuid": "ab75695b-5830-4c14-bc1a-47f4bb66427b", 00:25:56.240 "is_configured": true, 00:25:56.240 "data_offset": 2048, 00:25:56.240 "data_size": 63488 00:25:56.240 } 00:25:56.240 ] 00:25:56.240 }' 00:25:56.240 04:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:56.240 04:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:56.809 04:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:25:56.809 04:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:25:56.809 04:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:56.809 04:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:56.809 04:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:56.809 04:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:25:56.809 04:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:25:56.809 04:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:56.809 [2024-07-23 04:21:05.582328] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:57.068 04:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:57.068 "name": "Existed_Raid", 00:25:57.069 "aliases": [ 00:25:57.069 "dea455fa-c943-4246-a25b-4c4a8051c4d2" 00:25:57.069 ], 00:25:57.069 "product_name": "Raid Volume", 00:25:57.069 "block_size": 512, 00:25:57.069 "num_blocks": 63488, 00:25:57.069 "uuid": "dea455fa-c943-4246-a25b-4c4a8051c4d2", 00:25:57.069 "assigned_rate_limits": { 
00:25:57.069 "rw_ios_per_sec": 0, 00:25:57.069 "rw_mbytes_per_sec": 0, 00:25:57.069 "r_mbytes_per_sec": 0, 00:25:57.069 "w_mbytes_per_sec": 0 00:25:57.069 }, 00:25:57.069 "claimed": false, 00:25:57.069 "zoned": false, 00:25:57.069 "supported_io_types": { 00:25:57.069 "read": true, 00:25:57.069 "write": true, 00:25:57.069 "unmap": false, 00:25:57.069 "flush": false, 00:25:57.069 "reset": true, 00:25:57.069 "nvme_admin": false, 00:25:57.069 "nvme_io": false, 00:25:57.069 "nvme_io_md": false, 00:25:57.069 "write_zeroes": true, 00:25:57.069 "zcopy": false, 00:25:57.069 "get_zone_info": false, 00:25:57.069 "zone_management": false, 00:25:57.069 "zone_append": false, 00:25:57.069 "compare": false, 00:25:57.069 "compare_and_write": false, 00:25:57.069 "abort": false, 00:25:57.069 "seek_hole": false, 00:25:57.069 "seek_data": false, 00:25:57.069 "copy": false, 00:25:57.069 "nvme_iov_md": false 00:25:57.069 }, 00:25:57.069 "memory_domains": [ 00:25:57.069 { 00:25:57.069 "dma_device_id": "system", 00:25:57.069 "dma_device_type": 1 00:25:57.069 }, 00:25:57.069 { 00:25:57.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:57.069 "dma_device_type": 2 00:25:57.069 }, 00:25:57.069 { 00:25:57.069 "dma_device_id": "system", 00:25:57.069 "dma_device_type": 1 00:25:57.069 }, 00:25:57.069 { 00:25:57.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:57.069 "dma_device_type": 2 00:25:57.069 }, 00:25:57.069 { 00:25:57.069 "dma_device_id": "system", 00:25:57.069 "dma_device_type": 1 00:25:57.069 }, 00:25:57.069 { 00:25:57.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:57.069 "dma_device_type": 2 00:25:57.069 }, 00:25:57.069 { 00:25:57.069 "dma_device_id": "system", 00:25:57.069 "dma_device_type": 1 00:25:57.069 }, 00:25:57.069 { 00:25:57.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:57.069 "dma_device_type": 2 00:25:57.069 } 00:25:57.069 ], 00:25:57.069 "driver_specific": { 00:25:57.069 "raid": { 00:25:57.069 "uuid": "dea455fa-c943-4246-a25b-4c4a8051c4d2", 00:25:57.069 
"strip_size_kb": 0, 00:25:57.069 "state": "online", 00:25:57.069 "raid_level": "raid1", 00:25:57.069 "superblock": true, 00:25:57.069 "num_base_bdevs": 4, 00:25:57.069 "num_base_bdevs_discovered": 4, 00:25:57.069 "num_base_bdevs_operational": 4, 00:25:57.069 "base_bdevs_list": [ 00:25:57.069 { 00:25:57.069 "name": "BaseBdev1", 00:25:57.069 "uuid": "01656b5c-ab12-45b4-9928-ef19f1aaf89c", 00:25:57.069 "is_configured": true, 00:25:57.069 "data_offset": 2048, 00:25:57.069 "data_size": 63488 00:25:57.069 }, 00:25:57.069 { 00:25:57.069 "name": "BaseBdev2", 00:25:57.069 "uuid": "d474be1a-a8a3-4a40-88c2-2d319be36978", 00:25:57.069 "is_configured": true, 00:25:57.069 "data_offset": 2048, 00:25:57.069 "data_size": 63488 00:25:57.069 }, 00:25:57.069 { 00:25:57.069 "name": "BaseBdev3", 00:25:57.069 "uuid": "7ca3d8ad-9489-4102-a4b0-7d95a3c451ff", 00:25:57.069 "is_configured": true, 00:25:57.069 "data_offset": 2048, 00:25:57.069 "data_size": 63488 00:25:57.069 }, 00:25:57.069 { 00:25:57.069 "name": "BaseBdev4", 00:25:57.069 "uuid": "ab75695b-5830-4c14-bc1a-47f4bb66427b", 00:25:57.069 "is_configured": true, 00:25:57.069 "data_offset": 2048, 00:25:57.069 "data_size": 63488 00:25:57.069 } 00:25:57.069 ] 00:25:57.069 } 00:25:57.069 } 00:25:57.069 }' 00:25:57.069 04:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:57.069 04:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:25:57.069 BaseBdev2 00:25:57.069 BaseBdev3 00:25:57.069 BaseBdev4' 00:25:57.069 04:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:57.069 04:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:25:57.069 04:21:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:57.328 04:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:57.328 "name": "BaseBdev1", 00:25:57.328 "aliases": [ 00:25:57.328 "01656b5c-ab12-45b4-9928-ef19f1aaf89c" 00:25:57.328 ], 00:25:57.328 "product_name": "Malloc disk", 00:25:57.328 "block_size": 512, 00:25:57.328 "num_blocks": 65536, 00:25:57.328 "uuid": "01656b5c-ab12-45b4-9928-ef19f1aaf89c", 00:25:57.328 "assigned_rate_limits": { 00:25:57.328 "rw_ios_per_sec": 0, 00:25:57.328 "rw_mbytes_per_sec": 0, 00:25:57.328 "r_mbytes_per_sec": 0, 00:25:57.328 "w_mbytes_per_sec": 0 00:25:57.328 }, 00:25:57.328 "claimed": true, 00:25:57.328 "claim_type": "exclusive_write", 00:25:57.328 "zoned": false, 00:25:57.328 "supported_io_types": { 00:25:57.328 "read": true, 00:25:57.328 "write": true, 00:25:57.328 "unmap": true, 00:25:57.328 "flush": true, 00:25:57.328 "reset": true, 00:25:57.328 "nvme_admin": false, 00:25:57.328 "nvme_io": false, 00:25:57.328 "nvme_io_md": false, 00:25:57.328 "write_zeroes": true, 00:25:57.328 "zcopy": true, 00:25:57.328 "get_zone_info": false, 00:25:57.328 "zone_management": false, 00:25:57.328 "zone_append": false, 00:25:57.328 "compare": false, 00:25:57.328 "compare_and_write": false, 00:25:57.328 "abort": true, 00:25:57.328 "seek_hole": false, 00:25:57.328 "seek_data": false, 00:25:57.328 "copy": true, 00:25:57.328 "nvme_iov_md": false 00:25:57.328 }, 00:25:57.328 "memory_domains": [ 00:25:57.328 { 00:25:57.328 "dma_device_id": "system", 00:25:57.328 "dma_device_type": 1 00:25:57.329 }, 00:25:57.329 { 00:25:57.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:57.329 "dma_device_type": 2 00:25:57.329 } 00:25:57.329 ], 00:25:57.329 "driver_specific": {} 00:25:57.329 }' 00:25:57.329 04:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:57.329 04:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:25:57.329 04:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:57.329 04:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:57.329 04:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:57.329 04:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:57.329 04:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:57.329 04:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:57.587 04:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:57.587 04:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:57.587 04:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:57.587 04:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:57.587 04:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:57.587 04:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:25:57.587 04:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:57.845 04:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:57.845 "name": "BaseBdev2", 00:25:57.845 "aliases": [ 00:25:57.845 "d474be1a-a8a3-4a40-88c2-2d319be36978" 00:25:57.845 ], 00:25:57.845 "product_name": "Malloc disk", 00:25:57.845 "block_size": 512, 00:25:57.845 "num_blocks": 65536, 00:25:57.845 "uuid": "d474be1a-a8a3-4a40-88c2-2d319be36978", 00:25:57.845 "assigned_rate_limits": { 00:25:57.845 "rw_ios_per_sec": 0, 00:25:57.845 
"rw_mbytes_per_sec": 0, 00:25:57.845 "r_mbytes_per_sec": 0, 00:25:57.845 "w_mbytes_per_sec": 0 00:25:57.845 }, 00:25:57.845 "claimed": true, 00:25:57.845 "claim_type": "exclusive_write", 00:25:57.845 "zoned": false, 00:25:57.845 "supported_io_types": { 00:25:57.845 "read": true, 00:25:57.845 "write": true, 00:25:57.845 "unmap": true, 00:25:57.845 "flush": true, 00:25:57.845 "reset": true, 00:25:57.845 "nvme_admin": false, 00:25:57.845 "nvme_io": false, 00:25:57.845 "nvme_io_md": false, 00:25:57.845 "write_zeroes": true, 00:25:57.845 "zcopy": true, 00:25:57.845 "get_zone_info": false, 00:25:57.845 "zone_management": false, 00:25:57.845 "zone_append": false, 00:25:57.845 "compare": false, 00:25:57.845 "compare_and_write": false, 00:25:57.845 "abort": true, 00:25:57.845 "seek_hole": false, 00:25:57.845 "seek_data": false, 00:25:57.845 "copy": true, 00:25:57.845 "nvme_iov_md": false 00:25:57.845 }, 00:25:57.845 "memory_domains": [ 00:25:57.845 { 00:25:57.845 "dma_device_id": "system", 00:25:57.845 "dma_device_type": 1 00:25:57.845 }, 00:25:57.845 { 00:25:57.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:57.845 "dma_device_type": 2 00:25:57.845 } 00:25:57.845 ], 00:25:57.845 "driver_specific": {} 00:25:57.845 }' 00:25:57.845 04:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:57.845 04:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:57.845 04:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:57.845 04:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:57.845 04:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:57.845 04:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:57.845 04:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:58.104 04:21:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:58.104 04:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:58.104 04:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:58.104 04:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:58.104 04:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:58.104 04:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:58.104 04:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:25:58.104 04:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:58.362 04:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:58.362 "name": "BaseBdev3", 00:25:58.362 "aliases": [ 00:25:58.362 "7ca3d8ad-9489-4102-a4b0-7d95a3c451ff" 00:25:58.362 ], 00:25:58.362 "product_name": "Malloc disk", 00:25:58.362 "block_size": 512, 00:25:58.362 "num_blocks": 65536, 00:25:58.362 "uuid": "7ca3d8ad-9489-4102-a4b0-7d95a3c451ff", 00:25:58.362 "assigned_rate_limits": { 00:25:58.362 "rw_ios_per_sec": 0, 00:25:58.362 "rw_mbytes_per_sec": 0, 00:25:58.362 "r_mbytes_per_sec": 0, 00:25:58.362 "w_mbytes_per_sec": 0 00:25:58.362 }, 00:25:58.362 "claimed": true, 00:25:58.362 "claim_type": "exclusive_write", 00:25:58.362 "zoned": false, 00:25:58.362 "supported_io_types": { 00:25:58.362 "read": true, 00:25:58.362 "write": true, 00:25:58.362 "unmap": true, 00:25:58.362 "flush": true, 00:25:58.362 "reset": true, 00:25:58.362 "nvme_admin": false, 00:25:58.362 "nvme_io": false, 00:25:58.362 "nvme_io_md": false, 00:25:58.362 "write_zeroes": true, 00:25:58.362 "zcopy": true, 00:25:58.362 
"get_zone_info": false, 00:25:58.362 "zone_management": false, 00:25:58.362 "zone_append": false, 00:25:58.362 "compare": false, 00:25:58.362 "compare_and_write": false, 00:25:58.362 "abort": true, 00:25:58.362 "seek_hole": false, 00:25:58.362 "seek_data": false, 00:25:58.362 "copy": true, 00:25:58.362 "nvme_iov_md": false 00:25:58.362 }, 00:25:58.362 "memory_domains": [ 00:25:58.362 { 00:25:58.362 "dma_device_id": "system", 00:25:58.362 "dma_device_type": 1 00:25:58.362 }, 00:25:58.362 { 00:25:58.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:58.362 "dma_device_type": 2 00:25:58.362 } 00:25:58.362 ], 00:25:58.362 "driver_specific": {} 00:25:58.362 }' 00:25:58.362 04:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:58.363 04:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:58.363 04:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:58.363 04:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:58.363 04:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:58.621 04:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:58.621 04:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:58.621 04:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:58.621 04:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:58.621 04:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:58.621 04:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:58.621 04:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:58.621 04:21:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:58.621 04:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:25:58.621 04:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:58.880 04:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:58.880 "name": "BaseBdev4", 00:25:58.880 "aliases": [ 00:25:58.880 "ab75695b-5830-4c14-bc1a-47f4bb66427b" 00:25:58.880 ], 00:25:58.880 "product_name": "Malloc disk", 00:25:58.880 "block_size": 512, 00:25:58.880 "num_blocks": 65536, 00:25:58.880 "uuid": "ab75695b-5830-4c14-bc1a-47f4bb66427b", 00:25:58.880 "assigned_rate_limits": { 00:25:58.880 "rw_ios_per_sec": 0, 00:25:58.880 "rw_mbytes_per_sec": 0, 00:25:58.880 "r_mbytes_per_sec": 0, 00:25:58.880 "w_mbytes_per_sec": 0 00:25:58.880 }, 00:25:58.880 "claimed": true, 00:25:58.880 "claim_type": "exclusive_write", 00:25:58.880 "zoned": false, 00:25:58.880 "supported_io_types": { 00:25:58.880 "read": true, 00:25:58.880 "write": true, 00:25:58.880 "unmap": true, 00:25:58.880 "flush": true, 00:25:58.880 "reset": true, 00:25:58.880 "nvme_admin": false, 00:25:58.880 "nvme_io": false, 00:25:58.880 "nvme_io_md": false, 00:25:58.880 "write_zeroes": true, 00:25:58.880 "zcopy": true, 00:25:58.880 "get_zone_info": false, 00:25:58.880 "zone_management": false, 00:25:58.880 "zone_append": false, 00:25:58.880 "compare": false, 00:25:58.880 "compare_and_write": false, 00:25:58.880 "abort": true, 00:25:58.880 "seek_hole": false, 00:25:58.880 "seek_data": false, 00:25:58.880 "copy": true, 00:25:58.880 "nvme_iov_md": false 00:25:58.880 }, 00:25:58.880 "memory_domains": [ 00:25:58.880 { 00:25:58.880 "dma_device_id": "system", 00:25:58.880 "dma_device_type": 1 00:25:58.880 }, 00:25:58.880 { 00:25:58.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:58.880 
"dma_device_type": 2 00:25:58.880 } 00:25:58.880 ], 00:25:58.880 "driver_specific": {} 00:25:58.880 }' 00:25:58.880 04:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:58.880 04:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:58.880 04:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:58.880 04:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:59.138 04:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:59.138 04:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:59.138 04:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:59.139 04:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:59.139 04:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:59.139 04:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:59.139 04:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:59.139 04:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:59.139 04:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:59.397 [2024-07-23 04:21:08.104784] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:59.397 04:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:25:59.397 04:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:25:59.397 04:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- 
# case $1 in 00:25:59.397 04:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:25:59.397 04:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:25:59.397 04:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:25:59.397 04:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:59.397 04:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:59.397 04:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:59.397 04:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:59.397 04:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:59.397 04:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:59.397 04:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:59.397 04:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:59.397 04:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:59.397 04:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:59.397 04:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:59.655 04:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:59.655 "name": "Existed_Raid", 00:25:59.655 "uuid": "dea455fa-c943-4246-a25b-4c4a8051c4d2", 00:25:59.655 "strip_size_kb": 0, 00:25:59.655 
"state": "online", 00:25:59.655 "raid_level": "raid1", 00:25:59.655 "superblock": true, 00:25:59.655 "num_base_bdevs": 4, 00:25:59.655 "num_base_bdevs_discovered": 3, 00:25:59.655 "num_base_bdevs_operational": 3, 00:25:59.655 "base_bdevs_list": [ 00:25:59.655 { 00:25:59.655 "name": null, 00:25:59.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:59.655 "is_configured": false, 00:25:59.655 "data_offset": 2048, 00:25:59.655 "data_size": 63488 00:25:59.655 }, 00:25:59.655 { 00:25:59.655 "name": "BaseBdev2", 00:25:59.655 "uuid": "d474be1a-a8a3-4a40-88c2-2d319be36978", 00:25:59.655 "is_configured": true, 00:25:59.655 "data_offset": 2048, 00:25:59.655 "data_size": 63488 00:25:59.655 }, 00:25:59.655 { 00:25:59.655 "name": "BaseBdev3", 00:25:59.655 "uuid": "7ca3d8ad-9489-4102-a4b0-7d95a3c451ff", 00:25:59.655 "is_configured": true, 00:25:59.655 "data_offset": 2048, 00:25:59.655 "data_size": 63488 00:25:59.655 }, 00:25:59.655 { 00:25:59.655 "name": "BaseBdev4", 00:25:59.655 "uuid": "ab75695b-5830-4c14-bc1a-47f4bb66427b", 00:25:59.655 "is_configured": true, 00:25:59.655 "data_offset": 2048, 00:25:59.655 "data_size": 63488 00:25:59.655 } 00:25:59.655 ] 00:25:59.655 }' 00:25:59.655 04:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:59.655 04:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:00.232 04:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:26:00.232 04:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:00.232 04:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:00.232 04:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:26:00.489 04:21:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:26:00.489 04:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:00.489 04:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:26:00.747 [2024-07-23 04:21:09.410197] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:01.005 04:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:26:01.006 04:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:01.006 04:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:01.006 04:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:26:01.264 04:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:26:01.264 04:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:01.264 04:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:26:01.264 [2024-07-23 04:21:10.009295] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:01.522 04:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:26:01.522 04:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:01.522 04:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:26:01.522 04:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:26:01.780 04:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:26:01.780 04:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:01.780 04:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:26:02.038 [2024-07-23 04:21:10.601684] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:26:02.038 [2024-07-23 04:21:10.601796] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:02.038 [2024-07-23 04:21:10.730232] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:02.038 [2024-07-23 04:21:10.730290] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:02.038 [2024-07-23 04:21:10.730309] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:26:02.038 04:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:26:02.038 04:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:02.038 04:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:02.038 04:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:26:02.296 04:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:26:02.296 04:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:26:02.296 
04:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:26:02.296 04:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:26:02.296 04:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:26:02.296 04:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:26:02.553 BaseBdev2 00:26:02.553 04:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:26:02.553 04:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:26:02.553 04:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:02.553 04:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:26:02.553 04:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:02.553 04:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:02.553 04:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:02.811 04:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:03.068 [ 00:26:03.068 { 00:26:03.068 "name": "BaseBdev2", 00:26:03.068 "aliases": [ 00:26:03.068 "97081252-936f-434c-87e0-6cac7bef0c3d" 00:26:03.068 ], 00:26:03.068 "product_name": "Malloc disk", 00:26:03.068 "block_size": 512, 00:26:03.068 "num_blocks": 65536, 00:26:03.068 "uuid": 
"97081252-936f-434c-87e0-6cac7bef0c3d", 00:26:03.068 "assigned_rate_limits": { 00:26:03.068 "rw_ios_per_sec": 0, 00:26:03.068 "rw_mbytes_per_sec": 0, 00:26:03.068 "r_mbytes_per_sec": 0, 00:26:03.068 "w_mbytes_per_sec": 0 00:26:03.068 }, 00:26:03.068 "claimed": false, 00:26:03.068 "zoned": false, 00:26:03.068 "supported_io_types": { 00:26:03.068 "read": true, 00:26:03.068 "write": true, 00:26:03.068 "unmap": true, 00:26:03.068 "flush": true, 00:26:03.068 "reset": true, 00:26:03.068 "nvme_admin": false, 00:26:03.068 "nvme_io": false, 00:26:03.068 "nvme_io_md": false, 00:26:03.068 "write_zeroes": true, 00:26:03.068 "zcopy": true, 00:26:03.068 "get_zone_info": false, 00:26:03.068 "zone_management": false, 00:26:03.068 "zone_append": false, 00:26:03.068 "compare": false, 00:26:03.068 "compare_and_write": false, 00:26:03.068 "abort": true, 00:26:03.068 "seek_hole": false, 00:26:03.068 "seek_data": false, 00:26:03.068 "copy": true, 00:26:03.068 "nvme_iov_md": false 00:26:03.068 }, 00:26:03.068 "memory_domains": [ 00:26:03.068 { 00:26:03.068 "dma_device_id": "system", 00:26:03.068 "dma_device_type": 1 00:26:03.068 }, 00:26:03.068 { 00:26:03.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:03.068 "dma_device_type": 2 00:26:03.068 } 00:26:03.069 ], 00:26:03.069 "driver_specific": {} 00:26:03.069 } 00:26:03.069 ] 00:26:03.069 04:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:26:03.069 04:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:26:03.069 04:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:26:03.069 04:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:26:03.326 BaseBdev3 00:26:03.326 04:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev 
BaseBdev3 00:26:03.326 04:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:26:03.326 04:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:03.326 04:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:26:03.326 04:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:03.326 04:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:03.326 04:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:03.584 04:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:03.842 [ 00:26:03.842 { 00:26:03.842 "name": "BaseBdev3", 00:26:03.842 "aliases": [ 00:26:03.842 "cbf9a261-49cd-46cd-9823-a821234e8c95" 00:26:03.842 ], 00:26:03.842 "product_name": "Malloc disk", 00:26:03.842 "block_size": 512, 00:26:03.842 "num_blocks": 65536, 00:26:03.842 "uuid": "cbf9a261-49cd-46cd-9823-a821234e8c95", 00:26:03.842 "assigned_rate_limits": { 00:26:03.842 "rw_ios_per_sec": 0, 00:26:03.842 "rw_mbytes_per_sec": 0, 00:26:03.842 "r_mbytes_per_sec": 0, 00:26:03.842 "w_mbytes_per_sec": 0 00:26:03.842 }, 00:26:03.842 "claimed": false, 00:26:03.842 "zoned": false, 00:26:03.842 "supported_io_types": { 00:26:03.842 "read": true, 00:26:03.842 "write": true, 00:26:03.842 "unmap": true, 00:26:03.842 "flush": true, 00:26:03.842 "reset": true, 00:26:03.842 "nvme_admin": false, 00:26:03.842 "nvme_io": false, 00:26:03.842 "nvme_io_md": false, 00:26:03.842 "write_zeroes": true, 00:26:03.842 "zcopy": true, 00:26:03.842 "get_zone_info": false, 00:26:03.842 
"zone_management": false, 00:26:03.842 "zone_append": false, 00:26:03.842 "compare": false, 00:26:03.842 "compare_and_write": false, 00:26:03.842 "abort": true, 00:26:03.842 "seek_hole": false, 00:26:03.842 "seek_data": false, 00:26:03.842 "copy": true, 00:26:03.842 "nvme_iov_md": false 00:26:03.842 }, 00:26:03.842 "memory_domains": [ 00:26:03.842 { 00:26:03.842 "dma_device_id": "system", 00:26:03.842 "dma_device_type": 1 00:26:03.842 }, 00:26:03.842 { 00:26:03.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:03.842 "dma_device_type": 2 00:26:03.842 } 00:26:03.842 ], 00:26:03.842 "driver_specific": {} 00:26:03.842 } 00:26:03.842 ] 00:26:03.842 04:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:26:03.842 04:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:26:03.842 04:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:26:03.842 04:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:26:04.100 BaseBdev4 00:26:04.100 04:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:26:04.100 04:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:26:04.100 04:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:04.100 04:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:26:04.100 04:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:04.100 04:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:04.100 04:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:04.359 04:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:26:04.359 [ 00:26:04.359 { 00:26:04.359 "name": "BaseBdev4", 00:26:04.359 "aliases": [ 00:26:04.359 "24808681-0705-4ec3-9f34-a5985f299e1b" 00:26:04.359 ], 00:26:04.359 "product_name": "Malloc disk", 00:26:04.359 "block_size": 512, 00:26:04.359 "num_blocks": 65536, 00:26:04.359 "uuid": "24808681-0705-4ec3-9f34-a5985f299e1b", 00:26:04.359 "assigned_rate_limits": { 00:26:04.359 "rw_ios_per_sec": 0, 00:26:04.359 "rw_mbytes_per_sec": 0, 00:26:04.359 "r_mbytes_per_sec": 0, 00:26:04.359 "w_mbytes_per_sec": 0 00:26:04.359 }, 00:26:04.359 "claimed": false, 00:26:04.359 "zoned": false, 00:26:04.359 "supported_io_types": { 00:26:04.359 "read": true, 00:26:04.359 "write": true, 00:26:04.359 "unmap": true, 00:26:04.359 "flush": true, 00:26:04.359 "reset": true, 00:26:04.359 "nvme_admin": false, 00:26:04.359 "nvme_io": false, 00:26:04.359 "nvme_io_md": false, 00:26:04.359 "write_zeroes": true, 00:26:04.359 "zcopy": true, 00:26:04.359 "get_zone_info": false, 00:26:04.359 "zone_management": false, 00:26:04.359 "zone_append": false, 00:26:04.359 "compare": false, 00:26:04.359 "compare_and_write": false, 00:26:04.359 "abort": true, 00:26:04.359 "seek_hole": false, 00:26:04.359 "seek_data": false, 00:26:04.359 "copy": true, 00:26:04.359 "nvme_iov_md": false 00:26:04.359 }, 00:26:04.359 "memory_domains": [ 00:26:04.359 { 00:26:04.359 "dma_device_id": "system", 00:26:04.359 "dma_device_type": 1 00:26:04.359 }, 00:26:04.359 { 00:26:04.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:04.359 "dma_device_type": 2 00:26:04.359 } 00:26:04.359 ], 00:26:04.359 "driver_specific": {} 00:26:04.359 } 00:26:04.359 ] 00:26:04.617 04:21:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:26:04.617 04:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:26:04.617 04:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:26:04.617 04:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:04.617 [2024-07-23 04:21:13.357052] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:04.617 [2024-07-23 04:21:13.357102] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:04.617 [2024-07-23 04:21:13.357129] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:04.617 [2024-07-23 04:21:13.359453] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:04.617 [2024-07-23 04:21:13.359510] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:04.617 04:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:04.617 04:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:04.617 04:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:04.617 04:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:04.617 04:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:04.617 04:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:04.617 04:21:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:04.617 04:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:04.617 04:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:04.617 04:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:04.617 04:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:04.617 04:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:04.875 04:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:04.875 "name": "Existed_Raid", 00:26:04.875 "uuid": "8e771e62-44a0-4de3-a2e4-4ff7dadfd824", 00:26:04.875 "strip_size_kb": 0, 00:26:04.875 "state": "configuring", 00:26:04.875 "raid_level": "raid1", 00:26:04.875 "superblock": true, 00:26:04.875 "num_base_bdevs": 4, 00:26:04.875 "num_base_bdevs_discovered": 3, 00:26:04.875 "num_base_bdevs_operational": 4, 00:26:04.875 "base_bdevs_list": [ 00:26:04.875 { 00:26:04.875 "name": "BaseBdev1", 00:26:04.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:04.875 "is_configured": false, 00:26:04.875 "data_offset": 0, 00:26:04.875 "data_size": 0 00:26:04.875 }, 00:26:04.875 { 00:26:04.875 "name": "BaseBdev2", 00:26:04.875 "uuid": "97081252-936f-434c-87e0-6cac7bef0c3d", 00:26:04.875 "is_configured": true, 00:26:04.875 "data_offset": 2048, 00:26:04.875 "data_size": 63488 00:26:04.875 }, 00:26:04.875 { 00:26:04.875 "name": "BaseBdev3", 00:26:04.875 "uuid": "cbf9a261-49cd-46cd-9823-a821234e8c95", 00:26:04.875 "is_configured": true, 00:26:04.875 "data_offset": 2048, 00:26:04.875 "data_size": 63488 00:26:04.875 }, 00:26:04.875 { 00:26:04.875 "name": "BaseBdev4", 
00:26:04.875 "uuid": "24808681-0705-4ec3-9f34-a5985f299e1b", 00:26:04.875 "is_configured": true, 00:26:04.875 "data_offset": 2048, 00:26:04.875 "data_size": 63488 00:26:04.875 } 00:26:04.875 ] 00:26:04.875 }' 00:26:04.875 04:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:04.875 04:21:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:05.441 04:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:26:05.701 [2024-07-23 04:21:14.391971] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:05.701 04:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:05.701 04:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:05.701 04:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:05.701 04:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:05.701 04:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:05.701 04:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:05.701 04:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:05.701 04:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:05.701 04:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:05.701 04:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:05.701 04:21:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:05.701 04:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:05.960 04:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:05.960 "name": "Existed_Raid", 00:26:05.960 "uuid": "8e771e62-44a0-4de3-a2e4-4ff7dadfd824", 00:26:05.960 "strip_size_kb": 0, 00:26:05.960 "state": "configuring", 00:26:05.960 "raid_level": "raid1", 00:26:05.960 "superblock": true, 00:26:05.960 "num_base_bdevs": 4, 00:26:05.960 "num_base_bdevs_discovered": 2, 00:26:05.960 "num_base_bdevs_operational": 4, 00:26:05.960 "base_bdevs_list": [ 00:26:05.960 { 00:26:05.960 "name": "BaseBdev1", 00:26:05.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:05.960 "is_configured": false, 00:26:05.960 "data_offset": 0, 00:26:05.960 "data_size": 0 00:26:05.960 }, 00:26:05.960 { 00:26:05.960 "name": null, 00:26:05.960 "uuid": "97081252-936f-434c-87e0-6cac7bef0c3d", 00:26:05.960 "is_configured": false, 00:26:05.960 "data_offset": 2048, 00:26:05.960 "data_size": 63488 00:26:05.960 }, 00:26:05.960 { 00:26:05.960 "name": "BaseBdev3", 00:26:05.960 "uuid": "cbf9a261-49cd-46cd-9823-a821234e8c95", 00:26:05.960 "is_configured": true, 00:26:05.960 "data_offset": 2048, 00:26:05.960 "data_size": 63488 00:26:05.960 }, 00:26:05.960 { 00:26:05.960 "name": "BaseBdev4", 00:26:05.960 "uuid": "24808681-0705-4ec3-9f34-a5985f299e1b", 00:26:05.960 "is_configured": true, 00:26:05.960 "data_offset": 2048, 00:26:05.960 "data_size": 63488 00:26:05.960 } 00:26:05.960 ] 00:26:05.960 }' 00:26:05.960 04:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:05.960 04:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:06.526 04:21:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:06.526 04:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:06.784 04:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:26:06.784 04:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:07.042 [2024-07-23 04:21:15.699890] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:07.042 BaseBdev1 00:26:07.042 04:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:26:07.042 04:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:26:07.042 04:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:07.042 04:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:26:07.042 04:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:07.042 04:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:07.042 04:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:07.300 04:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:07.559 [ 00:26:07.559 { 00:26:07.559 "name": "BaseBdev1", 00:26:07.559 "aliases": [ 00:26:07.559 
"d794eacc-ee5f-45e1-ba7a-55515f562290" 00:26:07.559 ], 00:26:07.559 "product_name": "Malloc disk", 00:26:07.559 "block_size": 512, 00:26:07.559 "num_blocks": 65536, 00:26:07.559 "uuid": "d794eacc-ee5f-45e1-ba7a-55515f562290", 00:26:07.559 "assigned_rate_limits": { 00:26:07.559 "rw_ios_per_sec": 0, 00:26:07.559 "rw_mbytes_per_sec": 0, 00:26:07.559 "r_mbytes_per_sec": 0, 00:26:07.559 "w_mbytes_per_sec": 0 00:26:07.559 }, 00:26:07.559 "claimed": true, 00:26:07.559 "claim_type": "exclusive_write", 00:26:07.559 "zoned": false, 00:26:07.559 "supported_io_types": { 00:26:07.559 "read": true, 00:26:07.559 "write": true, 00:26:07.559 "unmap": true, 00:26:07.559 "flush": true, 00:26:07.559 "reset": true, 00:26:07.559 "nvme_admin": false, 00:26:07.559 "nvme_io": false, 00:26:07.559 "nvme_io_md": false, 00:26:07.559 "write_zeroes": true, 00:26:07.559 "zcopy": true, 00:26:07.559 "get_zone_info": false, 00:26:07.559 "zone_management": false, 00:26:07.559 "zone_append": false, 00:26:07.559 "compare": false, 00:26:07.559 "compare_and_write": false, 00:26:07.559 "abort": true, 00:26:07.559 "seek_hole": false, 00:26:07.559 "seek_data": false, 00:26:07.559 "copy": true, 00:26:07.559 "nvme_iov_md": false 00:26:07.559 }, 00:26:07.559 "memory_domains": [ 00:26:07.559 { 00:26:07.559 "dma_device_id": "system", 00:26:07.559 "dma_device_type": 1 00:26:07.559 }, 00:26:07.559 { 00:26:07.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:07.559 "dma_device_type": 2 00:26:07.559 } 00:26:07.559 ], 00:26:07.559 "driver_specific": {} 00:26:07.559 } 00:26:07.559 ] 00:26:07.559 04:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:26:07.559 04:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:07.559 04:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:07.559 04:21:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:07.559 04:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:07.559 04:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:07.559 04:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:07.559 04:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:07.559 04:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:07.559 04:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:07.559 04:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:07.559 04:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:07.559 04:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:07.817 04:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:07.817 "name": "Existed_Raid", 00:26:07.817 "uuid": "8e771e62-44a0-4de3-a2e4-4ff7dadfd824", 00:26:07.817 "strip_size_kb": 0, 00:26:07.817 "state": "configuring", 00:26:07.817 "raid_level": "raid1", 00:26:07.817 "superblock": true, 00:26:07.817 "num_base_bdevs": 4, 00:26:07.817 "num_base_bdevs_discovered": 3, 00:26:07.817 "num_base_bdevs_operational": 4, 00:26:07.817 "base_bdevs_list": [ 00:26:07.817 { 00:26:07.817 "name": "BaseBdev1", 00:26:07.817 "uuid": "d794eacc-ee5f-45e1-ba7a-55515f562290", 00:26:07.817 "is_configured": true, 00:26:07.817 "data_offset": 2048, 00:26:07.817 "data_size": 63488 00:26:07.817 }, 00:26:07.817 { 00:26:07.817 "name": null, 00:26:07.817 "uuid": 
"97081252-936f-434c-87e0-6cac7bef0c3d", 00:26:07.817 "is_configured": false, 00:26:07.817 "data_offset": 2048, 00:26:07.817 "data_size": 63488 00:26:07.817 }, 00:26:07.817 { 00:26:07.817 "name": "BaseBdev3", 00:26:07.817 "uuid": "cbf9a261-49cd-46cd-9823-a821234e8c95", 00:26:07.817 "is_configured": true, 00:26:07.817 "data_offset": 2048, 00:26:07.817 "data_size": 63488 00:26:07.817 }, 00:26:07.817 { 00:26:07.817 "name": "BaseBdev4", 00:26:07.817 "uuid": "24808681-0705-4ec3-9f34-a5985f299e1b", 00:26:07.817 "is_configured": true, 00:26:07.817 "data_offset": 2048, 00:26:07.817 "data_size": 63488 00:26:07.817 } 00:26:07.817 ] 00:26:07.817 }' 00:26:07.817 04:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:07.817 04:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:08.384 04:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:08.384 04:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:08.642 04:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:26:08.642 04:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:26:08.642 [2024-07-23 04:21:17.412659] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:08.900 04:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:08.900 04:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:08.900 04:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:26:08.900 04:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:08.900 04:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:08.900 04:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:08.900 04:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:08.900 04:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:08.900 04:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:08.900 04:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:08.900 04:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:08.900 04:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:08.900 04:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:08.900 "name": "Existed_Raid", 00:26:08.900 "uuid": "8e771e62-44a0-4de3-a2e4-4ff7dadfd824", 00:26:08.900 "strip_size_kb": 0, 00:26:08.900 "state": "configuring", 00:26:08.900 "raid_level": "raid1", 00:26:08.900 "superblock": true, 00:26:08.900 "num_base_bdevs": 4, 00:26:08.900 "num_base_bdevs_discovered": 2, 00:26:08.900 "num_base_bdevs_operational": 4, 00:26:08.900 "base_bdevs_list": [ 00:26:08.900 { 00:26:08.900 "name": "BaseBdev1", 00:26:08.900 "uuid": "d794eacc-ee5f-45e1-ba7a-55515f562290", 00:26:08.900 "is_configured": true, 00:26:08.900 "data_offset": 2048, 00:26:08.900 "data_size": 63488 00:26:08.900 }, 00:26:08.900 { 00:26:08.900 "name": null, 00:26:08.900 "uuid": "97081252-936f-434c-87e0-6cac7bef0c3d", 
00:26:08.900 "is_configured": false, 00:26:08.900 "data_offset": 2048, 00:26:08.900 "data_size": 63488 00:26:08.900 }, 00:26:08.900 { 00:26:08.900 "name": null, 00:26:08.900 "uuid": "cbf9a261-49cd-46cd-9823-a821234e8c95", 00:26:08.900 "is_configured": false, 00:26:08.900 "data_offset": 2048, 00:26:08.900 "data_size": 63488 00:26:08.900 }, 00:26:08.900 { 00:26:08.900 "name": "BaseBdev4", 00:26:08.900 "uuid": "24808681-0705-4ec3-9f34-a5985f299e1b", 00:26:08.900 "is_configured": true, 00:26:08.900 "data_offset": 2048, 00:26:08.900 "data_size": 63488 00:26:08.900 } 00:26:08.900 ] 00:26:08.900 }' 00:26:08.900 04:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:08.900 04:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:09.465 04:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:09.465 04:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:09.723 04:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:26:09.723 04:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:26:09.981 [2024-07-23 04:21:18.656037] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:09.981 04:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:09.981 04:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:09.981 04:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:26:09.981 04:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:09.981 04:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:09.981 04:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:09.981 04:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:09.981 04:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:09.981 04:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:09.981 04:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:09.981 04:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:09.981 04:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:10.239 04:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:10.239 "name": "Existed_Raid", 00:26:10.239 "uuid": "8e771e62-44a0-4de3-a2e4-4ff7dadfd824", 00:26:10.239 "strip_size_kb": 0, 00:26:10.239 "state": "configuring", 00:26:10.239 "raid_level": "raid1", 00:26:10.239 "superblock": true, 00:26:10.239 "num_base_bdevs": 4, 00:26:10.239 "num_base_bdevs_discovered": 3, 00:26:10.239 "num_base_bdevs_operational": 4, 00:26:10.239 "base_bdevs_list": [ 00:26:10.239 { 00:26:10.239 "name": "BaseBdev1", 00:26:10.239 "uuid": "d794eacc-ee5f-45e1-ba7a-55515f562290", 00:26:10.239 "is_configured": true, 00:26:10.239 "data_offset": 2048, 00:26:10.239 "data_size": 63488 00:26:10.239 }, 00:26:10.239 { 00:26:10.239 "name": null, 00:26:10.239 "uuid": "97081252-936f-434c-87e0-6cac7bef0c3d", 
00:26:10.239 "is_configured": false, 00:26:10.239 "data_offset": 2048, 00:26:10.239 "data_size": 63488 00:26:10.239 }, 00:26:10.239 { 00:26:10.239 "name": "BaseBdev3", 00:26:10.239 "uuid": "cbf9a261-49cd-46cd-9823-a821234e8c95", 00:26:10.239 "is_configured": true, 00:26:10.239 "data_offset": 2048, 00:26:10.239 "data_size": 63488 00:26:10.239 }, 00:26:10.239 { 00:26:10.239 "name": "BaseBdev4", 00:26:10.239 "uuid": "24808681-0705-4ec3-9f34-a5985f299e1b", 00:26:10.239 "is_configured": true, 00:26:10.239 "data_offset": 2048, 00:26:10.239 "data_size": 63488 00:26:10.239 } 00:26:10.239 ] 00:26:10.239 }' 00:26:10.239 04:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:10.239 04:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:10.806 04:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:10.806 04:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:11.064 04:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:26:11.064 04:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:11.322 [2024-07-23 04:21:19.915526] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:11.322 04:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:11.322 04:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:11.322 04:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:11.322 04:21:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:11.322 04:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:11.322 04:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:11.322 04:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:11.322 04:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:11.322 04:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:11.322 04:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:11.322 04:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:11.322 04:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:11.583 04:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:11.583 "name": "Existed_Raid", 00:26:11.583 "uuid": "8e771e62-44a0-4de3-a2e4-4ff7dadfd824", 00:26:11.583 "strip_size_kb": 0, 00:26:11.583 "state": "configuring", 00:26:11.583 "raid_level": "raid1", 00:26:11.583 "superblock": true, 00:26:11.583 "num_base_bdevs": 4, 00:26:11.583 "num_base_bdevs_discovered": 2, 00:26:11.583 "num_base_bdevs_operational": 4, 00:26:11.583 "base_bdevs_list": [ 00:26:11.583 { 00:26:11.583 "name": null, 00:26:11.583 "uuid": "d794eacc-ee5f-45e1-ba7a-55515f562290", 00:26:11.583 "is_configured": false, 00:26:11.583 "data_offset": 2048, 00:26:11.583 "data_size": 63488 00:26:11.583 }, 00:26:11.583 { 00:26:11.583 "name": null, 00:26:11.583 "uuid": "97081252-936f-434c-87e0-6cac7bef0c3d", 00:26:11.583 "is_configured": false, 00:26:11.583 
"data_offset": 2048, 00:26:11.583 "data_size": 63488 00:26:11.583 }, 00:26:11.583 { 00:26:11.583 "name": "BaseBdev3", 00:26:11.583 "uuid": "cbf9a261-49cd-46cd-9823-a821234e8c95", 00:26:11.583 "is_configured": true, 00:26:11.583 "data_offset": 2048, 00:26:11.583 "data_size": 63488 00:26:11.583 }, 00:26:11.583 { 00:26:11.583 "name": "BaseBdev4", 00:26:11.583 "uuid": "24808681-0705-4ec3-9f34-a5985f299e1b", 00:26:11.583 "is_configured": true, 00:26:11.583 "data_offset": 2048, 00:26:11.583 "data_size": 63488 00:26:11.583 } 00:26:11.583 ] 00:26:11.583 }' 00:26:11.583 04:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:11.583 04:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:12.197 04:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:12.197 04:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:12.454 04:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:26:12.454 04:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:26:12.712 [2024-07-23 04:21:21.324030] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:12.712 04:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:12.712 04:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:12.712 04:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:12.712 04:21:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:12.712 04:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:12.712 04:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:12.712 04:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:12.712 04:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:12.712 04:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:12.712 04:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:12.712 04:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:12.712 04:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:12.970 04:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:12.970 "name": "Existed_Raid", 00:26:12.970 "uuid": "8e771e62-44a0-4de3-a2e4-4ff7dadfd824", 00:26:12.970 "strip_size_kb": 0, 00:26:12.970 "state": "configuring", 00:26:12.970 "raid_level": "raid1", 00:26:12.970 "superblock": true, 00:26:12.970 "num_base_bdevs": 4, 00:26:12.970 "num_base_bdevs_discovered": 3, 00:26:12.970 "num_base_bdevs_operational": 4, 00:26:12.970 "base_bdevs_list": [ 00:26:12.970 { 00:26:12.970 "name": null, 00:26:12.970 "uuid": "d794eacc-ee5f-45e1-ba7a-55515f562290", 00:26:12.970 "is_configured": false, 00:26:12.970 "data_offset": 2048, 00:26:12.970 "data_size": 63488 00:26:12.970 }, 00:26:12.970 { 00:26:12.970 "name": "BaseBdev2", 00:26:12.970 "uuid": "97081252-936f-434c-87e0-6cac7bef0c3d", 00:26:12.970 "is_configured": true, 00:26:12.970 
"data_offset": 2048, 00:26:12.970 "data_size": 63488 00:26:12.970 }, 00:26:12.970 { 00:26:12.970 "name": "BaseBdev3", 00:26:12.970 "uuid": "cbf9a261-49cd-46cd-9823-a821234e8c95", 00:26:12.970 "is_configured": true, 00:26:12.970 "data_offset": 2048, 00:26:12.970 "data_size": 63488 00:26:12.970 }, 00:26:12.970 { 00:26:12.970 "name": "BaseBdev4", 00:26:12.970 "uuid": "24808681-0705-4ec3-9f34-a5985f299e1b", 00:26:12.970 "is_configured": true, 00:26:12.970 "data_offset": 2048, 00:26:12.970 "data_size": 63488 00:26:12.970 } 00:26:12.970 ] 00:26:12.970 }' 00:26:12.970 04:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:12.970 04:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:13.535 04:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:13.535 04:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:13.792 04:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:26:13.792 04:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:13.792 04:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:26:14.050 04:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u d794eacc-ee5f-45e1-ba7a-55515f562290 00:26:14.307 [2024-07-23 04:21:22.847557] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:26:14.307 [2024-07-23 04:21:22.847811] 
bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042080 00:26:14.307 [2024-07-23 04:21:22.847838] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:14.307 [2024-07-23 04:21:22.848204] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010b20 00:26:14.307 [2024-07-23 04:21:22.848440] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042080 00:26:14.307 [2024-07-23 04:21:22.848455] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000042080 00:26:14.307 NewBaseBdev 00:26:14.307 [2024-07-23 04:21:22.848637] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:14.307 04:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:26:14.307 04:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:26:14.308 04:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:14.308 04:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:26:14.308 04:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:14.308 04:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:14.308 04:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:14.565 04:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:26:14.565 [ 00:26:14.565 { 00:26:14.565 "name": "NewBaseBdev", 00:26:14.565 "aliases": [ 00:26:14.565 
"d794eacc-ee5f-45e1-ba7a-55515f562290" 00:26:14.565 ], 00:26:14.565 "product_name": "Malloc disk", 00:26:14.565 "block_size": 512, 00:26:14.565 "num_blocks": 65536, 00:26:14.565 "uuid": "d794eacc-ee5f-45e1-ba7a-55515f562290", 00:26:14.565 "assigned_rate_limits": { 00:26:14.565 "rw_ios_per_sec": 0, 00:26:14.565 "rw_mbytes_per_sec": 0, 00:26:14.565 "r_mbytes_per_sec": 0, 00:26:14.565 "w_mbytes_per_sec": 0 00:26:14.565 }, 00:26:14.565 "claimed": true, 00:26:14.565 "claim_type": "exclusive_write", 00:26:14.565 "zoned": false, 00:26:14.565 "supported_io_types": { 00:26:14.565 "read": true, 00:26:14.565 "write": true, 00:26:14.565 "unmap": true, 00:26:14.565 "flush": true, 00:26:14.565 "reset": true, 00:26:14.565 "nvme_admin": false, 00:26:14.565 "nvme_io": false, 00:26:14.565 "nvme_io_md": false, 00:26:14.565 "write_zeroes": true, 00:26:14.565 "zcopy": true, 00:26:14.565 "get_zone_info": false, 00:26:14.565 "zone_management": false, 00:26:14.565 "zone_append": false, 00:26:14.565 "compare": false, 00:26:14.565 "compare_and_write": false, 00:26:14.565 "abort": true, 00:26:14.565 "seek_hole": false, 00:26:14.565 "seek_data": false, 00:26:14.565 "copy": true, 00:26:14.565 "nvme_iov_md": false 00:26:14.565 }, 00:26:14.566 "memory_domains": [ 00:26:14.566 { 00:26:14.566 "dma_device_id": "system", 00:26:14.566 "dma_device_type": 1 00:26:14.566 }, 00:26:14.566 { 00:26:14.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:14.566 "dma_device_type": 2 00:26:14.566 } 00:26:14.566 ], 00:26:14.566 "driver_specific": {} 00:26:14.566 } 00:26:14.566 ] 00:26:14.566 04:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:26:14.566 04:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:26:14.566 04:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:14.566 04:21:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:14.566 04:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:14.566 04:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:14.566 04:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:14.566 04:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:14.566 04:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:14.566 04:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:14.566 04:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:14.566 04:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:14.566 04:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:14.823 04:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:14.823 "name": "Existed_Raid", 00:26:14.823 "uuid": "8e771e62-44a0-4de3-a2e4-4ff7dadfd824", 00:26:14.823 "strip_size_kb": 0, 00:26:14.823 "state": "online", 00:26:14.823 "raid_level": "raid1", 00:26:14.823 "superblock": true, 00:26:14.823 "num_base_bdevs": 4, 00:26:14.823 "num_base_bdevs_discovered": 4, 00:26:14.823 "num_base_bdevs_operational": 4, 00:26:14.823 "base_bdevs_list": [ 00:26:14.823 { 00:26:14.823 "name": "NewBaseBdev", 00:26:14.823 "uuid": "d794eacc-ee5f-45e1-ba7a-55515f562290", 00:26:14.823 "is_configured": true, 00:26:14.823 "data_offset": 2048, 00:26:14.823 "data_size": 63488 00:26:14.823 }, 00:26:14.823 { 00:26:14.823 "name": "BaseBdev2", 00:26:14.823 "uuid": 
"97081252-936f-434c-87e0-6cac7bef0c3d", 00:26:14.823 "is_configured": true, 00:26:14.823 "data_offset": 2048, 00:26:14.823 "data_size": 63488 00:26:14.823 }, 00:26:14.823 { 00:26:14.823 "name": "BaseBdev3", 00:26:14.824 "uuid": "cbf9a261-49cd-46cd-9823-a821234e8c95", 00:26:14.824 "is_configured": true, 00:26:14.824 "data_offset": 2048, 00:26:14.824 "data_size": 63488 00:26:14.824 }, 00:26:14.824 { 00:26:14.824 "name": "BaseBdev4", 00:26:14.824 "uuid": "24808681-0705-4ec3-9f34-a5985f299e1b", 00:26:14.824 "is_configured": true, 00:26:14.824 "data_offset": 2048, 00:26:14.824 "data_size": 63488 00:26:14.824 } 00:26:14.824 ] 00:26:14.824 }' 00:26:14.824 04:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:14.824 04:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:15.389 04:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:26:15.389 04:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:26:15.389 04:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:15.389 04:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:15.389 04:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:15.389 04:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:26:15.389 04:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:15.389 04:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:26:15.647 [2024-07-23 04:21:24.336040] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:15.647 04:21:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:15.647 "name": "Existed_Raid", 00:26:15.647 "aliases": [ 00:26:15.647 "8e771e62-44a0-4de3-a2e4-4ff7dadfd824" 00:26:15.647 ], 00:26:15.647 "product_name": "Raid Volume", 00:26:15.647 "block_size": 512, 00:26:15.647 "num_blocks": 63488, 00:26:15.647 "uuid": "8e771e62-44a0-4de3-a2e4-4ff7dadfd824", 00:26:15.647 "assigned_rate_limits": { 00:26:15.647 "rw_ios_per_sec": 0, 00:26:15.647 "rw_mbytes_per_sec": 0, 00:26:15.647 "r_mbytes_per_sec": 0, 00:26:15.647 "w_mbytes_per_sec": 0 00:26:15.647 }, 00:26:15.647 "claimed": false, 00:26:15.647 "zoned": false, 00:26:15.647 "supported_io_types": { 00:26:15.647 "read": true, 00:26:15.647 "write": true, 00:26:15.647 "unmap": false, 00:26:15.647 "flush": false, 00:26:15.647 "reset": true, 00:26:15.647 "nvme_admin": false, 00:26:15.647 "nvme_io": false, 00:26:15.647 "nvme_io_md": false, 00:26:15.647 "write_zeroes": true, 00:26:15.647 "zcopy": false, 00:26:15.647 "get_zone_info": false, 00:26:15.647 "zone_management": false, 00:26:15.647 "zone_append": false, 00:26:15.647 "compare": false, 00:26:15.647 "compare_and_write": false, 00:26:15.647 "abort": false, 00:26:15.647 "seek_hole": false, 00:26:15.647 "seek_data": false, 00:26:15.647 "copy": false, 00:26:15.647 "nvme_iov_md": false 00:26:15.647 }, 00:26:15.647 "memory_domains": [ 00:26:15.647 { 00:26:15.647 "dma_device_id": "system", 00:26:15.647 "dma_device_type": 1 00:26:15.647 }, 00:26:15.647 { 00:26:15.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:15.647 "dma_device_type": 2 00:26:15.647 }, 00:26:15.647 { 00:26:15.647 "dma_device_id": "system", 00:26:15.647 "dma_device_type": 1 00:26:15.647 }, 00:26:15.647 { 00:26:15.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:15.647 "dma_device_type": 2 00:26:15.647 }, 00:26:15.647 { 00:26:15.647 "dma_device_id": "system", 00:26:15.647 "dma_device_type": 1 00:26:15.647 }, 00:26:15.647 { 00:26:15.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:26:15.647 "dma_device_type": 2 00:26:15.647 }, 00:26:15.647 { 00:26:15.647 "dma_device_id": "system", 00:26:15.647 "dma_device_type": 1 00:26:15.647 }, 00:26:15.647 { 00:26:15.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:15.647 "dma_device_type": 2 00:26:15.647 } 00:26:15.647 ], 00:26:15.647 "driver_specific": { 00:26:15.647 "raid": { 00:26:15.647 "uuid": "8e771e62-44a0-4de3-a2e4-4ff7dadfd824", 00:26:15.647 "strip_size_kb": 0, 00:26:15.647 "state": "online", 00:26:15.647 "raid_level": "raid1", 00:26:15.647 "superblock": true, 00:26:15.647 "num_base_bdevs": 4, 00:26:15.647 "num_base_bdevs_discovered": 4, 00:26:15.647 "num_base_bdevs_operational": 4, 00:26:15.647 "base_bdevs_list": [ 00:26:15.647 { 00:26:15.647 "name": "NewBaseBdev", 00:26:15.647 "uuid": "d794eacc-ee5f-45e1-ba7a-55515f562290", 00:26:15.647 "is_configured": true, 00:26:15.647 "data_offset": 2048, 00:26:15.647 "data_size": 63488 00:26:15.647 }, 00:26:15.647 { 00:26:15.647 "name": "BaseBdev2", 00:26:15.647 "uuid": "97081252-936f-434c-87e0-6cac7bef0c3d", 00:26:15.647 "is_configured": true, 00:26:15.647 "data_offset": 2048, 00:26:15.647 "data_size": 63488 00:26:15.647 }, 00:26:15.647 { 00:26:15.647 "name": "BaseBdev3", 00:26:15.647 "uuid": "cbf9a261-49cd-46cd-9823-a821234e8c95", 00:26:15.647 "is_configured": true, 00:26:15.647 "data_offset": 2048, 00:26:15.647 "data_size": 63488 00:26:15.647 }, 00:26:15.647 { 00:26:15.647 "name": "BaseBdev4", 00:26:15.647 "uuid": "24808681-0705-4ec3-9f34-a5985f299e1b", 00:26:15.647 "is_configured": true, 00:26:15.647 "data_offset": 2048, 00:26:15.647 "data_size": 63488 00:26:15.647 } 00:26:15.647 ] 00:26:15.647 } 00:26:15.647 } 00:26:15.647 }' 00:26:15.647 04:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:15.647 04:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:26:15.647 BaseBdev2 
00:26:15.647 BaseBdev3 00:26:15.647 BaseBdev4' 00:26:15.647 04:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:15.647 04:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:26:15.647 04:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:15.905 04:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:15.905 "name": "NewBaseBdev", 00:26:15.905 "aliases": [ 00:26:15.905 "d794eacc-ee5f-45e1-ba7a-55515f562290" 00:26:15.905 ], 00:26:15.905 "product_name": "Malloc disk", 00:26:15.905 "block_size": 512, 00:26:15.905 "num_blocks": 65536, 00:26:15.906 "uuid": "d794eacc-ee5f-45e1-ba7a-55515f562290", 00:26:15.906 "assigned_rate_limits": { 00:26:15.906 "rw_ios_per_sec": 0, 00:26:15.906 "rw_mbytes_per_sec": 0, 00:26:15.906 "r_mbytes_per_sec": 0, 00:26:15.906 "w_mbytes_per_sec": 0 00:26:15.906 }, 00:26:15.906 "claimed": true, 00:26:15.906 "claim_type": "exclusive_write", 00:26:15.906 "zoned": false, 00:26:15.906 "supported_io_types": { 00:26:15.906 "read": true, 00:26:15.906 "write": true, 00:26:15.906 "unmap": true, 00:26:15.906 "flush": true, 00:26:15.906 "reset": true, 00:26:15.906 "nvme_admin": false, 00:26:15.906 "nvme_io": false, 00:26:15.906 "nvme_io_md": false, 00:26:15.906 "write_zeroes": true, 00:26:15.906 "zcopy": true, 00:26:15.906 "get_zone_info": false, 00:26:15.906 "zone_management": false, 00:26:15.906 "zone_append": false, 00:26:15.906 "compare": false, 00:26:15.906 "compare_and_write": false, 00:26:15.906 "abort": true, 00:26:15.906 "seek_hole": false, 00:26:15.906 "seek_data": false, 00:26:15.906 "copy": true, 00:26:15.906 "nvme_iov_md": false 00:26:15.906 }, 00:26:15.906 "memory_domains": [ 00:26:15.906 { 00:26:15.906 "dma_device_id": "system", 00:26:15.906 "dma_device_type": 1 
00:26:15.906 }, 00:26:15.906 { 00:26:15.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:15.906 "dma_device_type": 2 00:26:15.906 } 00:26:15.906 ], 00:26:15.906 "driver_specific": {} 00:26:15.906 }' 00:26:15.906 04:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:15.906 04:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:16.163 04:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:16.163 04:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:16.163 04:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:16.163 04:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:16.163 04:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:16.163 04:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:16.164 04:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:16.164 04:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:16.164 04:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:16.422 04:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:16.422 04:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:16.422 04:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:26:16.422 04:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:16.679 04:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # 
base_bdev_info='{ 00:26:16.679 "name": "BaseBdev2", 00:26:16.679 "aliases": [ 00:26:16.679 "97081252-936f-434c-87e0-6cac7bef0c3d" 00:26:16.679 ], 00:26:16.679 "product_name": "Malloc disk", 00:26:16.679 "block_size": 512, 00:26:16.679 "num_blocks": 65536, 00:26:16.679 "uuid": "97081252-936f-434c-87e0-6cac7bef0c3d", 00:26:16.679 "assigned_rate_limits": { 00:26:16.679 "rw_ios_per_sec": 0, 00:26:16.679 "rw_mbytes_per_sec": 0, 00:26:16.679 "r_mbytes_per_sec": 0, 00:26:16.679 "w_mbytes_per_sec": 0 00:26:16.679 }, 00:26:16.679 "claimed": true, 00:26:16.680 "claim_type": "exclusive_write", 00:26:16.680 "zoned": false, 00:26:16.680 "supported_io_types": { 00:26:16.680 "read": true, 00:26:16.680 "write": true, 00:26:16.680 "unmap": true, 00:26:16.680 "flush": true, 00:26:16.680 "reset": true, 00:26:16.680 "nvme_admin": false, 00:26:16.680 "nvme_io": false, 00:26:16.680 "nvme_io_md": false, 00:26:16.680 "write_zeroes": true, 00:26:16.680 "zcopy": true, 00:26:16.680 "get_zone_info": false, 00:26:16.680 "zone_management": false, 00:26:16.680 "zone_append": false, 00:26:16.680 "compare": false, 00:26:16.680 "compare_and_write": false, 00:26:16.680 "abort": true, 00:26:16.680 "seek_hole": false, 00:26:16.680 "seek_data": false, 00:26:16.680 "copy": true, 00:26:16.680 "nvme_iov_md": false 00:26:16.680 }, 00:26:16.680 "memory_domains": [ 00:26:16.680 { 00:26:16.680 "dma_device_id": "system", 00:26:16.680 "dma_device_type": 1 00:26:16.680 }, 00:26:16.680 { 00:26:16.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:16.680 "dma_device_type": 2 00:26:16.680 } 00:26:16.680 ], 00:26:16.680 "driver_specific": {} 00:26:16.680 }' 00:26:16.680 04:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:16.680 04:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:16.680 04:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:16.680 04:21:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:16.680 04:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:16.680 04:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:16.680 04:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:16.680 04:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:16.938 04:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:16.938 04:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:16.938 04:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:16.938 04:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:16.938 04:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:16.938 04:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:26:16.938 04:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:17.195 04:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:17.195 "name": "BaseBdev3", 00:26:17.195 "aliases": [ 00:26:17.195 "cbf9a261-49cd-46cd-9823-a821234e8c95" 00:26:17.195 ], 00:26:17.195 "product_name": "Malloc disk", 00:26:17.195 "block_size": 512, 00:26:17.195 "num_blocks": 65536, 00:26:17.195 "uuid": "cbf9a261-49cd-46cd-9823-a821234e8c95", 00:26:17.195 "assigned_rate_limits": { 00:26:17.195 "rw_ios_per_sec": 0, 00:26:17.195 "rw_mbytes_per_sec": 0, 00:26:17.195 "r_mbytes_per_sec": 0, 00:26:17.195 "w_mbytes_per_sec": 0 00:26:17.195 }, 00:26:17.195 "claimed": true, 
00:26:17.195 "claim_type": "exclusive_write", 00:26:17.195 "zoned": false, 00:26:17.195 "supported_io_types": { 00:26:17.195 "read": true, 00:26:17.195 "write": true, 00:26:17.195 "unmap": true, 00:26:17.195 "flush": true, 00:26:17.195 "reset": true, 00:26:17.195 "nvme_admin": false, 00:26:17.195 "nvme_io": false, 00:26:17.195 "nvme_io_md": false, 00:26:17.195 "write_zeroes": true, 00:26:17.195 "zcopy": true, 00:26:17.195 "get_zone_info": false, 00:26:17.195 "zone_management": false, 00:26:17.195 "zone_append": false, 00:26:17.195 "compare": false, 00:26:17.195 "compare_and_write": false, 00:26:17.195 "abort": true, 00:26:17.195 "seek_hole": false, 00:26:17.195 "seek_data": false, 00:26:17.195 "copy": true, 00:26:17.195 "nvme_iov_md": false 00:26:17.195 }, 00:26:17.195 "memory_domains": [ 00:26:17.195 { 00:26:17.195 "dma_device_id": "system", 00:26:17.195 "dma_device_type": 1 00:26:17.195 }, 00:26:17.195 { 00:26:17.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:17.195 "dma_device_type": 2 00:26:17.195 } 00:26:17.195 ], 00:26:17.195 "driver_specific": {} 00:26:17.195 }' 00:26:17.195 04:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:17.195 04:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:17.195 04:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:17.195 04:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:17.195 04:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:17.195 04:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:17.195 04:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:17.195 04:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:17.453 04:21:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:17.453 04:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:17.453 04:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:17.453 04:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:17.453 04:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:17.453 04:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:26:17.453 04:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:18.018 04:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:18.018 "name": "BaseBdev4", 00:26:18.018 "aliases": [ 00:26:18.018 "24808681-0705-4ec3-9f34-a5985f299e1b" 00:26:18.018 ], 00:26:18.018 "product_name": "Malloc disk", 00:26:18.018 "block_size": 512, 00:26:18.018 "num_blocks": 65536, 00:26:18.018 "uuid": "24808681-0705-4ec3-9f34-a5985f299e1b", 00:26:18.018 "assigned_rate_limits": { 00:26:18.018 "rw_ios_per_sec": 0, 00:26:18.018 "rw_mbytes_per_sec": 0, 00:26:18.018 "r_mbytes_per_sec": 0, 00:26:18.018 "w_mbytes_per_sec": 0 00:26:18.018 }, 00:26:18.018 "claimed": true, 00:26:18.018 "claim_type": "exclusive_write", 00:26:18.018 "zoned": false, 00:26:18.018 "supported_io_types": { 00:26:18.018 "read": true, 00:26:18.018 "write": true, 00:26:18.018 "unmap": true, 00:26:18.018 "flush": true, 00:26:18.018 "reset": true, 00:26:18.018 "nvme_admin": false, 00:26:18.018 "nvme_io": false, 00:26:18.018 "nvme_io_md": false, 00:26:18.018 "write_zeroes": true, 00:26:18.018 "zcopy": true, 00:26:18.018 "get_zone_info": false, 00:26:18.018 "zone_management": false, 00:26:18.018 "zone_append": false, 00:26:18.018 "compare": false, 00:26:18.018 
"compare_and_write": false, 00:26:18.018 "abort": true, 00:26:18.018 "seek_hole": false, 00:26:18.018 "seek_data": false, 00:26:18.018 "copy": true, 00:26:18.018 "nvme_iov_md": false 00:26:18.018 }, 00:26:18.018 "memory_domains": [ 00:26:18.018 { 00:26:18.018 "dma_device_id": "system", 00:26:18.018 "dma_device_type": 1 00:26:18.018 }, 00:26:18.018 { 00:26:18.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:18.018 "dma_device_type": 2 00:26:18.018 } 00:26:18.018 ], 00:26:18.018 "driver_specific": {} 00:26:18.018 }' 00:26:18.018 04:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:18.018 04:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:18.018 04:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:18.018 04:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:18.018 04:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:18.018 04:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:18.018 04:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:18.018 04:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:18.276 04:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:18.276 04:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:18.276 04:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:18.276 04:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:18.276 04:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete 
Existed_Raid 00:26:18.534 [2024-07-23 04:21:27.139247] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:18.534 [2024-07-23 04:21:27.139279] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:18.534 [2024-07-23 04:21:27.139361] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:18.534 [2024-07-23 04:21:27.139705] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:18.534 [2024-07-23 04:21:27.139726] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042080 name Existed_Raid, state offline 00:26:18.534 04:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 2744404 00:26:18.534 04:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 2744404 ']' 00:26:18.534 04:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 2744404 00:26:18.534 04:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:26:18.534 04:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:18.534 04:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2744404 00:26:18.534 04:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:18.534 04:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:18.534 04:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2744404' 00:26:18.534 killing process with pid 2744404 00:26:18.534 04:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 2744404 00:26:18.534 [2024-07-23 04:21:27.216032] 
bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:18.534 04:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 2744404 00:26:19.099 [2024-07-23 04:21:27.672630] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:20.998 04:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:26:20.998 00:26:20.998 real 0m33.795s 00:26:20.998 user 0m59.274s 00:26:20.998 sys 0m5.744s 00:26:20.998 04:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:20.998 04:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:20.998 ************************************ 00:26:20.998 END TEST raid_state_function_test_sb 00:26:20.998 ************************************ 00:26:20.998 04:21:29 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:26:20.998 04:21:29 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:26:20.998 04:21:29 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:26:20.998 04:21:29 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:20.998 04:21:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:20.998 ************************************ 00:26:20.998 START TEST raid_superblock_test 00:26:20.998 ************************************ 00:26:20.998 04:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 4 00:26:20.998 04:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:26:20.998 04:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:26:20.998 04:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:26:20.998 04:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:26:20.998 
04:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:26:20.998 04:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:26:20.998 04:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:26:20.998 04:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:26:20.998 04:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:26:20.998 04:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:26:20.998 04:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:26:20.998 04:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:26:20.998 04:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:26:20.998 04:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:26:20.998 04:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:26:20.998 04:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=2751325 00:26:20.998 04:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 2751325 /var/tmp/spdk-raid.sock 00:26:20.998 04:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:26:20.998 04:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 2751325 ']' 00:26:20.998 04:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:20.998 04:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:20.998 04:21:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:20.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:20.998 04:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:20.998 04:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:20.998 [2024-07-23 04:21:29.566839] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:26:20.998 [2024-07-23 04:21:29.566960] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2751325 ] 00:26:20.998 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.998 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:20.998 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.998 EAL: Requested device 0000:3d:01.1 cannot be used 00:26:20.998 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.998 EAL: Requested device 0000:3d:01.2 cannot be used 00:26:20.998 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.998 EAL: Requested device 0000:3d:01.3 cannot be used 00:26:20.998 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.998 EAL: Requested device 0000:3d:01.4 cannot be used 00:26:20.998 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.998 EAL: Requested device 0000:3d:01.5 cannot be used 00:26:20.998 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.999 EAL: Requested device 0000:3d:01.6 cannot be used 00:26:20.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.999 EAL: Requested device 0000:3d:01.7 cannot be used 
00:26:20.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.999 EAL: Requested device 0000:3d:02.0 cannot be used 00:26:20.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.999 EAL: Requested device 0000:3d:02.1 cannot be used 00:26:20.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.999 EAL: Requested device 0000:3d:02.2 cannot be used 00:26:20.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.999 EAL: Requested device 0000:3d:02.3 cannot be used 00:26:20.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.999 EAL: Requested device 0000:3d:02.4 cannot be used 00:26:20.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.999 EAL: Requested device 0000:3d:02.5 cannot be used 00:26:20.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.999 EAL: Requested device 0000:3d:02.6 cannot be used 00:26:20.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.999 EAL: Requested device 0000:3d:02.7 cannot be used 00:26:20.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.999 EAL: Requested device 0000:3f:01.0 cannot be used 00:26:20.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.999 EAL: Requested device 0000:3f:01.1 cannot be used 00:26:20.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.999 EAL: Requested device 0000:3f:01.2 cannot be used 00:26:20.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.999 EAL: Requested device 0000:3f:01.3 cannot be used 00:26:20.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.999 EAL: Requested device 0000:3f:01.4 cannot be used 00:26:20.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.999 EAL: Requested device 0000:3f:01.5 cannot be used 00:26:20.999 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.999 EAL: Requested device 0000:3f:01.6 cannot be used 00:26:20.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.999 EAL: Requested device 0000:3f:01.7 cannot be used 00:26:20.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.999 EAL: Requested device 0000:3f:02.0 cannot be used 00:26:20.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.999 EAL: Requested device 0000:3f:02.1 cannot be used 00:26:20.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.999 EAL: Requested device 0000:3f:02.2 cannot be used 00:26:20.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.999 EAL: Requested device 0000:3f:02.3 cannot be used 00:26:20.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.999 EAL: Requested device 0000:3f:02.4 cannot be used 00:26:20.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.999 EAL: Requested device 0000:3f:02.5 cannot be used 00:26:20.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.999 EAL: Requested device 0000:3f:02.6 cannot be used 00:26:20.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:20.999 EAL: Requested device 0000:3f:02.7 cannot be used 00:26:21.256 [2024-07-23 04:21:29.791895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.514 [2024-07-23 04:21:30.080045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:21.771 [2024-07-23 04:21:30.424052] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:21.771 [2024-07-23 04:21:30.424088] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:22.028 04:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:22.028 04:21:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@862 -- # return 0 00:26:22.028 04:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:26:22.028 04:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:26:22.029 04:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:26:22.029 04:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:26:22.029 04:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:26:22.029 04:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:22.029 04:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:26:22.029 04:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:22.029 04:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:26:22.286 malloc1 00:26:22.286 04:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:22.543 [2024-07-23 04:21:31.100774] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:22.543 [2024-07-23 04:21:31.100841] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:22.543 [2024-07-23 04:21:31.100873] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:26:22.543 [2024-07-23 04:21:31.100890] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:22.543 [2024-07-23 04:21:31.103706] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:26:22.543 [2024-07-23 04:21:31.103744] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:22.543 pt1 00:26:22.543 04:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:26:22.543 04:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:26:22.543 04:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:26:22.543 04:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:26:22.543 04:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:26:22.543 04:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:22.543 04:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:26:22.543 04:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:22.543 04:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:26:22.801 malloc2 00:26:22.801 04:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:23.058 [2024-07-23 04:21:31.610901] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:23.058 [2024-07-23 04:21:31.610965] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:23.058 [2024-07-23 04:21:31.610995] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:26:23.058 [2024-07-23 04:21:31.611011] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:23.058 [2024-07-23 04:21:31.613838] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:23.058 [2024-07-23 04:21:31.613881] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:23.058 pt2 00:26:23.058 04:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:26:23.058 04:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:26:23.058 04:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:26:23.058 04:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:26:23.058 04:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:26:23.058 04:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:23.058 04:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:26:23.058 04:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:23.058 04:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:26:23.315 malloc3 00:26:23.315 04:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:23.573 [2024-07-23 04:21:32.125551] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:23.573 [2024-07-23 04:21:32.125619] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:23.573 [2024-07-23 04:21:32.125651] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040e80 00:26:23.573 [2024-07-23 04:21:32.125667] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:23.573 [2024-07-23 04:21:32.128457] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:23.573 [2024-07-23 04:21:32.128494] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:23.573 pt3 00:26:23.573 04:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:26:23.573 04:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:26:23.573 04:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:26:23.573 04:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:26:23.573 04:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:26:23.573 04:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:23.573 04:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:26:23.573 04:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:23.573 04:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:26:23.830 malloc4 00:26:23.830 04:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:24.089 [2024-07-23 04:21:32.627860] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:24.089 [2024-07-23 04:21:32.627933] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:24.089 [2024-07-23 04:21:32.627961] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041a80 00:26:24.089 [2024-07-23 04:21:32.627976] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:24.089 [2024-07-23 04:21:32.630782] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:24.089 [2024-07-23 04:21:32.630819] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:24.089 pt4 00:26:24.089 04:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:26:24.089 04:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:26:24.089 04:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:26:24.089 [2024-07-23 04:21:32.856575] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:24.089 [2024-07-23 04:21:32.858936] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:24.089 [2024-07-23 04:21:32.859032] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:24.089 [2024-07-23 04:21:32.859089] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:24.089 [2024-07-23 04:21:32.859361] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042080 00:26:24.089 [2024-07-23 04:21:32.859382] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:24.089 [2024-07-23 04:21:32.859752] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:26:24.089 [2024-07-23 04:21:32.860029] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x616000042080 00:26:24.089 [2024-07-23 04:21:32.860048] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000042080 00:26:24.089 [2024-07-23 04:21:32.860254] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:24.347 04:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:26:24.347 04:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:24.347 04:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:24.347 04:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:24.347 04:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:24.347 04:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:24.347 04:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:24.347 04:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:24.347 04:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:24.347 04:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:24.347 04:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:24.347 04:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:24.347 04:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:24.347 "name": "raid_bdev1", 00:26:24.347 "uuid": "2efc66f7-4e61-4b49-ba12-0da79acd1663", 00:26:24.347 "strip_size_kb": 0, 00:26:24.347 "state": "online", 00:26:24.347 
"raid_level": "raid1", 00:26:24.347 "superblock": true, 00:26:24.347 "num_base_bdevs": 4, 00:26:24.347 "num_base_bdevs_discovered": 4, 00:26:24.347 "num_base_bdevs_operational": 4, 00:26:24.347 "base_bdevs_list": [ 00:26:24.347 { 00:26:24.347 "name": "pt1", 00:26:24.347 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:24.347 "is_configured": true, 00:26:24.347 "data_offset": 2048, 00:26:24.347 "data_size": 63488 00:26:24.347 }, 00:26:24.347 { 00:26:24.347 "name": "pt2", 00:26:24.347 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:24.347 "is_configured": true, 00:26:24.347 "data_offset": 2048, 00:26:24.347 "data_size": 63488 00:26:24.347 }, 00:26:24.347 { 00:26:24.347 "name": "pt3", 00:26:24.347 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:24.347 "is_configured": true, 00:26:24.347 "data_offset": 2048, 00:26:24.347 "data_size": 63488 00:26:24.347 }, 00:26:24.347 { 00:26:24.347 "name": "pt4", 00:26:24.347 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:24.347 "is_configured": true, 00:26:24.347 "data_offset": 2048, 00:26:24.347 "data_size": 63488 00:26:24.347 } 00:26:24.347 ] 00:26:24.347 }' 00:26:24.347 04:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:24.347 04:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:24.913 04:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:26:24.913 04:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:26:24.913 04:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:24.913 04:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:24.913 04:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:24.913 04:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:26:24.913 04:21:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:24.913 04:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:25.170 [2024-07-23 04:21:33.871658] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:25.170 04:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:25.170 "name": "raid_bdev1", 00:26:25.170 "aliases": [ 00:26:25.170 "2efc66f7-4e61-4b49-ba12-0da79acd1663" 00:26:25.170 ], 00:26:25.170 "product_name": "Raid Volume", 00:26:25.170 "block_size": 512, 00:26:25.170 "num_blocks": 63488, 00:26:25.170 "uuid": "2efc66f7-4e61-4b49-ba12-0da79acd1663", 00:26:25.170 "assigned_rate_limits": { 00:26:25.170 "rw_ios_per_sec": 0, 00:26:25.170 "rw_mbytes_per_sec": 0, 00:26:25.170 "r_mbytes_per_sec": 0, 00:26:25.170 "w_mbytes_per_sec": 0 00:26:25.170 }, 00:26:25.170 "claimed": false, 00:26:25.170 "zoned": false, 00:26:25.170 "supported_io_types": { 00:26:25.170 "read": true, 00:26:25.170 "write": true, 00:26:25.170 "unmap": false, 00:26:25.170 "flush": false, 00:26:25.170 "reset": true, 00:26:25.170 "nvme_admin": false, 00:26:25.170 "nvme_io": false, 00:26:25.170 "nvme_io_md": false, 00:26:25.170 "write_zeroes": true, 00:26:25.170 "zcopy": false, 00:26:25.170 "get_zone_info": false, 00:26:25.170 "zone_management": false, 00:26:25.170 "zone_append": false, 00:26:25.170 "compare": false, 00:26:25.170 "compare_and_write": false, 00:26:25.170 "abort": false, 00:26:25.170 "seek_hole": false, 00:26:25.170 "seek_data": false, 00:26:25.170 "copy": false, 00:26:25.170 "nvme_iov_md": false 00:26:25.170 }, 00:26:25.170 "memory_domains": [ 00:26:25.170 { 00:26:25.170 "dma_device_id": "system", 00:26:25.170 "dma_device_type": 1 00:26:25.170 }, 00:26:25.170 { 00:26:25.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:25.170 "dma_device_type": 2 
00:26:25.170 }, 00:26:25.170 { 00:26:25.170 "dma_device_id": "system", 00:26:25.170 "dma_device_type": 1 00:26:25.170 }, 00:26:25.170 { 00:26:25.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:25.170 "dma_device_type": 2 00:26:25.170 }, 00:26:25.170 { 00:26:25.170 "dma_device_id": "system", 00:26:25.170 "dma_device_type": 1 00:26:25.170 }, 00:26:25.170 { 00:26:25.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:25.170 "dma_device_type": 2 00:26:25.170 }, 00:26:25.170 { 00:26:25.170 "dma_device_id": "system", 00:26:25.170 "dma_device_type": 1 00:26:25.170 }, 00:26:25.170 { 00:26:25.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:25.170 "dma_device_type": 2 00:26:25.170 } 00:26:25.170 ], 00:26:25.170 "driver_specific": { 00:26:25.170 "raid": { 00:26:25.170 "uuid": "2efc66f7-4e61-4b49-ba12-0da79acd1663", 00:26:25.170 "strip_size_kb": 0, 00:26:25.170 "state": "online", 00:26:25.170 "raid_level": "raid1", 00:26:25.170 "superblock": true, 00:26:25.170 "num_base_bdevs": 4, 00:26:25.170 "num_base_bdevs_discovered": 4, 00:26:25.170 "num_base_bdevs_operational": 4, 00:26:25.170 "base_bdevs_list": [ 00:26:25.170 { 00:26:25.170 "name": "pt1", 00:26:25.170 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:25.170 "is_configured": true, 00:26:25.170 "data_offset": 2048, 00:26:25.170 "data_size": 63488 00:26:25.170 }, 00:26:25.170 { 00:26:25.170 "name": "pt2", 00:26:25.170 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:25.170 "is_configured": true, 00:26:25.170 "data_offset": 2048, 00:26:25.170 "data_size": 63488 00:26:25.170 }, 00:26:25.170 { 00:26:25.170 "name": "pt3", 00:26:25.170 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:25.170 "is_configured": true, 00:26:25.170 "data_offset": 2048, 00:26:25.170 "data_size": 63488 00:26:25.170 }, 00:26:25.170 { 00:26:25.171 "name": "pt4", 00:26:25.171 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:25.171 "is_configured": true, 00:26:25.171 "data_offset": 2048, 00:26:25.171 "data_size": 63488 
00:26:25.171 } 00:26:25.171 ] 00:26:25.171 } 00:26:25.171 } 00:26:25.171 }' 00:26:25.171 04:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:25.171 04:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:26:25.171 pt2 00:26:25.171 pt3 00:26:25.171 pt4' 00:26:25.171 04:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:25.171 04:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:26:25.171 04:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:25.429 04:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:25.429 "name": "pt1", 00:26:25.429 "aliases": [ 00:26:25.429 "00000000-0000-0000-0000-000000000001" 00:26:25.429 ], 00:26:25.429 "product_name": "passthru", 00:26:25.429 "block_size": 512, 00:26:25.429 "num_blocks": 65536, 00:26:25.429 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:25.429 "assigned_rate_limits": { 00:26:25.429 "rw_ios_per_sec": 0, 00:26:25.429 "rw_mbytes_per_sec": 0, 00:26:25.429 "r_mbytes_per_sec": 0, 00:26:25.429 "w_mbytes_per_sec": 0 00:26:25.429 }, 00:26:25.429 "claimed": true, 00:26:25.429 "claim_type": "exclusive_write", 00:26:25.429 "zoned": false, 00:26:25.429 "supported_io_types": { 00:26:25.429 "read": true, 00:26:25.429 "write": true, 00:26:25.429 "unmap": true, 00:26:25.429 "flush": true, 00:26:25.429 "reset": true, 00:26:25.429 "nvme_admin": false, 00:26:25.429 "nvme_io": false, 00:26:25.429 "nvme_io_md": false, 00:26:25.429 "write_zeroes": true, 00:26:25.429 "zcopy": true, 00:26:25.429 "get_zone_info": false, 00:26:25.429 "zone_management": false, 00:26:25.429 "zone_append": false, 00:26:25.429 "compare": false, 00:26:25.429 
"compare_and_write": false, 00:26:25.429 "abort": true, 00:26:25.429 "seek_hole": false, 00:26:25.429 "seek_data": false, 00:26:25.429 "copy": true, 00:26:25.429 "nvme_iov_md": false 00:26:25.429 }, 00:26:25.429 "memory_domains": [ 00:26:25.429 { 00:26:25.429 "dma_device_id": "system", 00:26:25.429 "dma_device_type": 1 00:26:25.429 }, 00:26:25.429 { 00:26:25.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:25.429 "dma_device_type": 2 00:26:25.429 } 00:26:25.429 ], 00:26:25.429 "driver_specific": { 00:26:25.429 "passthru": { 00:26:25.429 "name": "pt1", 00:26:25.429 "base_bdev_name": "malloc1" 00:26:25.429 } 00:26:25.429 } 00:26:25.429 }' 00:26:25.429 04:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:25.687 04:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:25.687 04:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:25.687 04:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:25.687 04:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:25.687 04:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:25.687 04:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:25.687 04:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:25.687 04:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:25.687 04:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:25.687 04:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:25.945 04:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:25.945 04:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:25.945 04:21:34 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:26:25.945 04:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:26.203 04:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:26.203 "name": "pt2", 00:26:26.203 "aliases": [ 00:26:26.203 "00000000-0000-0000-0000-000000000002" 00:26:26.203 ], 00:26:26.203 "product_name": "passthru", 00:26:26.203 "block_size": 512, 00:26:26.203 "num_blocks": 65536, 00:26:26.203 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:26.203 "assigned_rate_limits": { 00:26:26.203 "rw_ios_per_sec": 0, 00:26:26.203 "rw_mbytes_per_sec": 0, 00:26:26.203 "r_mbytes_per_sec": 0, 00:26:26.203 "w_mbytes_per_sec": 0 00:26:26.203 }, 00:26:26.203 "claimed": true, 00:26:26.203 "claim_type": "exclusive_write", 00:26:26.203 "zoned": false, 00:26:26.203 "supported_io_types": { 00:26:26.203 "read": true, 00:26:26.203 "write": true, 00:26:26.203 "unmap": true, 00:26:26.203 "flush": true, 00:26:26.203 "reset": true, 00:26:26.203 "nvme_admin": false, 00:26:26.203 "nvme_io": false, 00:26:26.203 "nvme_io_md": false, 00:26:26.203 "write_zeroes": true, 00:26:26.203 "zcopy": true, 00:26:26.203 "get_zone_info": false, 00:26:26.203 "zone_management": false, 00:26:26.203 "zone_append": false, 00:26:26.203 "compare": false, 00:26:26.203 "compare_and_write": false, 00:26:26.203 "abort": true, 00:26:26.203 "seek_hole": false, 00:26:26.203 "seek_data": false, 00:26:26.204 "copy": true, 00:26:26.204 "nvme_iov_md": false 00:26:26.204 }, 00:26:26.204 "memory_domains": [ 00:26:26.204 { 00:26:26.204 "dma_device_id": "system", 00:26:26.204 "dma_device_type": 1 00:26:26.204 }, 00:26:26.204 { 00:26:26.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:26.204 "dma_device_type": 2 00:26:26.204 } 00:26:26.204 ], 00:26:26.204 "driver_specific": { 00:26:26.204 "passthru": { 00:26:26.204 "name": "pt2", 00:26:26.204 
"base_bdev_name": "malloc2" 00:26:26.204 } 00:26:26.204 } 00:26:26.204 }' 00:26:26.204 04:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:26.204 04:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:26.204 04:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:26.204 04:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:26.204 04:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:26.204 04:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:26.204 04:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:26.204 04:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:26.462 04:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:26.462 04:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:26.462 04:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:26.462 04:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:26.462 04:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:26.462 04:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:26:26.462 04:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:26.720 04:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:26.720 "name": "pt3", 00:26:26.720 "aliases": [ 00:26:26.720 "00000000-0000-0000-0000-000000000003" 00:26:26.720 ], 00:26:26.720 "product_name": "passthru", 00:26:26.720 "block_size": 512, 00:26:26.720 "num_blocks": 65536, 
00:26:26.720 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:26.720 "assigned_rate_limits": { 00:26:26.720 "rw_ios_per_sec": 0, 00:26:26.720 "rw_mbytes_per_sec": 0, 00:26:26.720 "r_mbytes_per_sec": 0, 00:26:26.720 "w_mbytes_per_sec": 0 00:26:26.720 }, 00:26:26.720 "claimed": true, 00:26:26.720 "claim_type": "exclusive_write", 00:26:26.720 "zoned": false, 00:26:26.720 "supported_io_types": { 00:26:26.720 "read": true, 00:26:26.720 "write": true, 00:26:26.720 "unmap": true, 00:26:26.720 "flush": true, 00:26:26.720 "reset": true, 00:26:26.720 "nvme_admin": false, 00:26:26.720 "nvme_io": false, 00:26:26.720 "nvme_io_md": false, 00:26:26.720 "write_zeroes": true, 00:26:26.720 "zcopy": true, 00:26:26.720 "get_zone_info": false, 00:26:26.720 "zone_management": false, 00:26:26.720 "zone_append": false, 00:26:26.720 "compare": false, 00:26:26.720 "compare_and_write": false, 00:26:26.720 "abort": true, 00:26:26.720 "seek_hole": false, 00:26:26.720 "seek_data": false, 00:26:26.720 "copy": true, 00:26:26.720 "nvme_iov_md": false 00:26:26.720 }, 00:26:26.720 "memory_domains": [ 00:26:26.720 { 00:26:26.720 "dma_device_id": "system", 00:26:26.720 "dma_device_type": 1 00:26:26.720 }, 00:26:26.720 { 00:26:26.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:26.720 "dma_device_type": 2 00:26:26.720 } 00:26:26.720 ], 00:26:26.720 "driver_specific": { 00:26:26.720 "passthru": { 00:26:26.720 "name": "pt3", 00:26:26.720 "base_bdev_name": "malloc3" 00:26:26.720 } 00:26:26.720 } 00:26:26.720 }' 00:26:26.720 04:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:26.720 04:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:26.720 04:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:26.720 04:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:26.720 04:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 
00:26:26.720 04:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:26.720 04:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:26.720 04:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:26.979 04:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:26.979 04:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:26.979 04:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:26.979 04:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:26.979 04:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:26.979 04:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:26:26.979 04:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:27.238 04:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:27.238 "name": "pt4", 00:26:27.238 "aliases": [ 00:26:27.238 "00000000-0000-0000-0000-000000000004" 00:26:27.238 ], 00:26:27.238 "product_name": "passthru", 00:26:27.238 "block_size": 512, 00:26:27.238 "num_blocks": 65536, 00:26:27.238 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:27.238 "assigned_rate_limits": { 00:26:27.238 "rw_ios_per_sec": 0, 00:26:27.238 "rw_mbytes_per_sec": 0, 00:26:27.238 "r_mbytes_per_sec": 0, 00:26:27.238 "w_mbytes_per_sec": 0 00:26:27.238 }, 00:26:27.238 "claimed": true, 00:26:27.238 "claim_type": "exclusive_write", 00:26:27.238 "zoned": false, 00:26:27.238 "supported_io_types": { 00:26:27.238 "read": true, 00:26:27.238 "write": true, 00:26:27.238 "unmap": true, 00:26:27.238 "flush": true, 00:26:27.238 "reset": true, 00:26:27.238 "nvme_admin": 
false, 00:26:27.238 "nvme_io": false, 00:26:27.238 "nvme_io_md": false, 00:26:27.238 "write_zeroes": true, 00:26:27.238 "zcopy": true, 00:26:27.238 "get_zone_info": false, 00:26:27.238 "zone_management": false, 00:26:27.238 "zone_append": false, 00:26:27.238 "compare": false, 00:26:27.238 "compare_and_write": false, 00:26:27.238 "abort": true, 00:26:27.238 "seek_hole": false, 00:26:27.238 "seek_data": false, 00:26:27.238 "copy": true, 00:26:27.238 "nvme_iov_md": false 00:26:27.238 }, 00:26:27.238 "memory_domains": [ 00:26:27.238 { 00:26:27.238 "dma_device_id": "system", 00:26:27.238 "dma_device_type": 1 00:26:27.238 }, 00:26:27.238 { 00:26:27.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:27.238 "dma_device_type": 2 00:26:27.238 } 00:26:27.238 ], 00:26:27.238 "driver_specific": { 00:26:27.238 "passthru": { 00:26:27.238 "name": "pt4", 00:26:27.238 "base_bdev_name": "malloc4" 00:26:27.238 } 00:26:27.238 } 00:26:27.238 }' 00:26:27.238 04:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:27.238 04:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:27.238 04:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:27.238 04:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:27.238 04:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:27.238 04:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:27.238 04:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:27.496 04:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:27.496 04:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:27.496 04:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:27.496 04:21:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:27.496 04:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:27.496 04:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:27.496 04:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:26:27.755 [2024-07-23 04:21:36.362401] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:27.755 04:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=2efc66f7-4e61-4b49-ba12-0da79acd1663 00:26:27.755 04:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 2efc66f7-4e61-4b49-ba12-0da79acd1663 ']' 00:26:27.755 04:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:28.014 [2024-07-23 04:21:36.590603] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:28.014 [2024-07-23 04:21:36.590636] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:28.014 [2024-07-23 04:21:36.590729] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:28.014 [2024-07-23 04:21:36.590833] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:28.014 [2024-07-23 04:21:36.590857] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042080 name raid_bdev1, state offline 00:26:28.014 04:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:28.014 04:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r 
'.[]' 00:26:28.272 04:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:26:28.272 04:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:26:28.272 04:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:26:28.272 04:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:26:28.272 04:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:26:28.272 04:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:28.531 04:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:26:28.531 04:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:26:28.789 04:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:26:28.789 04:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:26:29.048 04:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:26:29.048 04:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:26:29.306 04:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:26:29.306 04:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:26:29.306 04:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:26:29.306 04:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:26:29.306 04:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:29.306 04:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:29.306 04:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:29.307 04:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:29.307 04:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:29.307 04:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:29.307 04:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:29.307 04:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:26:29.307 04:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:26:29.565 [2024-07-23 
04:21:38.158772] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:26:29.565 [2024-07-23 04:21:38.161111] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:26:29.565 [2024-07-23 04:21:38.161182] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:26:29.565 [2024-07-23 04:21:38.161230] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:26:29.565 [2024-07-23 04:21:38.161289] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:26:29.565 [2024-07-23 04:21:38.161345] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:26:29.565 [2024-07-23 04:21:38.161373] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:26:29.565 [2024-07-23 04:21:38.161404] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:26:29.565 [2024-07-23 04:21:38.161426] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:29.565 [2024-07-23 04:21:38.161444] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042680 name raid_bdev1, state configuring 00:26:29.565 request: 00:26:29.565 { 00:26:29.565 "name": "raid_bdev1", 00:26:29.565 "raid_level": "raid1", 00:26:29.565 "base_bdevs": [ 00:26:29.566 "malloc1", 00:26:29.566 "malloc2", 00:26:29.566 "malloc3", 00:26:29.566 "malloc4" 00:26:29.566 ], 00:26:29.566 "superblock": false, 00:26:29.566 "method": "bdev_raid_create", 00:26:29.566 "req_id": 1 00:26:29.566 } 00:26:29.566 Got JSON-RPC error response 00:26:29.566 response: 00:26:29.566 { 00:26:29.566 "code": -17, 00:26:29.566 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:26:29.566 } 00:26:29.566 04:21:38 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:26:29.566 04:21:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:29.566 04:21:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:29.566 04:21:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:29.566 04:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:29.566 04:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:26:29.824 04:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:26:29.824 04:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:26:29.824 04:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:30.083 [2024-07-23 04:21:38.611902] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:30.083 [2024-07-23 04:21:38.611973] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:30.083 [2024-07-23 04:21:38.611998] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042c80 00:26:30.083 [2024-07-23 04:21:38.612016] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:30.083 [2024-07-23 04:21:38.614874] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:30.083 [2024-07-23 04:21:38.614915] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:30.083 [2024-07-23 04:21:38.615027] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:26:30.083 [2024-07-23 
04:21:38.615106] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:30.083 pt1 00:26:30.083 04:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:26:30.083 04:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:30.083 04:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:30.083 04:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:30.083 04:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:30.083 04:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:30.083 04:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:30.083 04:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:30.083 04:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:30.083 04:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:30.083 04:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:30.083 04:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:30.083 04:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:30.083 "name": "raid_bdev1", 00:26:30.083 "uuid": "2efc66f7-4e61-4b49-ba12-0da79acd1663", 00:26:30.083 "strip_size_kb": 0, 00:26:30.083 "state": "configuring", 00:26:30.083 "raid_level": "raid1", 00:26:30.083 "superblock": true, 00:26:30.083 "num_base_bdevs": 4, 00:26:30.083 "num_base_bdevs_discovered": 1, 00:26:30.083 
"num_base_bdevs_operational": 4, 00:26:30.083 "base_bdevs_list": [ 00:26:30.083 { 00:26:30.083 "name": "pt1", 00:26:30.083 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:30.083 "is_configured": true, 00:26:30.083 "data_offset": 2048, 00:26:30.083 "data_size": 63488 00:26:30.083 }, 00:26:30.083 { 00:26:30.083 "name": null, 00:26:30.083 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:30.083 "is_configured": false, 00:26:30.083 "data_offset": 2048, 00:26:30.083 "data_size": 63488 00:26:30.083 }, 00:26:30.083 { 00:26:30.083 "name": null, 00:26:30.083 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:30.083 "is_configured": false, 00:26:30.083 "data_offset": 2048, 00:26:30.083 "data_size": 63488 00:26:30.083 }, 00:26:30.083 { 00:26:30.083 "name": null, 00:26:30.083 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:30.083 "is_configured": false, 00:26:30.083 "data_offset": 2048, 00:26:30.083 "data_size": 63488 00:26:30.083 } 00:26:30.083 ] 00:26:30.083 }' 00:26:30.083 04:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:30.083 04:21:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:30.651 04:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:26:30.651 04:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:30.910 [2024-07-23 04:21:39.506602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:30.910 [2024-07-23 04:21:39.506674] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:30.910 [2024-07-23 04:21:39.506700] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043580 00:26:30.910 [2024-07-23 04:21:39.506718] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:26:30.910 [2024-07-23 04:21:39.507320] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:30.910 [2024-07-23 04:21:39.507351] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:30.910 [2024-07-23 04:21:39.507442] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:30.910 [2024-07-23 04:21:39.507478] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:30.910 pt2 00:26:30.910 04:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:31.168 [2024-07-23 04:21:39.735259] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:26:31.168 04:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:26:31.168 04:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:31.168 04:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:31.168 04:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:31.168 04:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:31.168 04:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:31.168 04:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:31.168 04:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:31.168 04:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:31.169 04:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:31.169 04:21:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:31.169 04:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:31.427 04:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:31.427 "name": "raid_bdev1", 00:26:31.427 "uuid": "2efc66f7-4e61-4b49-ba12-0da79acd1663", 00:26:31.427 "strip_size_kb": 0, 00:26:31.427 "state": "configuring", 00:26:31.427 "raid_level": "raid1", 00:26:31.427 "superblock": true, 00:26:31.427 "num_base_bdevs": 4, 00:26:31.427 "num_base_bdevs_discovered": 1, 00:26:31.427 "num_base_bdevs_operational": 4, 00:26:31.427 "base_bdevs_list": [ 00:26:31.427 { 00:26:31.427 "name": "pt1", 00:26:31.427 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:31.427 "is_configured": true, 00:26:31.427 "data_offset": 2048, 00:26:31.427 "data_size": 63488 00:26:31.427 }, 00:26:31.427 { 00:26:31.427 "name": null, 00:26:31.427 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:31.427 "is_configured": false, 00:26:31.427 "data_offset": 2048, 00:26:31.427 "data_size": 63488 00:26:31.427 }, 00:26:31.427 { 00:26:31.427 "name": null, 00:26:31.427 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:31.427 "is_configured": false, 00:26:31.427 "data_offset": 2048, 00:26:31.427 "data_size": 63488 00:26:31.427 }, 00:26:31.427 { 00:26:31.427 "name": null, 00:26:31.427 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:31.427 "is_configured": false, 00:26:31.427 "data_offset": 2048, 00:26:31.427 "data_size": 63488 00:26:31.427 } 00:26:31.427 ] 00:26:31.427 }' 00:26:31.427 04:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:31.427 04:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:31.994 04:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:26:31.994 04:21:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:26:31.994 04:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:31.994 [2024-07-23 04:21:40.745941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:31.994 [2024-07-23 04:21:40.746011] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:31.994 [2024-07-23 04:21:40.746045] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043880 00:26:31.994 [2024-07-23 04:21:40.746061] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:31.994 [2024-07-23 04:21:40.746668] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:31.994 [2024-07-23 04:21:40.746693] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:31.994 [2024-07-23 04:21:40.746805] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:31.994 [2024-07-23 04:21:40.746836] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:31.994 pt2 00:26:31.994 04:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:26:31.994 04:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:26:31.994 04:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:32.253 [2024-07-23 04:21:40.970550] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:32.253 [2024-07-23 04:21:40.970603] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:26:32.253 [2024-07-23 04:21:40.970631] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043b80 00:26:32.253 [2024-07-23 04:21:40.970646] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:32.253 [2024-07-23 04:21:40.971232] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:32.253 [2024-07-23 04:21:40.971259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:32.253 [2024-07-23 04:21:40.971346] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:26:32.253 [2024-07-23 04:21:40.971374] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:32.253 pt3 00:26:32.253 04:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:26:32.253 04:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:26:32.253 04:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:32.511 [2024-07-23 04:21:41.195177] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:32.511 [2024-07-23 04:21:41.195224] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:32.512 [2024-07-23 04:21:41.195247] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043e80 00:26:32.512 [2024-07-23 04:21:41.195262] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:32.512 [2024-07-23 04:21:41.195709] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:32.512 [2024-07-23 04:21:41.195733] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:32.512 [2024-07-23 04:21:41.195811] 
bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:26:32.512 [2024-07-23 04:21:41.195843] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:32.512 [2024-07-23 04:21:41.196046] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000043280 00:26:32.512 [2024-07-23 04:21:41.196060] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:32.512 [2024-07-23 04:21:41.196417] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:26:32.512 [2024-07-23 04:21:41.196665] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000043280 00:26:32.512 [2024-07-23 04:21:41.196685] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000043280 00:26:32.512 [2024-07-23 04:21:41.196871] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:32.512 pt4 00:26:32.512 04:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:26:32.512 04:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:26:32.512 04:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:26:32.512 04:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:32.512 04:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:32.512 04:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:32.512 04:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:32.512 04:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:32.512 04:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:32.512 
04:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:32.512 04:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:32.512 04:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:32.512 04:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:32.512 04:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:32.771 04:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:32.771 "name": "raid_bdev1", 00:26:32.771 "uuid": "2efc66f7-4e61-4b49-ba12-0da79acd1663", 00:26:32.771 "strip_size_kb": 0, 00:26:32.771 "state": "online", 00:26:32.771 "raid_level": "raid1", 00:26:32.771 "superblock": true, 00:26:32.771 "num_base_bdevs": 4, 00:26:32.771 "num_base_bdevs_discovered": 4, 00:26:32.771 "num_base_bdevs_operational": 4, 00:26:32.771 "base_bdevs_list": [ 00:26:32.771 { 00:26:32.771 "name": "pt1", 00:26:32.771 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:32.771 "is_configured": true, 00:26:32.771 "data_offset": 2048, 00:26:32.771 "data_size": 63488 00:26:32.771 }, 00:26:32.771 { 00:26:32.771 "name": "pt2", 00:26:32.771 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:32.771 "is_configured": true, 00:26:32.771 "data_offset": 2048, 00:26:32.771 "data_size": 63488 00:26:32.771 }, 00:26:32.771 { 00:26:32.771 "name": "pt3", 00:26:32.771 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:32.771 "is_configured": true, 00:26:32.771 "data_offset": 2048, 00:26:32.771 "data_size": 63488 00:26:32.771 }, 00:26:32.771 { 00:26:32.771 "name": "pt4", 00:26:32.771 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:32.771 "is_configured": true, 00:26:32.771 "data_offset": 2048, 00:26:32.771 "data_size": 63488 00:26:32.771 } 
00:26:32.771 ] 00:26:32.771 }' 00:26:32.771 04:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:32.771 04:21:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:33.338 04:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:26:33.338 04:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:26:33.338 04:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:33.338 04:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:33.338 04:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:33.338 04:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:26:33.338 04:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:33.338 04:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:33.597 [2024-07-23 04:21:42.254475] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:33.597 04:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:33.597 "name": "raid_bdev1", 00:26:33.597 "aliases": [ 00:26:33.597 "2efc66f7-4e61-4b49-ba12-0da79acd1663" 00:26:33.597 ], 00:26:33.597 "product_name": "Raid Volume", 00:26:33.597 "block_size": 512, 00:26:33.597 "num_blocks": 63488, 00:26:33.597 "uuid": "2efc66f7-4e61-4b49-ba12-0da79acd1663", 00:26:33.597 "assigned_rate_limits": { 00:26:33.597 "rw_ios_per_sec": 0, 00:26:33.597 "rw_mbytes_per_sec": 0, 00:26:33.597 "r_mbytes_per_sec": 0, 00:26:33.597 "w_mbytes_per_sec": 0 00:26:33.597 }, 00:26:33.597 "claimed": false, 00:26:33.597 "zoned": false, 00:26:33.597 "supported_io_types": { 00:26:33.597 
"read": true, 00:26:33.597 "write": true, 00:26:33.597 "unmap": false, 00:26:33.597 "flush": false, 00:26:33.597 "reset": true, 00:26:33.597 "nvme_admin": false, 00:26:33.597 "nvme_io": false, 00:26:33.597 "nvme_io_md": false, 00:26:33.597 "write_zeroes": true, 00:26:33.597 "zcopy": false, 00:26:33.597 "get_zone_info": false, 00:26:33.597 "zone_management": false, 00:26:33.597 "zone_append": false, 00:26:33.597 "compare": false, 00:26:33.597 "compare_and_write": false, 00:26:33.597 "abort": false, 00:26:33.597 "seek_hole": false, 00:26:33.597 "seek_data": false, 00:26:33.597 "copy": false, 00:26:33.597 "nvme_iov_md": false 00:26:33.597 }, 00:26:33.597 "memory_domains": [ 00:26:33.597 { 00:26:33.597 "dma_device_id": "system", 00:26:33.597 "dma_device_type": 1 00:26:33.597 }, 00:26:33.597 { 00:26:33.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:33.597 "dma_device_type": 2 00:26:33.597 }, 00:26:33.597 { 00:26:33.597 "dma_device_id": "system", 00:26:33.597 "dma_device_type": 1 00:26:33.597 }, 00:26:33.597 { 00:26:33.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:33.597 "dma_device_type": 2 00:26:33.597 }, 00:26:33.597 { 00:26:33.597 "dma_device_id": "system", 00:26:33.597 "dma_device_type": 1 00:26:33.597 }, 00:26:33.597 { 00:26:33.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:33.597 "dma_device_type": 2 00:26:33.597 }, 00:26:33.597 { 00:26:33.597 "dma_device_id": "system", 00:26:33.597 "dma_device_type": 1 00:26:33.597 }, 00:26:33.597 { 00:26:33.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:33.597 "dma_device_type": 2 00:26:33.597 } 00:26:33.597 ], 00:26:33.597 "driver_specific": { 00:26:33.597 "raid": { 00:26:33.597 "uuid": "2efc66f7-4e61-4b49-ba12-0da79acd1663", 00:26:33.597 "strip_size_kb": 0, 00:26:33.597 "state": "online", 00:26:33.597 "raid_level": "raid1", 00:26:33.597 "superblock": true, 00:26:33.597 "num_base_bdevs": 4, 00:26:33.597 "num_base_bdevs_discovered": 4, 00:26:33.597 "num_base_bdevs_operational": 4, 00:26:33.597 
"base_bdevs_list": [ 00:26:33.597 { 00:26:33.597 "name": "pt1", 00:26:33.597 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:33.597 "is_configured": true, 00:26:33.597 "data_offset": 2048, 00:26:33.597 "data_size": 63488 00:26:33.597 }, 00:26:33.597 { 00:26:33.597 "name": "pt2", 00:26:33.597 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:33.597 "is_configured": true, 00:26:33.597 "data_offset": 2048, 00:26:33.597 "data_size": 63488 00:26:33.597 }, 00:26:33.597 { 00:26:33.597 "name": "pt3", 00:26:33.597 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:33.597 "is_configured": true, 00:26:33.597 "data_offset": 2048, 00:26:33.597 "data_size": 63488 00:26:33.597 }, 00:26:33.597 { 00:26:33.597 "name": "pt4", 00:26:33.597 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:33.597 "is_configured": true, 00:26:33.597 "data_offset": 2048, 00:26:33.597 "data_size": 63488 00:26:33.597 } 00:26:33.597 ] 00:26:33.597 } 00:26:33.597 } 00:26:33.597 }' 00:26:33.597 04:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:33.597 04:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:26:33.597 pt2 00:26:33.597 pt3 00:26:33.597 pt4' 00:26:33.597 04:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:33.597 04:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:26:33.597 04:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:33.856 04:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:33.856 "name": "pt1", 00:26:33.856 "aliases": [ 00:26:33.856 "00000000-0000-0000-0000-000000000001" 00:26:33.856 ], 00:26:33.856 "product_name": "passthru", 00:26:33.856 "block_size": 512, 
00:26:33.856 "num_blocks": 65536, 00:26:33.856 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:33.856 "assigned_rate_limits": { 00:26:33.856 "rw_ios_per_sec": 0, 00:26:33.856 "rw_mbytes_per_sec": 0, 00:26:33.856 "r_mbytes_per_sec": 0, 00:26:33.856 "w_mbytes_per_sec": 0 00:26:33.856 }, 00:26:33.856 "claimed": true, 00:26:33.856 "claim_type": "exclusive_write", 00:26:33.856 "zoned": false, 00:26:33.856 "supported_io_types": { 00:26:33.856 "read": true, 00:26:33.856 "write": true, 00:26:33.856 "unmap": true, 00:26:33.856 "flush": true, 00:26:33.856 "reset": true, 00:26:33.856 "nvme_admin": false, 00:26:33.856 "nvme_io": false, 00:26:33.856 "nvme_io_md": false, 00:26:33.856 "write_zeroes": true, 00:26:33.856 "zcopy": true, 00:26:33.856 "get_zone_info": false, 00:26:33.856 "zone_management": false, 00:26:33.856 "zone_append": false, 00:26:33.856 "compare": false, 00:26:33.856 "compare_and_write": false, 00:26:33.856 "abort": true, 00:26:33.856 "seek_hole": false, 00:26:33.856 "seek_data": false, 00:26:33.856 "copy": true, 00:26:33.856 "nvme_iov_md": false 00:26:33.856 }, 00:26:33.856 "memory_domains": [ 00:26:33.856 { 00:26:33.856 "dma_device_id": "system", 00:26:33.856 "dma_device_type": 1 00:26:33.856 }, 00:26:33.856 { 00:26:33.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:33.856 "dma_device_type": 2 00:26:33.856 } 00:26:33.856 ], 00:26:33.856 "driver_specific": { 00:26:33.856 "passthru": { 00:26:33.856 "name": "pt1", 00:26:33.856 "base_bdev_name": "malloc1" 00:26:33.856 } 00:26:33.856 } 00:26:33.856 }' 00:26:33.856 04:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:33.856 04:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:33.856 04:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:33.856 04:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:34.114 04:21:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:34.114 04:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:34.114 04:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:34.114 04:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:34.114 04:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:34.114 04:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:34.114 04:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:34.114 04:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:34.114 04:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:34.114 04:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:26:34.114 04:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:34.372 04:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:34.372 "name": "pt2", 00:26:34.373 "aliases": [ 00:26:34.373 "00000000-0000-0000-0000-000000000002" 00:26:34.373 ], 00:26:34.373 "product_name": "passthru", 00:26:34.373 "block_size": 512, 00:26:34.373 "num_blocks": 65536, 00:26:34.373 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:34.373 "assigned_rate_limits": { 00:26:34.373 "rw_ios_per_sec": 0, 00:26:34.373 "rw_mbytes_per_sec": 0, 00:26:34.373 "r_mbytes_per_sec": 0, 00:26:34.373 "w_mbytes_per_sec": 0 00:26:34.373 }, 00:26:34.373 "claimed": true, 00:26:34.373 "claim_type": "exclusive_write", 00:26:34.373 "zoned": false, 00:26:34.373 "supported_io_types": { 00:26:34.373 "read": true, 00:26:34.373 "write": true, 00:26:34.373 "unmap": true, 00:26:34.373 "flush": true, 00:26:34.373 
"reset": true, 00:26:34.373 "nvme_admin": false, 00:26:34.373 "nvme_io": false, 00:26:34.373 "nvme_io_md": false, 00:26:34.373 "write_zeroes": true, 00:26:34.373 "zcopy": true, 00:26:34.373 "get_zone_info": false, 00:26:34.373 "zone_management": false, 00:26:34.373 "zone_append": false, 00:26:34.373 "compare": false, 00:26:34.373 "compare_and_write": false, 00:26:34.373 "abort": true, 00:26:34.373 "seek_hole": false, 00:26:34.373 "seek_data": false, 00:26:34.373 "copy": true, 00:26:34.373 "nvme_iov_md": false 00:26:34.373 }, 00:26:34.373 "memory_domains": [ 00:26:34.373 { 00:26:34.373 "dma_device_id": "system", 00:26:34.373 "dma_device_type": 1 00:26:34.373 }, 00:26:34.373 { 00:26:34.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:34.373 "dma_device_type": 2 00:26:34.373 } 00:26:34.373 ], 00:26:34.373 "driver_specific": { 00:26:34.373 "passthru": { 00:26:34.373 "name": "pt2", 00:26:34.373 "base_bdev_name": "malloc2" 00:26:34.373 } 00:26:34.373 } 00:26:34.373 }' 00:26:34.373 04:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:34.632 04:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:34.632 04:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:34.632 04:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:34.632 04:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:34.632 04:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:34.632 04:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:34.632 04:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:34.632 04:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:34.632 04:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:34.892 04:21:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:34.892 04:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:34.892 04:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:34.892 04:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:26:34.892 04:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:35.150 04:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:35.150 "name": "pt3", 00:26:35.150 "aliases": [ 00:26:35.150 "00000000-0000-0000-0000-000000000003" 00:26:35.150 ], 00:26:35.150 "product_name": "passthru", 00:26:35.150 "block_size": 512, 00:26:35.150 "num_blocks": 65536, 00:26:35.150 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:35.150 "assigned_rate_limits": { 00:26:35.150 "rw_ios_per_sec": 0, 00:26:35.150 "rw_mbytes_per_sec": 0, 00:26:35.150 "r_mbytes_per_sec": 0, 00:26:35.150 "w_mbytes_per_sec": 0 00:26:35.150 }, 00:26:35.150 "claimed": true, 00:26:35.150 "claim_type": "exclusive_write", 00:26:35.150 "zoned": false, 00:26:35.150 "supported_io_types": { 00:26:35.150 "read": true, 00:26:35.150 "write": true, 00:26:35.150 "unmap": true, 00:26:35.150 "flush": true, 00:26:35.150 "reset": true, 00:26:35.150 "nvme_admin": false, 00:26:35.150 "nvme_io": false, 00:26:35.150 "nvme_io_md": false, 00:26:35.150 "write_zeroes": true, 00:26:35.150 "zcopy": true, 00:26:35.150 "get_zone_info": false, 00:26:35.150 "zone_management": false, 00:26:35.150 "zone_append": false, 00:26:35.150 "compare": false, 00:26:35.150 "compare_and_write": false, 00:26:35.150 "abort": true, 00:26:35.150 "seek_hole": false, 00:26:35.150 "seek_data": false, 00:26:35.150 "copy": true, 00:26:35.150 "nvme_iov_md": false 00:26:35.150 }, 00:26:35.150 "memory_domains": [ 
00:26:35.150 { 00:26:35.150 "dma_device_id": "system", 00:26:35.150 "dma_device_type": 1 00:26:35.150 }, 00:26:35.150 { 00:26:35.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:35.150 "dma_device_type": 2 00:26:35.150 } 00:26:35.150 ], 00:26:35.150 "driver_specific": { 00:26:35.150 "passthru": { 00:26:35.150 "name": "pt3", 00:26:35.150 "base_bdev_name": "malloc3" 00:26:35.150 } 00:26:35.150 } 00:26:35.150 }' 00:26:35.150 04:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:35.150 04:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:35.150 04:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:35.150 04:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:35.150 04:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:35.150 04:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:35.150 04:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:35.409 04:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:35.409 04:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:35.409 04:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:35.409 04:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:35.409 04:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:35.409 04:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:35.409 04:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:35.409 04:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
pt4 00:26:35.977 04:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:35.977 "name": "pt4", 00:26:35.977 "aliases": [ 00:26:35.977 "00000000-0000-0000-0000-000000000004" 00:26:35.977 ], 00:26:35.977 "product_name": "passthru", 00:26:35.977 "block_size": 512, 00:26:35.977 "num_blocks": 65536, 00:26:35.977 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:35.977 "assigned_rate_limits": { 00:26:35.977 "rw_ios_per_sec": 0, 00:26:35.977 "rw_mbytes_per_sec": 0, 00:26:35.977 "r_mbytes_per_sec": 0, 00:26:35.977 "w_mbytes_per_sec": 0 00:26:35.977 }, 00:26:35.977 "claimed": true, 00:26:35.977 "claim_type": "exclusive_write", 00:26:35.977 "zoned": false, 00:26:35.977 "supported_io_types": { 00:26:35.977 "read": true, 00:26:35.977 "write": true, 00:26:35.977 "unmap": true, 00:26:35.977 "flush": true, 00:26:35.977 "reset": true, 00:26:35.977 "nvme_admin": false, 00:26:35.977 "nvme_io": false, 00:26:35.977 "nvme_io_md": false, 00:26:35.977 "write_zeroes": true, 00:26:35.977 "zcopy": true, 00:26:35.977 "get_zone_info": false, 00:26:35.977 "zone_management": false, 00:26:35.977 "zone_append": false, 00:26:35.977 "compare": false, 00:26:35.977 "compare_and_write": false, 00:26:35.977 "abort": true, 00:26:35.977 "seek_hole": false, 00:26:35.977 "seek_data": false, 00:26:35.977 "copy": true, 00:26:35.977 "nvme_iov_md": false 00:26:35.977 }, 00:26:35.977 "memory_domains": [ 00:26:35.977 { 00:26:35.977 "dma_device_id": "system", 00:26:35.977 "dma_device_type": 1 00:26:35.977 }, 00:26:35.977 { 00:26:35.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:35.977 "dma_device_type": 2 00:26:35.977 } 00:26:35.977 ], 00:26:35.977 "driver_specific": { 00:26:35.977 "passthru": { 00:26:35.977 "name": "pt4", 00:26:35.977 "base_bdev_name": "malloc4" 00:26:35.977 } 00:26:35.977 } 00:26:35.977 }' 00:26:35.977 04:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:35.977 04:21:44 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:35.977 04:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:35.977 04:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:35.977 04:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:35.977 04:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:35.977 04:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:36.235 04:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:36.235 04:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:36.235 04:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:36.235 04:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:36.235 04:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:36.235 04:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:36.235 04:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:26:36.494 [2024-07-23 04:21:45.082093] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:36.494 04:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 2efc66f7-4e61-4b49-ba12-0da79acd1663 '!=' 2efc66f7-4e61-4b49-ba12-0da79acd1663 ']' 00:26:36.494 04:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:26:36.494 04:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:36.494 04:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:26:36.494 04:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:26:36.752 [2024-07-23 04:21:45.298317] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:26:36.752 04:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:36.752 04:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:36.752 04:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:36.752 04:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:36.752 04:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:36.752 04:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:36.752 04:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:36.752 04:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:36.752 04:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:36.752 04:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:36.752 04:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:36.752 04:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:37.319 04:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:37.319 "name": "raid_bdev1", 00:26:37.319 "uuid": "2efc66f7-4e61-4b49-ba12-0da79acd1663", 00:26:37.319 "strip_size_kb": 0, 00:26:37.319 "state": "online", 00:26:37.319 "raid_level": "raid1", 00:26:37.319 "superblock": true, 00:26:37.319 
"num_base_bdevs": 4, 00:26:37.319 "num_base_bdevs_discovered": 3, 00:26:37.319 "num_base_bdevs_operational": 3, 00:26:37.319 "base_bdevs_list": [ 00:26:37.319 { 00:26:37.319 "name": null, 00:26:37.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:37.319 "is_configured": false, 00:26:37.319 "data_offset": 2048, 00:26:37.319 "data_size": 63488 00:26:37.319 }, 00:26:37.319 { 00:26:37.319 "name": "pt2", 00:26:37.319 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:37.319 "is_configured": true, 00:26:37.319 "data_offset": 2048, 00:26:37.319 "data_size": 63488 00:26:37.319 }, 00:26:37.319 { 00:26:37.319 "name": "pt3", 00:26:37.319 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:37.319 "is_configured": true, 00:26:37.319 "data_offset": 2048, 00:26:37.319 "data_size": 63488 00:26:37.319 }, 00:26:37.319 { 00:26:37.319 "name": "pt4", 00:26:37.319 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:37.319 "is_configured": true, 00:26:37.319 "data_offset": 2048, 00:26:37.319 "data_size": 63488 00:26:37.319 } 00:26:37.319 ] 00:26:37.319 }' 00:26:37.319 04:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:37.319 04:21:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.578 04:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:37.836 [2024-07-23 04:21:46.549616] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:37.836 [2024-07-23 04:21:46.549659] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:37.836 [2024-07-23 04:21:46.549752] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:37.836 [2024-07-23 04:21:46.549848] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:37.836 
[2024-07-23 04:21:46.549865] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000043280 name raid_bdev1, state offline 00:26:37.836 04:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:37.836 04:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:26:38.094 04:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:26:38.094 04:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:26:38.094 04:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:26:38.094 04:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:26:38.094 04:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:38.352 04:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:26:38.352 04:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:26:38.352 04:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:26:38.610 04:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:26:38.610 04:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:26:38.610 04:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:26:38.868 04:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:26:38.868 04:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( 
i < num_base_bdevs )) 00:26:38.868 04:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:26:38.868 04:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:26:38.868 04:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:39.126 [2024-07-23 04:21:47.672611] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:39.126 [2024-07-23 04:21:47.672680] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:39.126 [2024-07-23 04:21:47.672708] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000044180 00:26:39.126 [2024-07-23 04:21:47.672725] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:39.126 [2024-07-23 04:21:47.675590] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:39.126 [2024-07-23 04:21:47.675628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:39.126 [2024-07-23 04:21:47.675734] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:39.126 [2024-07-23 04:21:47.675791] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:39.126 pt2 00:26:39.126 04:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:26:39.126 04:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:39.126 04:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:39.126 04:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:39.126 04:21:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:39.126 04:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:39.126 04:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:39.126 04:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:39.126 04:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:39.126 04:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:39.126 04:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:39.126 04:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:39.384 04:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:39.384 "name": "raid_bdev1", 00:26:39.384 "uuid": "2efc66f7-4e61-4b49-ba12-0da79acd1663", 00:26:39.384 "strip_size_kb": 0, 00:26:39.384 "state": "configuring", 00:26:39.384 "raid_level": "raid1", 00:26:39.384 "superblock": true, 00:26:39.384 "num_base_bdevs": 4, 00:26:39.384 "num_base_bdevs_discovered": 1, 00:26:39.384 "num_base_bdevs_operational": 3, 00:26:39.384 "base_bdevs_list": [ 00:26:39.384 { 00:26:39.384 "name": null, 00:26:39.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:39.384 "is_configured": false, 00:26:39.384 "data_offset": 2048, 00:26:39.384 "data_size": 63488 00:26:39.384 }, 00:26:39.384 { 00:26:39.384 "name": "pt2", 00:26:39.384 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:39.384 "is_configured": true, 00:26:39.384 "data_offset": 2048, 00:26:39.384 "data_size": 63488 00:26:39.384 }, 00:26:39.384 { 00:26:39.384 "name": null, 00:26:39.384 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:39.384 "is_configured": false, 00:26:39.384 
"data_offset": 2048, 00:26:39.384 "data_size": 63488 00:26:39.384 }, 00:26:39.384 { 00:26:39.384 "name": null, 00:26:39.384 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:39.384 "is_configured": false, 00:26:39.384 "data_offset": 2048, 00:26:39.384 "data_size": 63488 00:26:39.384 } 00:26:39.384 ] 00:26:39.384 }' 00:26:39.384 04:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:39.384 04:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:39.949 04:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:26:39.949 04:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:26:39.949 04:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:39.949 [2024-07-23 04:21:48.675337] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:39.949 [2024-07-23 04:21:48.675413] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:39.949 [2024-07-23 04:21:48.675443] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000044a80 00:26:39.949 [2024-07-23 04:21:48.675460] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:39.949 [2024-07-23 04:21:48.676068] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:39.949 [2024-07-23 04:21:48.676093] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:39.949 [2024-07-23 04:21:48.676210] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:26:39.949 [2024-07-23 04:21:48.676242] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:39.949 pt3 00:26:39.949 04:21:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:26:39.949 04:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:39.949 04:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:39.949 04:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:39.949 04:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:39.949 04:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:39.949 04:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:39.949 04:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:39.949 04:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:39.949 04:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:39.949 04:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:39.949 04:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:40.208 04:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:40.208 "name": "raid_bdev1", 00:26:40.208 "uuid": "2efc66f7-4e61-4b49-ba12-0da79acd1663", 00:26:40.208 "strip_size_kb": 0, 00:26:40.208 "state": "configuring", 00:26:40.208 "raid_level": "raid1", 00:26:40.208 "superblock": true, 00:26:40.208 "num_base_bdevs": 4, 00:26:40.208 "num_base_bdevs_discovered": 2, 00:26:40.208 "num_base_bdevs_operational": 3, 00:26:40.208 "base_bdevs_list": [ 00:26:40.208 { 00:26:40.208 "name": null, 00:26:40.208 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:26:40.208 "is_configured": false, 00:26:40.208 "data_offset": 2048, 00:26:40.208 "data_size": 63488 00:26:40.208 }, 00:26:40.208 { 00:26:40.208 "name": "pt2", 00:26:40.208 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:40.208 "is_configured": true, 00:26:40.208 "data_offset": 2048, 00:26:40.208 "data_size": 63488 00:26:40.208 }, 00:26:40.208 { 00:26:40.208 "name": "pt3", 00:26:40.208 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:40.208 "is_configured": true, 00:26:40.208 "data_offset": 2048, 00:26:40.208 "data_size": 63488 00:26:40.208 }, 00:26:40.208 { 00:26:40.208 "name": null, 00:26:40.208 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:40.208 "is_configured": false, 00:26:40.208 "data_offset": 2048, 00:26:40.208 "data_size": 63488 00:26:40.208 } 00:26:40.208 ] 00:26:40.208 }' 00:26:40.208 04:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:40.208 04:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:40.775 04:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:26:40.775 04:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:26:40.775 04:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=3 00:26:40.775 04:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:41.035 [2024-07-23 04:21:49.714202] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:41.035 [2024-07-23 04:21:49.714273] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:41.035 [2024-07-23 04:21:49.714302] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000044d80 00:26:41.035 [2024-07-23 
04:21:49.714323] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:41.035 [2024-07-23 04:21:49.714941] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:41.035 [2024-07-23 04:21:49.714966] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:41.035 [2024-07-23 04:21:49.715064] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:26:41.035 [2024-07-23 04:21:49.715095] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:41.035 [2024-07-23 04:21:49.715324] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000044780 00:26:41.035 [2024-07-23 04:21:49.715341] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:41.035 [2024-07-23 04:21:49.715688] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000108b0 00:26:41.035 [2024-07-23 04:21:49.715928] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000044780 00:26:41.035 [2024-07-23 04:21:49.715947] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000044780 00:26:41.035 [2024-07-23 04:21:49.716132] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:41.035 pt4 00:26:41.035 04:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:41.035 04:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:41.035 04:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:41.035 04:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:41.035 04:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:41.035 04:21:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:41.035 04:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:41.035 04:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:41.035 04:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:41.035 04:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:41.035 04:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:41.035 04:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:41.294 04:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:41.294 "name": "raid_bdev1", 00:26:41.294 "uuid": "2efc66f7-4e61-4b49-ba12-0da79acd1663", 00:26:41.294 "strip_size_kb": 0, 00:26:41.294 "state": "online", 00:26:41.294 "raid_level": "raid1", 00:26:41.294 "superblock": true, 00:26:41.294 "num_base_bdevs": 4, 00:26:41.294 "num_base_bdevs_discovered": 3, 00:26:41.294 "num_base_bdevs_operational": 3, 00:26:41.294 "base_bdevs_list": [ 00:26:41.294 { 00:26:41.294 "name": null, 00:26:41.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:41.294 "is_configured": false, 00:26:41.294 "data_offset": 2048, 00:26:41.294 "data_size": 63488 00:26:41.294 }, 00:26:41.294 { 00:26:41.294 "name": "pt2", 00:26:41.294 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:41.294 "is_configured": true, 00:26:41.294 "data_offset": 2048, 00:26:41.294 "data_size": 63488 00:26:41.294 }, 00:26:41.294 { 00:26:41.294 "name": "pt3", 00:26:41.294 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:41.294 "is_configured": true, 00:26:41.295 "data_offset": 2048, 00:26:41.295 "data_size": 63488 00:26:41.295 }, 00:26:41.295 { 00:26:41.295 "name": 
"pt4", 00:26:41.295 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:41.295 "is_configured": true, 00:26:41.295 "data_offset": 2048, 00:26:41.295 "data_size": 63488 00:26:41.295 } 00:26:41.295 ] 00:26:41.295 }' 00:26:41.295 04:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:41.295 04:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:41.863 04:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:42.122 [2024-07-23 04:21:50.756985] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:42.122 [2024-07-23 04:21:50.757027] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:42.122 [2024-07-23 04:21:50.757117] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:42.122 [2024-07-23 04:21:50.757217] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:42.122 [2024-07-23 04:21:50.757238] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000044780 name raid_bdev1, state offline 00:26:42.122 04:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:42.122 04:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:26:42.381 04:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:26:42.381 04:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:26:42.381 04:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 4 -gt 2 ']' 00:26:42.381 04:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=3 00:26:42.381 04:21:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:26:42.640 04:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:42.899 [2024-07-23 04:21:51.450835] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:42.899 [2024-07-23 04:21:51.450921] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:42.899 [2024-07-23 04:21:51.450947] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000045080 00:26:42.899 [2024-07-23 04:21:51.450966] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:42.899 [2024-07-23 04:21:51.453870] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:42.899 [2024-07-23 04:21:51.453911] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:42.899 [2024-07-23 04:21:51.454011] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:26:42.899 [2024-07-23 04:21:51.454076] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:42.899 [2024-07-23 04:21:51.454316] bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:26:42.899 [2024-07-23 04:21:51.454351] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:42.899 [2024-07-23 04:21:51.454376] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000045680 name raid_bdev1, state configuring 00:26:42.899 [2024-07-23 04:21:51.454459] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:42.899 [2024-07-23 
04:21:51.454592] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:42.899 pt1 00:26:42.899 04:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 4 -gt 2 ']' 00:26:42.900 04:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:26:42.900 04:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:42.900 04:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:42.900 04:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:42.900 04:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:42.900 04:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:42.900 04:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:42.900 04:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:42.900 04:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:42.900 04:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:42.900 04:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:42.900 04:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:43.166 04:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:43.166 "name": "raid_bdev1", 00:26:43.166 "uuid": "2efc66f7-4e61-4b49-ba12-0da79acd1663", 00:26:43.166 "strip_size_kb": 0, 00:26:43.166 "state": "configuring", 00:26:43.166 "raid_level": "raid1", 00:26:43.166 "superblock": true, 00:26:43.166 
"num_base_bdevs": 4, 00:26:43.166 "num_base_bdevs_discovered": 2, 00:26:43.166 "num_base_bdevs_operational": 3, 00:26:43.166 "base_bdevs_list": [ 00:26:43.166 { 00:26:43.166 "name": null, 00:26:43.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:43.167 "is_configured": false, 00:26:43.167 "data_offset": 2048, 00:26:43.167 "data_size": 63488 00:26:43.167 }, 00:26:43.167 { 00:26:43.167 "name": "pt2", 00:26:43.167 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:43.167 "is_configured": true, 00:26:43.167 "data_offset": 2048, 00:26:43.167 "data_size": 63488 00:26:43.167 }, 00:26:43.167 { 00:26:43.167 "name": "pt3", 00:26:43.167 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:43.167 "is_configured": true, 00:26:43.167 "data_offset": 2048, 00:26:43.167 "data_size": 63488 00:26:43.167 }, 00:26:43.167 { 00:26:43.167 "name": null, 00:26:43.167 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:43.167 "is_configured": false, 00:26:43.167 "data_offset": 2048, 00:26:43.167 "data_size": 63488 00:26:43.167 } 00:26:43.167 ] 00:26:43.167 }' 00:26:43.167 04:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:43.167 04:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:43.797 04:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:26:43.797 04:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:26:43.797 04:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:26:43.797 04:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:44.056 [2024-07-23 04:21:52.714259] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:44.056 [2024-07-23 04:21:52.714329] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:44.056 [2024-07-23 04:21:52.714360] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000045c80 00:26:44.056 [2024-07-23 04:21:52.714377] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:44.056 [2024-07-23 04:21:52.714982] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:44.056 [2024-07-23 04:21:52.715009] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:44.056 [2024-07-23 04:21:52.715108] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:26:44.056 [2024-07-23 04:21:52.715154] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:44.056 [2024-07-23 04:21:52.715361] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000045980 00:26:44.056 [2024-07-23 04:21:52.715377] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:44.056 [2024-07-23 04:21:52.715702] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010980 00:26:44.056 [2024-07-23 04:21:52.715923] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000045980 00:26:44.056 [2024-07-23 04:21:52.715941] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000045980 00:26:44.056 [2024-07-23 04:21:52.716155] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:44.056 pt4 00:26:44.056 04:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:44.056 04:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:44.056 04:21:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:44.056 04:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:44.056 04:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:44.056 04:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:44.056 04:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:44.056 04:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:44.056 04:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:44.056 04:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:44.056 04:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:44.056 04:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:44.315 04:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:44.315 "name": "raid_bdev1", 00:26:44.315 "uuid": "2efc66f7-4e61-4b49-ba12-0da79acd1663", 00:26:44.315 "strip_size_kb": 0, 00:26:44.315 "state": "online", 00:26:44.315 "raid_level": "raid1", 00:26:44.315 "superblock": true, 00:26:44.315 "num_base_bdevs": 4, 00:26:44.315 "num_base_bdevs_discovered": 3, 00:26:44.315 "num_base_bdevs_operational": 3, 00:26:44.315 "base_bdevs_list": [ 00:26:44.315 { 00:26:44.315 "name": null, 00:26:44.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:44.315 "is_configured": false, 00:26:44.315 "data_offset": 2048, 00:26:44.315 "data_size": 63488 00:26:44.315 }, 00:26:44.315 { 00:26:44.315 "name": "pt2", 00:26:44.315 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:44.315 
"is_configured": true, 00:26:44.315 "data_offset": 2048, 00:26:44.315 "data_size": 63488 00:26:44.315 }, 00:26:44.315 { 00:26:44.315 "name": "pt3", 00:26:44.315 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:44.315 "is_configured": true, 00:26:44.315 "data_offset": 2048, 00:26:44.315 "data_size": 63488 00:26:44.315 }, 00:26:44.315 { 00:26:44.315 "name": "pt4", 00:26:44.315 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:44.315 "is_configured": true, 00:26:44.315 "data_offset": 2048, 00:26:44.315 "data_size": 63488 00:26:44.315 } 00:26:44.315 ] 00:26:44.315 }' 00:26:44.315 04:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:44.315 04:21:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:44.883 04:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:26:44.883 04:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:26:45.142 04:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:26:45.142 04:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:45.142 04:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:26:45.402 [2024-07-23 04:21:53.978038] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:45.402 04:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 2efc66f7-4e61-4b49-ba12-0da79acd1663 '!=' 2efc66f7-4e61-4b49-ba12-0da79acd1663 ']' 00:26:45.402 04:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2751325 00:26:45.402 04:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- 
# '[' -z 2751325 ']' 00:26:45.402 04:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 2751325 00:26:45.402 04:21:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:26:45.402 04:21:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:45.402 04:21:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2751325 00:26:45.402 04:21:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:45.402 04:21:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:45.402 04:21:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2751325' 00:26:45.402 killing process with pid 2751325 00:26:45.402 04:21:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 2751325 00:26:45.402 [2024-07-23 04:21:54.055312] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:45.402 [2024-07-23 04:21:54.055427] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:45.402 04:21:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 2751325 00:26:45.402 [2024-07-23 04:21:54.055519] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:45.402 [2024-07-23 04:21:54.055540] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000045980 name raid_bdev1, state offline 00:26:45.971 [2024-07-23 04:21:54.519348] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:47.881 04:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:26:47.881 00:26:47.881 real 0m26.748s 00:26:47.881 user 0m46.958s 00:26:47.881 sys 0m4.499s 00:26:47.881 04:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:26:47.881 04:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:47.881 ************************************ 00:26:47.881 END TEST raid_superblock_test 00:26:47.881 ************************************ 00:26:47.881 04:21:56 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:26:47.881 04:21:56 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:26:47.881 04:21:56 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:26:47.881 04:21:56 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:47.881 04:21:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:47.881 ************************************ 00:26:47.881 START TEST raid_read_error_test 00:26:47.881 ************************************ 00:26:47.881 04:21:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 4 read 00:26:47.881 04:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:26:47.881 04:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:26:47.881 04:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:26:47.881 04:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:26:47.881 04:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:47.881 04:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:26:47.881 04:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:47.881 04:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:47.881 04:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:26:47.881 04:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 
00:26:47.881 04:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:47.881 04:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:26:47.881 04:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:47.881 04:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:47.881 04:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:26:47.881 04:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:47.881 04:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:47.881 04:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:47.881 04:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:26:47.881 04:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:26:47.881 04:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:26:47.881 04:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:26:47.881 04:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:26:47.881 04:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:26:47.881 04:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:26:47.881 04:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:26:47.881 04:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:26:47.881 04:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.qnSly4BjLq 00:26:47.881 04:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2756227 
00:26:47.881 04:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2756227 /var/tmp/spdk-raid.sock 00:26:47.881 04:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:26:47.881 04:21:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 2756227 ']' 00:26:47.881 04:21:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:47.881 04:21:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:47.881 04:21:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:47.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:47.881 04:21:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:47.881 04:21:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:47.881 [2024-07-23 04:21:56.410882] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:26:47.881 [2024-07-23 04:21:56.411012] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2756227 ] 00:26:47.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.881 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:47.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.881 EAL: Requested device 0000:3d:01.1 cannot be used 00:26:47.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.881 EAL: Requested device 0000:3d:01.2 cannot be used 00:26:47.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.881 EAL: Requested device 0000:3d:01.3 cannot be used 00:26:47.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.881 EAL: Requested device 0000:3d:01.4 cannot be used 00:26:47.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.881 EAL: Requested device 0000:3d:01.5 cannot be used 00:26:47.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.881 EAL: Requested device 0000:3d:01.6 cannot be used 00:26:47.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.881 EAL: Requested device 0000:3d:01.7 cannot be used 00:26:47.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.881 EAL: Requested device 0000:3d:02.0 cannot be used 00:26:47.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.881 EAL: Requested device 0000:3d:02.1 cannot be used 00:26:47.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.881 EAL: Requested device 0000:3d:02.2 cannot be used 00:26:47.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.881 EAL: Requested device 0000:3d:02.3 cannot be used 
00:26:47.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.881 EAL: Requested device 0000:3d:02.4 cannot be used 00:26:47.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.881 EAL: Requested device 0000:3d:02.5 cannot be used 00:26:47.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.881 EAL: Requested device 0000:3d:02.6 cannot be used 00:26:47.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.881 EAL: Requested device 0000:3d:02.7 cannot be used 00:26:47.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.881 EAL: Requested device 0000:3f:01.0 cannot be used 00:26:47.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.881 EAL: Requested device 0000:3f:01.1 cannot be used 00:26:47.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.881 EAL: Requested device 0000:3f:01.2 cannot be used 00:26:47.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.881 EAL: Requested device 0000:3f:01.3 cannot be used 00:26:47.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.881 EAL: Requested device 0000:3f:01.4 cannot be used 00:26:47.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.881 EAL: Requested device 0000:3f:01.5 cannot be used 00:26:47.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.881 EAL: Requested device 0000:3f:01.6 cannot be used 00:26:47.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.881 EAL: Requested device 0000:3f:01.7 cannot be used 00:26:47.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.881 EAL: Requested device 0000:3f:02.0 cannot be used 00:26:47.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.881 EAL: Requested device 0000:3f:02.1 cannot be used 00:26:47.881 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.881 EAL: Requested device 0000:3f:02.2 cannot be used 00:26:47.882 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.882 EAL: Requested device 0000:3f:02.3 cannot be used 00:26:47.882 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.882 EAL: Requested device 0000:3f:02.4 cannot be used 00:26:47.882 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.882 EAL: Requested device 0000:3f:02.5 cannot be used 00:26:47.882 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.882 EAL: Requested device 0000:3f:02.6 cannot be used 00:26:47.882 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.882 EAL: Requested device 0000:3f:02.7 cannot be used 00:26:47.882 [2024-07-23 04:21:56.627986] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:48.187 [2024-07-23 04:21:56.909368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:48.756 [2024-07-23 04:21:57.264618] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:48.756 [2024-07-23 04:21:57.264654] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:48.756 04:21:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:48.756 04:21:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:26:48.756 04:21:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:48.756 04:21:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:49.016 BaseBdev1_malloc 00:26:49.016 04:21:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:26:49.275 true 00:26:49.276 04:21:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:26:49.535 [2024-07-23 04:21:58.170572] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:26:49.535 [2024-07-23 04:21:58.170633] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:49.535 [2024-07-23 04:21:58.170659] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:26:49.535 [2024-07-23 04:21:58.170681] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:49.535 [2024-07-23 04:21:58.173468] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:49.535 [2024-07-23 04:21:58.173507] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:49.535 BaseBdev1 00:26:49.535 04:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:49.535 04:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:49.795 BaseBdev2_malloc 00:26:49.795 04:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:26:50.054 true 00:26:50.054 04:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:26:50.314 [2024-07-23 04:21:58.901640] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:26:50.314 [2024-07-23 04:21:58.901697] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:50.314 [2024-07-23 04:21:58.901723] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:26:50.314 [2024-07-23 04:21:58.901744] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:50.314 [2024-07-23 04:21:58.904508] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:50.314 [2024-07-23 04:21:58.904546] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:50.314 BaseBdev2 00:26:50.314 04:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:50.314 04:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:50.574 BaseBdev3_malloc 00:26:50.574 04:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:26:50.833 true 00:26:50.833 04:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:26:51.093 [2024-07-23 04:21:59.620923] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:26:51.093 [2024-07-23 04:21:59.620980] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:51.093 [2024-07-23 04:21:59.621007] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041780 00:26:51.093 [2024-07-23 04:21:59.621024] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:51.093 [2024-07-23 
04:21:59.623822] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:51.093 [2024-07-23 04:21:59.623862] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:51.093 BaseBdev3 00:26:51.093 04:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:51.093 04:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:26:51.353 BaseBdev4_malloc 00:26:51.353 04:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:26:51.353 true 00:26:51.612 04:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:26:51.612 [2024-07-23 04:22:00.355642] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:26:51.612 [2024-07-23 04:22:00.355707] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:51.612 [2024-07-23 04:22:00.355735] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042680 00:26:51.612 [2024-07-23 04:22:00.355753] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:51.612 [2024-07-23 04:22:00.358557] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:51.612 [2024-07-23 04:22:00.358594] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:26:51.612 BaseBdev4 00:26:51.612 04:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:26:51.872 [2024-07-23 04:22:00.568262] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:51.872 [2024-07-23 04:22:00.570642] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:51.872 [2024-07-23 04:22:00.570741] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:51.872 [2024-07-23 04:22:00.570823] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:51.872 [2024-07-23 04:22:00.571129] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042c80 00:26:51.872 [2024-07-23 04:22:00.571160] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:51.872 [2024-07-23 04:22:00.571494] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:26:51.872 [2024-07-23 04:22:00.571780] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042c80 00:26:51.872 [2024-07-23 04:22:00.571795] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000042c80 00:26:51.872 [2024-07-23 04:22:00.571986] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:51.872 04:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:26:51.872 04:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:51.872 04:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:51.872 04:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:51.872 04:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:51.872 04:22:00 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:51.872 04:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:51.872 04:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:51.872 04:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:51.872 04:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:51.872 04:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:51.872 04:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:52.132 04:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:52.132 "name": "raid_bdev1", 00:26:52.132 "uuid": "5bec8fcc-fb51-43b7-b6a5-e97dac433524", 00:26:52.132 "strip_size_kb": 0, 00:26:52.132 "state": "online", 00:26:52.132 "raid_level": "raid1", 00:26:52.132 "superblock": true, 00:26:52.132 "num_base_bdevs": 4, 00:26:52.132 "num_base_bdevs_discovered": 4, 00:26:52.132 "num_base_bdevs_operational": 4, 00:26:52.132 "base_bdevs_list": [ 00:26:52.132 { 00:26:52.132 "name": "BaseBdev1", 00:26:52.132 "uuid": "619b147f-ca26-5f06-85eb-f8b12a1fc5c4", 00:26:52.132 "is_configured": true, 00:26:52.132 "data_offset": 2048, 00:26:52.132 "data_size": 63488 00:26:52.132 }, 00:26:52.132 { 00:26:52.132 "name": "BaseBdev2", 00:26:52.132 "uuid": "1278a987-3d5e-50f0-8b72-33b1bc7e5ded", 00:26:52.132 "is_configured": true, 00:26:52.132 "data_offset": 2048, 00:26:52.132 "data_size": 63488 00:26:52.132 }, 00:26:52.132 { 00:26:52.132 "name": "BaseBdev3", 00:26:52.132 "uuid": "d78a7858-cc1e-5f62-8ada-15a67815a294", 00:26:52.132 "is_configured": true, 00:26:52.132 "data_offset": 2048, 00:26:52.132 "data_size": 63488 
00:26:52.132 }, 00:26:52.132 { 00:26:52.132 "name": "BaseBdev4", 00:26:52.132 "uuid": "e04e890d-c33d-595b-817e-48e13a39d5f8", 00:26:52.132 "is_configured": true, 00:26:52.132 "data_offset": 2048, 00:26:52.132 "data_size": 63488 00:26:52.132 } 00:26:52.132 ] 00:26:52.132 }' 00:26:52.132 04:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:52.132 04:22:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:52.699 04:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:26:52.699 04:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:26:52.958 [2024-07-23 04:22:01.492864] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000108b0 00:26:53.894 04:22:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:26:53.894 04:22:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:26:53.894 04:22:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:26:53.894 04:22:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:26:53.894 04:22:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:26:53.894 04:22:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:26:53.894 04:22:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:53.894 04:22:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:53.894 04:22:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- 
# local raid_level=raid1 00:26:53.894 04:22:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:53.894 04:22:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:53.894 04:22:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:53.894 04:22:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:53.894 04:22:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:53.894 04:22:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:53.894 04:22:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:53.894 04:22:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:54.154 04:22:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:54.154 "name": "raid_bdev1", 00:26:54.154 "uuid": "5bec8fcc-fb51-43b7-b6a5-e97dac433524", 00:26:54.154 "strip_size_kb": 0, 00:26:54.154 "state": "online", 00:26:54.154 "raid_level": "raid1", 00:26:54.154 "superblock": true, 00:26:54.154 "num_base_bdevs": 4, 00:26:54.154 "num_base_bdevs_discovered": 4, 00:26:54.154 "num_base_bdevs_operational": 4, 00:26:54.154 "base_bdevs_list": [ 00:26:54.154 { 00:26:54.154 "name": "BaseBdev1", 00:26:54.154 "uuid": "619b147f-ca26-5f06-85eb-f8b12a1fc5c4", 00:26:54.154 "is_configured": true, 00:26:54.154 "data_offset": 2048, 00:26:54.154 "data_size": 63488 00:26:54.154 }, 00:26:54.154 { 00:26:54.154 "name": "BaseBdev2", 00:26:54.154 "uuid": "1278a987-3d5e-50f0-8b72-33b1bc7e5ded", 00:26:54.154 "is_configured": true, 00:26:54.154 "data_offset": 2048, 00:26:54.154 "data_size": 63488 00:26:54.154 }, 00:26:54.154 { 00:26:54.154 "name": "BaseBdev3", 00:26:54.154 
"uuid": "d78a7858-cc1e-5f62-8ada-15a67815a294", 00:26:54.154 "is_configured": true, 00:26:54.154 "data_offset": 2048, 00:26:54.154 "data_size": 63488 00:26:54.154 }, 00:26:54.154 { 00:26:54.154 "name": "BaseBdev4", 00:26:54.154 "uuid": "e04e890d-c33d-595b-817e-48e13a39d5f8", 00:26:54.154 "is_configured": true, 00:26:54.154 "data_offset": 2048, 00:26:54.154 "data_size": 63488 00:26:54.154 } 00:26:54.154 ] 00:26:54.154 }' 00:26:54.154 04:22:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:54.154 04:22:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:54.722 04:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:54.981 [2024-07-23 04:22:03.626285] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:54.981 [2024-07-23 04:22:03.626328] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:54.981 [2024-07-23 04:22:03.629695] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:54.981 [2024-07-23 04:22:03.629757] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:54.981 [2024-07-23 04:22:03.629904] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:54.981 [2024-07-23 04:22:03.629931] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042c80 name raid_bdev1, state offline 00:26:54.981 0 00:26:54.981 04:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2756227 00:26:54.981 04:22:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 2756227 ']' 00:26:54.981 04:22:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 2756227 00:26:54.981 04:22:03 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@953 -- # uname 00:26:54.981 04:22:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:54.981 04:22:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2756227 00:26:54.981 04:22:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:54.981 04:22:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:54.981 04:22:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2756227' 00:26:54.981 killing process with pid 2756227 00:26:54.981 04:22:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 2756227 00:26:54.981 [2024-07-23 04:22:03.701261] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:54.981 04:22:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 2756227 00:26:55.548 [2024-07-23 04:22:04.072250] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:57.451 04:22:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.qnSly4BjLq 00:26:57.451 04:22:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:26:57.451 04:22:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:26:57.451 04:22:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:26:57.451 04:22:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:26:57.451 04:22:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:57.451 04:22:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:26:57.451 04:22:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:26:57.451 00:26:57.451 real 0m9.608s 00:26:57.451 user 0m13.724s 00:26:57.451 
sys 0m1.486s 00:26:57.451 04:22:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:57.451 04:22:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:57.451 ************************************ 00:26:57.451 END TEST raid_read_error_test 00:26:57.451 ************************************ 00:26:57.451 04:22:05 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:26:57.451 04:22:05 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:26:57.451 04:22:05 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:26:57.451 04:22:05 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:57.451 04:22:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:57.451 ************************************ 00:26:57.451 START TEST raid_write_error_test 00:26:57.451 ************************************ 00:26:57.451 04:22:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 4 write 00:26:57.451 04:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:26:57.451 04:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:26:57.451 04:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:26:57.451 04:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:26:57.451 04:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:57.451 04:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:26:57.451 04:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:57.451 04:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:57.451 04:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 
-- # echo BaseBdev2 00:26:57.451 04:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:57.451 04:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:57.451 04:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:26:57.451 04:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:57.451 04:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:57.451 04:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:26:57.451 04:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:57.451 04:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:57.451 04:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:57.451 04:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:26:57.451 04:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:26:57.451 04:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:26:57.451 04:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:26:57.451 04:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:26:57.451 04:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:26:57.451 04:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:26:57.451 04:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:26:57.451 04:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:26:57.451 04:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # 
bdevperf_log=/raidtest/tmp.RNomydPwOS 00:26:57.451 04:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2757934 00:26:57.451 04:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2757934 /var/tmp/spdk-raid.sock 00:26:57.451 04:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:26:57.451 04:22:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 2757934 ']' 00:26:57.451 04:22:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:57.451 04:22:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:57.451 04:22:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:57.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:57.451 04:22:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:57.451 04:22:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:57.451 [2024-07-23 04:22:06.108508] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:26:57.451 [2024-07-23 04:22:06.108629] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2757934 ] 00:26:57.711 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.711 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:57.711 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.711 EAL: Requested device 0000:3d:01.1 cannot be used 00:26:57.711 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.711 EAL: Requested device 0000:3d:01.2 cannot be used 00:26:57.711 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.711 EAL: Requested device 0000:3d:01.3 cannot be used 00:26:57.711 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.711 EAL: Requested device 0000:3d:01.4 cannot be used 00:26:57.711 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.711 EAL: Requested device 0000:3d:01.5 cannot be used 00:26:57.711 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.711 EAL: Requested device 0000:3d:01.6 cannot be used 00:26:57.711 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.711 EAL: Requested device 0000:3d:01.7 cannot be used 00:26:57.711 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.711 EAL: Requested device 0000:3d:02.0 cannot be used 00:26:57.711 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.711 EAL: Requested device 0000:3d:02.1 cannot be used 00:26:57.711 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.711 EAL: Requested device 0000:3d:02.2 cannot be used 00:26:57.711 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.711 EAL: Requested device 0000:3d:02.3 cannot be used 
00:26:57.711 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.711 EAL: Requested device 0000:3d:02.4 cannot be used 00:26:57.711 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.711 EAL: Requested device 0000:3d:02.5 cannot be used 00:26:57.711 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.711 EAL: Requested device 0000:3d:02.6 cannot be used 00:26:57.711 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.711 EAL: Requested device 0000:3d:02.7 cannot be used 00:26:57.711 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.711 EAL: Requested device 0000:3f:01.0 cannot be used 00:26:57.711 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.711 EAL: Requested device 0000:3f:01.1 cannot be used 00:26:57.711 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.711 EAL: Requested device 0000:3f:01.2 cannot be used 00:26:57.711 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.711 EAL: Requested device 0000:3f:01.3 cannot be used 00:26:57.711 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.711 EAL: Requested device 0000:3f:01.4 cannot be used 00:26:57.711 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.711 EAL: Requested device 0000:3f:01.5 cannot be used 00:26:57.711 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.711 EAL: Requested device 0000:3f:01.6 cannot be used 00:26:57.711 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.711 EAL: Requested device 0000:3f:01.7 cannot be used 00:26:57.711 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.711 EAL: Requested device 0000:3f:02.0 cannot be used 00:26:57.711 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.711 EAL: Requested device 0000:3f:02.1 cannot be used 00:26:57.711 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.711 EAL: Requested device 0000:3f:02.2 cannot be used 00:26:57.711 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.711 EAL: Requested device 0000:3f:02.3 cannot be used 00:26:57.711 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.711 EAL: Requested device 0000:3f:02.4 cannot be used 00:26:57.711 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.711 EAL: Requested device 0000:3f:02.5 cannot be used 00:26:57.711 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.711 EAL: Requested device 0000:3f:02.6 cannot be used 00:26:57.711 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:57.711 EAL: Requested device 0000:3f:02.7 cannot be used 00:26:57.711 [2024-07-23 04:22:06.334056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:57.969 [2024-07-23 04:22:06.616108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:58.228 [2024-07-23 04:22:06.959582] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:58.228 [2024-07-23 04:22:06.959619] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:58.485 04:22:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:58.485 04:22:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:26:58.485 04:22:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:58.485 04:22:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:58.744 BaseBdev1_malloc 00:26:58.744 04:22:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:26:59.003 true 00:26:59.003 04:22:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:26:59.261 [2024-07-23 04:22:07.866968] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:26:59.261 [2024-07-23 04:22:07.867029] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:59.261 [2024-07-23 04:22:07.867056] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:26:59.261 [2024-07-23 04:22:07.867078] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:59.261 [2024-07-23 04:22:07.869886] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:59.261 [2024-07-23 04:22:07.869930] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:59.261 BaseBdev1 00:26:59.261 04:22:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:59.261 04:22:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:59.520 BaseBdev2_malloc 00:26:59.520 04:22:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:26:59.778 true 00:26:59.778 04:22:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:27:00.037 [2024-07-23 04:22:08.605117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on EE_BaseBdev2_malloc 00:27:00.037 [2024-07-23 04:22:08.605179] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:00.037 [2024-07-23 04:22:08.605206] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:27:00.037 [2024-07-23 04:22:08.605227] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:00.037 [2024-07-23 04:22:08.607966] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:00.037 [2024-07-23 04:22:08.608004] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:00.037 BaseBdev2 00:27:00.037 04:22:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:27:00.037 04:22:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:27:00.295 BaseBdev3_malloc 00:27:00.295 04:22:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:27:00.555 true 00:27:00.555 04:22:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:27:00.555 [2024-07-23 04:22:09.337802] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:27:00.555 [2024-07-23 04:22:09.337858] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:00.555 [2024-07-23 04:22:09.337885] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041780 00:27:00.555 [2024-07-23 04:22:09.337903] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:00.848 
[2024-07-23 04:22:09.340723] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:00.848 [2024-07-23 04:22:09.340761] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:27:00.848 BaseBdev3 00:27:00.848 04:22:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:27:00.848 04:22:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:27:00.848 BaseBdev4_malloc 00:27:01.107 04:22:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:27:01.107 true 00:27:01.107 04:22:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:27:01.365 [2024-07-23 04:22:10.076397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:27:01.365 [2024-07-23 04:22:10.076469] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:01.365 [2024-07-23 04:22:10.076498] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042680 00:27:01.365 [2024-07-23 04:22:10.076520] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:01.365 [2024-07-23 04:22:10.079331] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:01.365 [2024-07-23 04:22:10.079369] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:27:01.365 BaseBdev4 00:27:01.365 04:22:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:27:01.624 [2024-07-23 04:22:10.301066] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:01.624 [2024-07-23 04:22:10.303427] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:01.624 [2024-07-23 04:22:10.303529] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:01.624 [2024-07-23 04:22:10.303608] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:01.624 [2024-07-23 04:22:10.303909] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042c80 00:27:01.624 [2024-07-23 04:22:10.303930] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:01.624 [2024-07-23 04:22:10.304278] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:27:01.624 [2024-07-23 04:22:10.304560] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042c80 00:27:01.624 [2024-07-23 04:22:10.304576] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000042c80 00:27:01.624 [2024-07-23 04:22:10.304780] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:01.624 04:22:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:27:01.624 04:22:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:01.624 04:22:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:01.624 04:22:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:01.624 04:22:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:01.624 04:22:10 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:01.624 04:22:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:01.624 04:22:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:01.624 04:22:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:01.624 04:22:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:01.624 04:22:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:01.624 04:22:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:01.883 04:22:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:01.883 "name": "raid_bdev1", 00:27:01.883 "uuid": "7dd0eca4-b7ae-4b3b-aab5-c0d51d93c7b5", 00:27:01.883 "strip_size_kb": 0, 00:27:01.883 "state": "online", 00:27:01.883 "raid_level": "raid1", 00:27:01.883 "superblock": true, 00:27:01.883 "num_base_bdevs": 4, 00:27:01.883 "num_base_bdevs_discovered": 4, 00:27:01.883 "num_base_bdevs_operational": 4, 00:27:01.883 "base_bdevs_list": [ 00:27:01.883 { 00:27:01.883 "name": "BaseBdev1", 00:27:01.883 "uuid": "37544db9-8b6a-5021-8d0d-5a101382d6fb", 00:27:01.883 "is_configured": true, 00:27:01.883 "data_offset": 2048, 00:27:01.883 "data_size": 63488 00:27:01.883 }, 00:27:01.883 { 00:27:01.883 "name": "BaseBdev2", 00:27:01.883 "uuid": "b1dc08d1-207a-51ff-8803-67c9394f6bb8", 00:27:01.883 "is_configured": true, 00:27:01.883 "data_offset": 2048, 00:27:01.883 "data_size": 63488 00:27:01.883 }, 00:27:01.883 { 00:27:01.883 "name": "BaseBdev3", 00:27:01.883 "uuid": "6b5295c5-70c5-58f9-974e-c44a2ac2171f", 00:27:01.883 "is_configured": true, 00:27:01.883 "data_offset": 2048, 00:27:01.883 "data_size": 
63488 00:27:01.883 }, 00:27:01.883 { 00:27:01.883 "name": "BaseBdev4", 00:27:01.883 "uuid": "a7765c0e-e864-5ec9-9641-3f843f396961", 00:27:01.883 "is_configured": true, 00:27:01.883 "data_offset": 2048, 00:27:01.883 "data_size": 63488 00:27:01.883 } 00:27:01.883 ] 00:27:01.883 }' 00:27:01.883 04:22:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:01.883 04:22:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:02.449 04:22:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:27:02.449 04:22:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:27:02.449 [2024-07-23 04:22:11.229544] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000108b0 00:27:03.385 04:22:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:27:03.644 [2024-07-23 04:22:12.345446] bdev_raid.c:2247:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:27:03.644 [2024-07-23 04:22:12.345508] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:03.644 [2024-07-23 04:22:12.345759] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000108b0 00:27:03.644 04:22:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:27:03.644 04:22:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:27:03.644 04:22:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:27:03.644 04:22:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=3 00:27:03.644 
04:22:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:27:03.644 04:22:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:03.644 04:22:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:03.644 04:22:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:03.644 04:22:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:03.644 04:22:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:03.644 04:22:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:03.644 04:22:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:03.644 04:22:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:03.644 04:22:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:03.644 04:22:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:03.644 04:22:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:03.903 04:22:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:03.903 "name": "raid_bdev1", 00:27:03.903 "uuid": "7dd0eca4-b7ae-4b3b-aab5-c0d51d93c7b5", 00:27:03.903 "strip_size_kb": 0, 00:27:03.903 "state": "online", 00:27:03.903 "raid_level": "raid1", 00:27:03.903 "superblock": true, 00:27:03.903 "num_base_bdevs": 4, 00:27:03.903 "num_base_bdevs_discovered": 3, 00:27:03.903 "num_base_bdevs_operational": 3, 00:27:03.903 "base_bdevs_list": [ 00:27:03.903 { 00:27:03.903 "name": null, 00:27:03.903 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:27:03.903 "is_configured": false, 00:27:03.903 "data_offset": 2048, 00:27:03.903 "data_size": 63488 00:27:03.903 }, 00:27:03.903 { 00:27:03.903 "name": "BaseBdev2", 00:27:03.903 "uuid": "b1dc08d1-207a-51ff-8803-67c9394f6bb8", 00:27:03.903 "is_configured": true, 00:27:03.903 "data_offset": 2048, 00:27:03.903 "data_size": 63488 00:27:03.903 }, 00:27:03.903 { 00:27:03.903 "name": "BaseBdev3", 00:27:03.903 "uuid": "6b5295c5-70c5-58f9-974e-c44a2ac2171f", 00:27:03.903 "is_configured": true, 00:27:03.903 "data_offset": 2048, 00:27:03.903 "data_size": 63488 00:27:03.903 }, 00:27:03.903 { 00:27:03.903 "name": "BaseBdev4", 00:27:03.903 "uuid": "a7765c0e-e864-5ec9-9641-3f843f396961", 00:27:03.903 "is_configured": true, 00:27:03.903 "data_offset": 2048, 00:27:03.903 "data_size": 63488 00:27:03.903 } 00:27:03.903 ] 00:27:03.903 }' 00:27:03.903 04:22:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:03.903 04:22:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:04.470 04:22:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:04.727 [2024-07-23 04:22:13.388607] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:04.727 [2024-07-23 04:22:13.388648] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:04.727 [2024-07-23 04:22:13.391969] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:04.727 [2024-07-23 04:22:13.392019] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:04.727 [2024-07-23 04:22:13.392157] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:04.727 [2024-07-23 04:22:13.392175] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x616000042c80 name raid_bdev1, state offline 00:27:04.727 0 00:27:04.727 04:22:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2757934 00:27:04.727 04:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 2757934 ']' 00:27:04.727 04:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 2757934 00:27:04.727 04:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:27:04.727 04:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:04.727 04:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2757934 00:27:04.727 04:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:04.727 04:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:04.727 04:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2757934' 00:27:04.727 killing process with pid 2757934 00:27:04.727 04:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 2757934 00:27:04.727 [2024-07-23 04:22:13.464559] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:04.727 04:22:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 2757934 00:27:05.292 [2024-07-23 04:22:13.823127] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:07.194 04:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.RNomydPwOS 00:27:07.194 04:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:27:07.194 04:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:27:07.194 04:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 
00:27:07.194 04:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:27:07.194 04:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:27:07.194 04:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:27:07.194 04:22:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:27:07.194 00:27:07.194 real 0m9.550s 00:27:07.194 user 0m13.711s 00:27:07.194 sys 0m1.479s 00:27:07.194 04:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:07.194 04:22:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:07.194 ************************************ 00:27:07.194 END TEST raid_write_error_test 00:27:07.194 ************************************ 00:27:07.194 04:22:15 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:27:07.194 04:22:15 bdev_raid -- bdev/bdev_raid.sh@875 -- # '[' true = true ']' 00:27:07.194 04:22:15 bdev_raid -- bdev/bdev_raid.sh@876 -- # for n in 2 4 00:27:07.194 04:22:15 bdev_raid -- bdev/bdev_raid.sh@877 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:27:07.194 04:22:15 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:27:07.194 04:22:15 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:07.194 04:22:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:07.194 ************************************ 00:27:07.194 START TEST raid_rebuild_test 00:27:07.194 ************************************ 00:27:07.194 04:22:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 false false true 00:27:07.194 04:22:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:27:07.194 04:22:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:27:07.194 04:22:15 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@570 -- # local superblock=false 00:27:07.194 04:22:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:27:07.194 04:22:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:27:07.194 04:22:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:27:07.194 04:22:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:07.194 04:22:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:27:07.194 04:22:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:27:07.194 04:22:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:07.194 04:22:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:27:07.194 04:22:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:27:07.194 04:22:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:07.194 04:22:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:07.194 04:22:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:27:07.194 04:22:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:27:07.194 04:22:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:27:07.194 04:22:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:27:07.194 04:22:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:27:07.194 04:22:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:27:07.194 04:22:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:27:07.194 04:22:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:27:07.194 04:22:15 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:27:07.194 04:22:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=2759608 00:27:07.194 04:22:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 2759608 /var/tmp/spdk-raid.sock 00:27:07.195 04:22:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:07.195 04:22:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@829 -- # '[' -z 2759608 ']' 00:27:07.195 04:22:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:07.195 04:22:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:07.195 04:22:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:07.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:07.195 04:22:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:07.195 04:22:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:07.195 [2024-07-23 04:22:15.721931] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:27:07.195 [2024-07-23 04:22:15.722051] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2759608 ] 00:27:07.195 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:07.195 Zero copy mechanism will not be used. 
00:27:07.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:07.195 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:07.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:07.195 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:07.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:07.195 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:07.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:07.195 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:07.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:07.195 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:07.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:07.195 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:07.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:07.195 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:07.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:07.195 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:07.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:07.195 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:07.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:07.195 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:07.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:07.195 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:07.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:07.195 EAL: Requested device 0000:3d:02.3 cannot be used 00:27:07.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:07.195 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:07.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:07.195 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:07.195 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:07.195 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:07.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:07.195 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:07.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:07.195 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:07.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:07.195 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:07.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:07.195 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:07.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:07.195 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:07.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:07.195 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:07.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:07.195 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:07.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:07.195 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:07.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:07.195 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:07.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:07.195 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:07.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:07.195 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:07.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:07.195 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:07.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:07.195 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:07.195 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:07.195 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:07.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:07.195 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:07.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:07.195 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:07.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:07.195 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:07.195 [2024-07-23 04:22:15.946654] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:07.454 [2024-07-23 04:22:16.234070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:08.020 [2024-07-23 04:22:16.560070] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:08.020 [2024-07-23 04:22:16.560106] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:08.020 04:22:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:08.020 04:22:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # return 0 00:27:08.020 04:22:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:27:08.020 04:22:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:08.279 BaseBdev1_malloc 00:27:08.279 04:22:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:08.537 [2024-07-23 04:22:17.205432] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:08.537 [2024-07-23 04:22:17.205498] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:27:08.537 [2024-07-23 04:22:17.205527] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:27:08.537 [2024-07-23 04:22:17.205546] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:08.537 [2024-07-23 04:22:17.208301] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:08.537 [2024-07-23 04:22:17.208341] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:08.537 BaseBdev1 00:27:08.537 04:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:27:08.537 04:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:08.796 BaseBdev2_malloc 00:27:08.796 04:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:09.055 [2024-07-23 04:22:17.708432] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:09.055 [2024-07-23 04:22:17.708494] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:09.055 [2024-07-23 04:22:17.708522] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:27:09.055 [2024-07-23 04:22:17.708551] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:09.055 [2024-07-23 04:22:17.711288] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:09.055 [2024-07-23 04:22:17.711324] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:09.055 BaseBdev2 00:27:09.055 04:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@606 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:27:09.313 spare_malloc 00:27:09.313 04:22:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:27:09.572 spare_delay 00:27:09.572 04:22:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:09.831 [2024-07-23 04:22:18.436112] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:09.831 [2024-07-23 04:22:18.436179] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:09.831 [2024-07-23 04:22:18.436207] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480 00:27:09.831 [2024-07-23 04:22:18.436225] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:09.831 [2024-07-23 04:22:18.438991] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:09.831 [2024-07-23 04:22:18.439030] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:09.831 spare 00:27:09.831 04:22:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:27:10.089 [2024-07-23 04:22:18.664737] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:10.089 [2024-07-23 04:22:18.667056] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:10.089 [2024-07-23 04:22:18.667178] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 
0x616000041a80 00:27:10.089 [2024-07-23 04:22:18.667198] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:27:10.089 [2024-07-23 04:22:18.667588] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:27:10.089 [2024-07-23 04:22:18.667853] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041a80 00:27:10.089 [2024-07-23 04:22:18.667871] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041a80 00:27:10.089 [2024-07-23 04:22:18.668106] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:10.089 04:22:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:10.089 04:22:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:10.089 04:22:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:10.089 04:22:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:10.089 04:22:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:10.089 04:22:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:10.089 04:22:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:10.089 04:22:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:10.089 04:22:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:10.089 04:22:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:10.089 04:22:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:10.089 04:22:18 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:10.348 04:22:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:10.348 "name": "raid_bdev1", 00:27:10.348 "uuid": "c40561c4-45c3-4d76-8bcd-a9109de42e2f", 00:27:10.348 "strip_size_kb": 0, 00:27:10.348 "state": "online", 00:27:10.348 "raid_level": "raid1", 00:27:10.348 "superblock": false, 00:27:10.348 "num_base_bdevs": 2, 00:27:10.348 "num_base_bdevs_discovered": 2, 00:27:10.348 "num_base_bdevs_operational": 2, 00:27:10.348 "base_bdevs_list": [ 00:27:10.348 { 00:27:10.348 "name": "BaseBdev1", 00:27:10.348 "uuid": "455527d6-b3be-5afd-a09d-ad8676d11a78", 00:27:10.348 "is_configured": true, 00:27:10.348 "data_offset": 0, 00:27:10.348 "data_size": 65536 00:27:10.348 }, 00:27:10.348 { 00:27:10.348 "name": "BaseBdev2", 00:27:10.348 "uuid": "12c82eec-158f-5db9-a039-d1debe7c119d", 00:27:10.348 "is_configured": true, 00:27:10.348 "data_offset": 0, 00:27:10.348 "data_size": 65536 00:27:10.348 } 00:27:10.348 ] 00:27:10.348 }' 00:27:10.348 04:22:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:10.348 04:22:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:10.915 04:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:10.915 04:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:27:10.915 [2024-07-23 04:22:19.667751] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:10.915 04:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:27:10.915 04:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:10.915 04:22:19 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:27:11.174 04:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:27:11.174 04:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:27:11.174 04:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:27:11.174 04:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:27:11.174 04:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:27:11.174 04:22:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:11.174 04:22:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:27:11.174 04:22:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:11.174 04:22:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:27:11.174 04:22:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:11.174 04:22:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:27:11.174 04:22:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:11.174 04:22:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:11.174 04:22:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:27:11.433 [2024-07-23 04:22:20.124763] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:27:11.433 /dev/nbd0 00:27:11.433 04:22:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:11.433 04:22:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:11.433 04:22:20 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:27:11.433 04:22:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:27:11.433 04:22:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:11.433 04:22:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:11.433 04:22:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:27:11.433 04:22:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:27:11.433 04:22:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:11.433 04:22:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:11.433 04:22:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:11.433 1+0 records in 00:27:11.433 1+0 records out 00:27:11.433 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268152 s, 15.3 MB/s 00:27:11.433 04:22:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:11.433 04:22:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:27:11.433 04:22:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:11.433 04:22:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:11.433 04:22:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:27:11.433 04:22:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:11.433 04:22:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:11.433 04:22:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # 
'[' raid1 = raid5f ']' 00:27:11.433 04:22:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:27:11.433 04:22:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:27:16.697 65536+0 records in 00:27:16.697 65536+0 records out 00:27:16.697 33554432 bytes (34 MB, 32 MiB) copied, 4.56528 s, 7.3 MB/s 00:27:16.697 04:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:27:16.697 04:22:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:16.697 04:22:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:16.697 04:22:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:16.697 04:22:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:27:16.697 04:22:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:16.697 04:22:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:16.697 04:22:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:16.697 [2024-07-23 04:22:25.000780] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:16.697 04:22:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:16.697 04:22:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:16.697 04:22:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:16.697 04:22:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:16.697 04:22:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:16.697 04:22:25 bdev_raid.raid_rebuild_test 
-- bdev/nbd_common.sh@41 -- # break 00:27:16.697 04:22:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:27:16.697 04:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:27:16.697 [2024-07-23 04:22:25.217587] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:16.697 04:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:16.697 04:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:16.697 04:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:16.697 04:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:16.697 04:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:16.697 04:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:16.697 04:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:16.697 04:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:16.697 04:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:16.697 04:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:16.697 04:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:16.697 04:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:16.697 04:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:16.697 "name": "raid_bdev1", 00:27:16.697 "uuid": 
"c40561c4-45c3-4d76-8bcd-a9109de42e2f", 00:27:16.697 "strip_size_kb": 0, 00:27:16.697 "state": "online", 00:27:16.697 "raid_level": "raid1", 00:27:16.697 "superblock": false, 00:27:16.697 "num_base_bdevs": 2, 00:27:16.697 "num_base_bdevs_discovered": 1, 00:27:16.697 "num_base_bdevs_operational": 1, 00:27:16.697 "base_bdevs_list": [ 00:27:16.697 { 00:27:16.697 "name": null, 00:27:16.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:16.697 "is_configured": false, 00:27:16.697 "data_offset": 0, 00:27:16.697 "data_size": 65536 00:27:16.697 }, 00:27:16.697 { 00:27:16.697 "name": "BaseBdev2", 00:27:16.697 "uuid": "12c82eec-158f-5db9-a039-d1debe7c119d", 00:27:16.697 "is_configured": true, 00:27:16.697 "data_offset": 0, 00:27:16.697 "data_size": 65536 00:27:16.697 } 00:27:16.697 ] 00:27:16.697 }' 00:27:16.697 04:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:16.697 04:22:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:17.632 04:22:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:17.632 [2024-07-23 04:22:26.248400] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:17.632 [2024-07-23 04:22:26.275607] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d14400 00:27:17.632 [2024-07-23 04:22:26.277955] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:17.632 04:22:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:27:18.567 04:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:18.567 04:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:18.567 04:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 
-- # local process_type=rebuild 00:27:18.567 04:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:18.567 04:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:18.567 04:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:18.567 04:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:18.826 04:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:18.826 "name": "raid_bdev1", 00:27:18.826 "uuid": "c40561c4-45c3-4d76-8bcd-a9109de42e2f", 00:27:18.826 "strip_size_kb": 0, 00:27:18.826 "state": "online", 00:27:18.826 "raid_level": "raid1", 00:27:18.826 "superblock": false, 00:27:18.826 "num_base_bdevs": 2, 00:27:18.826 "num_base_bdevs_discovered": 2, 00:27:18.826 "num_base_bdevs_operational": 2, 00:27:18.826 "process": { 00:27:18.826 "type": "rebuild", 00:27:18.826 "target": "spare", 00:27:18.826 "progress": { 00:27:18.826 "blocks": 24576, 00:27:18.826 "percent": 37 00:27:18.826 } 00:27:18.826 }, 00:27:18.826 "base_bdevs_list": [ 00:27:18.826 { 00:27:18.826 "name": "spare", 00:27:18.826 "uuid": "2731f031-a67f-5d59-b37c-d26339fcdb02", 00:27:18.826 "is_configured": true, 00:27:18.826 "data_offset": 0, 00:27:18.826 "data_size": 65536 00:27:18.826 }, 00:27:18.826 { 00:27:18.826 "name": "BaseBdev2", 00:27:18.826 "uuid": "12c82eec-158f-5db9-a039-d1debe7c119d", 00:27:18.826 "is_configured": true, 00:27:18.826 "data_offset": 0, 00:27:18.826 "data_size": 65536 00:27:18.826 } 00:27:18.826 ] 00:27:18.826 }' 00:27:18.826 04:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:18.826 04:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:18.826 04:22:27 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:19.084 04:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:19.084 04:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:19.084 [2024-07-23 04:22:27.823357] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:19.343 [2024-07-23 04:22:27.891008] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:19.343 [2024-07-23 04:22:27.891069] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:19.343 [2024-07-23 04:22:27.891089] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:19.343 [2024-07-23 04:22:27.891113] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:19.343 04:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:19.343 04:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:19.343 04:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:19.343 04:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:19.343 04:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:19.343 04:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:19.343 04:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:19.343 04:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:19.343 04:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:19.343 
04:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:19.343 04:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:19.343 04:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:19.601 04:22:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:19.601 "name": "raid_bdev1", 00:27:19.601 "uuid": "c40561c4-45c3-4d76-8bcd-a9109de42e2f", 00:27:19.601 "strip_size_kb": 0, 00:27:19.601 "state": "online", 00:27:19.601 "raid_level": "raid1", 00:27:19.601 "superblock": false, 00:27:19.601 "num_base_bdevs": 2, 00:27:19.601 "num_base_bdevs_discovered": 1, 00:27:19.601 "num_base_bdevs_operational": 1, 00:27:19.601 "base_bdevs_list": [ 00:27:19.601 { 00:27:19.601 "name": null, 00:27:19.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:19.601 "is_configured": false, 00:27:19.601 "data_offset": 0, 00:27:19.601 "data_size": 65536 00:27:19.601 }, 00:27:19.601 { 00:27:19.601 "name": "BaseBdev2", 00:27:19.601 "uuid": "12c82eec-158f-5db9-a039-d1debe7c119d", 00:27:19.601 "is_configured": true, 00:27:19.601 "data_offset": 0, 00:27:19.601 "data_size": 65536 00:27:19.601 } 00:27:19.601 ] 00:27:19.601 }' 00:27:19.601 04:22:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:19.601 04:22:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:20.167 04:22:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:20.167 04:22:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:20.167 04:22:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:20.167 04:22:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:20.167 
04:22:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:20.167 04:22:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:20.167 04:22:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:20.426 04:22:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:20.426 "name": "raid_bdev1", 00:27:20.426 "uuid": "c40561c4-45c3-4d76-8bcd-a9109de42e2f", 00:27:20.426 "strip_size_kb": 0, 00:27:20.426 "state": "online", 00:27:20.426 "raid_level": "raid1", 00:27:20.426 "superblock": false, 00:27:20.426 "num_base_bdevs": 2, 00:27:20.426 "num_base_bdevs_discovered": 1, 00:27:20.426 "num_base_bdevs_operational": 1, 00:27:20.426 "base_bdevs_list": [ 00:27:20.426 { 00:27:20.426 "name": null, 00:27:20.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:20.426 "is_configured": false, 00:27:20.426 "data_offset": 0, 00:27:20.426 "data_size": 65536 00:27:20.426 }, 00:27:20.426 { 00:27:20.426 "name": "BaseBdev2", 00:27:20.426 "uuid": "12c82eec-158f-5db9-a039-d1debe7c119d", 00:27:20.426 "is_configured": true, 00:27:20.426 "data_offset": 0, 00:27:20.426 "data_size": 65536 00:27:20.426 } 00:27:20.426 ] 00:27:20.426 }' 00:27:20.426 04:22:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:20.426 04:22:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:20.426 04:22:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:20.426 04:22:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:20.426 04:22:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev 
raid_bdev1 spare 00:27:20.684 [2024-07-23 04:22:29.289161] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:20.684 [2024-07-23 04:22:29.312319] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d144d0 00:27:20.684 [2024-07-23 04:22:29.314675] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:20.684 04:22:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:21.620 04:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:21.620 04:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:21.620 04:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:21.620 04:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:21.620 04:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:21.620 04:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:21.620 04:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:21.880 04:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:21.880 "name": "raid_bdev1", 00:27:21.880 "uuid": "c40561c4-45c3-4d76-8bcd-a9109de42e2f", 00:27:21.880 "strip_size_kb": 0, 00:27:21.880 "state": "online", 00:27:21.880 "raid_level": "raid1", 00:27:21.880 "superblock": false, 00:27:21.880 "num_base_bdevs": 2, 00:27:21.880 "num_base_bdevs_discovered": 2, 00:27:21.880 "num_base_bdevs_operational": 2, 00:27:21.880 "process": { 00:27:21.880 "type": "rebuild", 00:27:21.880 "target": "spare", 00:27:21.880 "progress": { 00:27:21.880 "blocks": 24576, 00:27:21.880 "percent": 37 00:27:21.880 
} 00:27:21.880 }, 00:27:21.880 "base_bdevs_list": [ 00:27:21.880 { 00:27:21.880 "name": "spare", 00:27:21.880 "uuid": "2731f031-a67f-5d59-b37c-d26339fcdb02", 00:27:21.880 "is_configured": true, 00:27:21.880 "data_offset": 0, 00:27:21.880 "data_size": 65536 00:27:21.880 }, 00:27:21.880 { 00:27:21.880 "name": "BaseBdev2", 00:27:21.880 "uuid": "12c82eec-158f-5db9-a039-d1debe7c119d", 00:27:21.880 "is_configured": true, 00:27:21.880 "data_offset": 0, 00:27:21.880 "data_size": 65536 00:27:21.880 } 00:27:21.880 ] 00:27:21.880 }' 00:27:21.880 04:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:21.880 04:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:21.880 04:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:21.880 04:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:21.880 04:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:27:21.880 04:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:27:21.880 04:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:27:21.880 04:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:27:21.880 04:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=836 00:27:21.880 04:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:21.880 04:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:21.880 04:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:21.880 04:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:21.880 04:22:30 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@184 -- # local target=spare 00:27:21.880 04:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:21.880 04:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:22.140 04:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:22.140 04:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:22.140 "name": "raid_bdev1", 00:27:22.140 "uuid": "c40561c4-45c3-4d76-8bcd-a9109de42e2f", 00:27:22.140 "strip_size_kb": 0, 00:27:22.140 "state": "online", 00:27:22.140 "raid_level": "raid1", 00:27:22.140 "superblock": false, 00:27:22.140 "num_base_bdevs": 2, 00:27:22.140 "num_base_bdevs_discovered": 2, 00:27:22.140 "num_base_bdevs_operational": 2, 00:27:22.140 "process": { 00:27:22.140 "type": "rebuild", 00:27:22.140 "target": "spare", 00:27:22.140 "progress": { 00:27:22.140 "blocks": 30720, 00:27:22.140 "percent": 46 00:27:22.140 } 00:27:22.140 }, 00:27:22.140 "base_bdevs_list": [ 00:27:22.140 { 00:27:22.140 "name": "spare", 00:27:22.140 "uuid": "2731f031-a67f-5d59-b37c-d26339fcdb02", 00:27:22.140 "is_configured": true, 00:27:22.140 "data_offset": 0, 00:27:22.140 "data_size": 65536 00:27:22.140 }, 00:27:22.140 { 00:27:22.140 "name": "BaseBdev2", 00:27:22.140 "uuid": "12c82eec-158f-5db9-a039-d1debe7c119d", 00:27:22.140 "is_configured": true, 00:27:22.140 "data_offset": 0, 00:27:22.140 "data_size": 65536 00:27:22.140 } 00:27:22.140 ] 00:27:22.140 }' 00:27:22.140 04:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:22.398 04:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:22.398 04:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:22.398 04:22:30 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:22.398 04:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:27:23.333 04:22:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:23.333 04:22:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:23.333 04:22:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:23.333 04:22:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:23.333 04:22:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:23.333 04:22:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:23.333 04:22:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:23.333 04:22:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:23.591 04:22:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:23.591 "name": "raid_bdev1", 00:27:23.591 "uuid": "c40561c4-45c3-4d76-8bcd-a9109de42e2f", 00:27:23.591 "strip_size_kb": 0, 00:27:23.591 "state": "online", 00:27:23.591 "raid_level": "raid1", 00:27:23.591 "superblock": false, 00:27:23.591 "num_base_bdevs": 2, 00:27:23.591 "num_base_bdevs_discovered": 2, 00:27:23.591 "num_base_bdevs_operational": 2, 00:27:23.591 "process": { 00:27:23.591 "type": "rebuild", 00:27:23.591 "target": "spare", 00:27:23.591 "progress": { 00:27:23.591 "blocks": 57344, 00:27:23.591 "percent": 87 00:27:23.591 } 00:27:23.591 }, 00:27:23.591 "base_bdevs_list": [ 00:27:23.591 { 00:27:23.591 "name": "spare", 00:27:23.591 "uuid": "2731f031-a67f-5d59-b37c-d26339fcdb02", 00:27:23.591 "is_configured": true, 00:27:23.591 
"data_offset": 0, 00:27:23.592 "data_size": 65536 00:27:23.592 }, 00:27:23.592 { 00:27:23.592 "name": "BaseBdev2", 00:27:23.592 "uuid": "12c82eec-158f-5db9-a039-d1debe7c119d", 00:27:23.592 "is_configured": true, 00:27:23.592 "data_offset": 0, 00:27:23.592 "data_size": 65536 00:27:23.592 } 00:27:23.592 ] 00:27:23.592 }' 00:27:23.592 04:22:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:23.592 04:22:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:23.592 04:22:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:23.592 04:22:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:23.592 04:22:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:27:23.851 [2024-07-23 04:22:32.540531] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:27:23.851 [2024-07-23 04:22:32.540607] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:23.851 [2024-07-23 04:22:32.540659] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:24.786 04:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:24.786 04:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:24.786 04:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:24.786 04:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:24.786 04:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:24.786 04:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:24.786 04:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:24.786 04:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:24.786 04:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:24.786 "name": "raid_bdev1", 00:27:24.786 "uuid": "c40561c4-45c3-4d76-8bcd-a9109de42e2f", 00:27:24.786 "strip_size_kb": 0, 00:27:24.786 "state": "online", 00:27:24.786 "raid_level": "raid1", 00:27:24.786 "superblock": false, 00:27:24.786 "num_base_bdevs": 2, 00:27:24.786 "num_base_bdevs_discovered": 2, 00:27:24.786 "num_base_bdevs_operational": 2, 00:27:24.786 "base_bdevs_list": [ 00:27:24.786 { 00:27:24.786 "name": "spare", 00:27:24.786 "uuid": "2731f031-a67f-5d59-b37c-d26339fcdb02", 00:27:24.786 "is_configured": true, 00:27:24.786 "data_offset": 0, 00:27:24.786 "data_size": 65536 00:27:24.786 }, 00:27:24.786 { 00:27:24.786 "name": "BaseBdev2", 00:27:24.786 "uuid": "12c82eec-158f-5db9-a039-d1debe7c119d", 00:27:24.786 "is_configured": true, 00:27:24.786 "data_offset": 0, 00:27:24.786 "data_size": 65536 00:27:24.786 } 00:27:24.786 ] 00:27:24.786 }' 00:27:24.786 04:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:24.786 04:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:24.786 04:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:25.045 04:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:27:25.045 04:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:27:25.045 04:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:25.045 04:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:25.045 04:22:33 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:25.045 04:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:25.045 04:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:25.045 04:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:25.045 04:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:25.304 04:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:25.304 "name": "raid_bdev1", 00:27:25.304 "uuid": "c40561c4-45c3-4d76-8bcd-a9109de42e2f", 00:27:25.304 "strip_size_kb": 0, 00:27:25.304 "state": "online", 00:27:25.304 "raid_level": "raid1", 00:27:25.304 "superblock": false, 00:27:25.304 "num_base_bdevs": 2, 00:27:25.304 "num_base_bdevs_discovered": 2, 00:27:25.304 "num_base_bdevs_operational": 2, 00:27:25.304 "base_bdevs_list": [ 00:27:25.304 { 00:27:25.304 "name": "spare", 00:27:25.304 "uuid": "2731f031-a67f-5d59-b37c-d26339fcdb02", 00:27:25.304 "is_configured": true, 00:27:25.304 "data_offset": 0, 00:27:25.304 "data_size": 65536 00:27:25.304 }, 00:27:25.304 { 00:27:25.304 "name": "BaseBdev2", 00:27:25.304 "uuid": "12c82eec-158f-5db9-a039-d1debe7c119d", 00:27:25.304 "is_configured": true, 00:27:25.304 "data_offset": 0, 00:27:25.304 "data_size": 65536 00:27:25.304 } 00:27:25.304 ] 00:27:25.304 }' 00:27:25.304 04:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:25.304 04:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:25.304 04:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:25.304 04:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 
00:27:25.304 04:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:25.304 04:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:25.304 04:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:25.304 04:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:25.304 04:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:25.304 04:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:25.304 04:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:25.304 04:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:25.304 04:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:25.304 04:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:25.304 04:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:25.304 04:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:25.563 04:22:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:25.563 "name": "raid_bdev1", 00:27:25.563 "uuid": "c40561c4-45c3-4d76-8bcd-a9109de42e2f", 00:27:25.563 "strip_size_kb": 0, 00:27:25.563 "state": "online", 00:27:25.563 "raid_level": "raid1", 00:27:25.563 "superblock": false, 00:27:25.563 "num_base_bdevs": 2, 00:27:25.563 "num_base_bdevs_discovered": 2, 00:27:25.563 "num_base_bdevs_operational": 2, 00:27:25.563 "base_bdevs_list": [ 00:27:25.563 { 00:27:25.563 "name": "spare", 00:27:25.563 "uuid": "2731f031-a67f-5d59-b37c-d26339fcdb02", 00:27:25.563 
"is_configured": true, 00:27:25.563 "data_offset": 0, 00:27:25.563 "data_size": 65536 00:27:25.563 }, 00:27:25.563 { 00:27:25.563 "name": "BaseBdev2", 00:27:25.563 "uuid": "12c82eec-158f-5db9-a039-d1debe7c119d", 00:27:25.563 "is_configured": true, 00:27:25.563 "data_offset": 0, 00:27:25.563 "data_size": 65536 00:27:25.563 } 00:27:25.563 ] 00:27:25.563 }' 00:27:25.563 04:22:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:25.563 04:22:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:26.131 04:22:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:26.390 [2024-07-23 04:22:34.928562] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:26.390 [2024-07-23 04:22:34.928600] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:26.390 [2024-07-23 04:22:34.928690] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:26.390 [2024-07-23 04:22:34.928781] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:26.390 [2024-07-23 04:22:34.928799] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041a80 name raid_bdev1, state offline 00:27:26.390 04:22:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:26.390 04:22:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:27:26.649 04:22:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:27:26.649 04:22:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:27:26.649 04:22:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 
00:27:26.649 04:22:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:27:26.649 04:22:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:26.649 04:22:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:27:26.649 04:22:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:26.649 04:22:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:26.649 04:22:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:26.649 04:22:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:27:26.649 04:22:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:26.649 04:22:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:26.649 04:22:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:27:26.649 /dev/nbd0 00:27:26.649 04:22:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:26.649 04:22:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:26.649 04:22:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:27:26.649 04:22:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:27:26.649 04:22:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:26.649 04:22:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:26.649 04:22:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:27:26.908 04:22:35 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@871 -- # break 00:27:26.908 04:22:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:26.908 04:22:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:26.908 04:22:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:26.908 1+0 records in 00:27:26.908 1+0 records out 00:27:26.908 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260558 s, 15.7 MB/s 00:27:26.908 04:22:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:26.908 04:22:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:27:26.908 04:22:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:26.908 04:22:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:26.908 04:22:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:27:26.908 04:22:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:26.908 04:22:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:26.908 04:22:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:27:26.908 /dev/nbd1 00:27:27.167 04:22:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:27.167 04:22:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:27.167 04:22:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:27:27.167 04:22:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- 
# local i 00:27:27.167 04:22:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:27.167 04:22:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:27.167 04:22:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:27:27.167 04:22:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:27:27.167 04:22:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:27.167 04:22:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:27.167 04:22:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:27.167 1+0 records in 00:27:27.167 1+0 records out 00:27:27.167 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346113 s, 11.8 MB/s 00:27:27.167 04:22:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:27.167 04:22:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:27:27.167 04:22:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:27.167 04:22:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:27.167 04:22:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:27:27.167 04:22:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:27.167 04:22:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:27.167 04:22:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:27:27.167 04:22:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks 
/var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:27:27.167 04:22:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:27.167 04:22:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:27.167 04:22:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:27.167 04:22:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:27:27.167 04:22:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:27.167 04:22:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:27.425 04:22:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:27.425 04:22:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:27.425 04:22:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:27.425 04:22:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:27.425 04:22:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:27.425 04:22:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:27.425 04:22:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:27:27.425 04:22:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:27:27.425 04:22:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:27.425 04:22:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:27:27.683 04:22:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:27.683 04:22:36 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:27.683 04:22:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:27.683 04:22:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:27.683 04:22:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:27.683 04:22:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:27.683 04:22:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:27:27.683 04:22:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:27:27.683 04:22:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:27:27.683 04:22:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 2759608 00:27:27.683 04:22:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@948 -- # '[' -z 2759608 ']' 00:27:27.683 04:22:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # kill -0 2759608 00:27:27.683 04:22:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # uname 00:27:27.683 04:22:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:27.683 04:22:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2759608 00:27:27.942 04:22:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:27.942 04:22:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:27.942 04:22:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2759608' 00:27:27.942 killing process with pid 2759608 00:27:27.942 04:22:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@967 -- # kill 2759608 00:27:27.942 Received shutdown signal, test time was about 60.000000 seconds 00:27:27.942 00:27:27.942 Latency(us) 00:27:27.942 Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:27.942 =================================================================================================================== 00:27:27.942 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:27.942 [2024-07-23 04:22:36.493364] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:27.942 04:22:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # wait 2759608 00:27:28.202 [2024-07-23 04:22:36.824589] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:30.106 04:22:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:27:30.106 00:27:30.106 real 0m22.928s 00:27:30.106 user 0m30.476s 00:27:30.106 sys 0m4.430s 00:27:30.106 04:22:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:30.106 04:22:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:30.106 ************************************ 00:27:30.106 END TEST raid_rebuild_test 00:27:30.106 ************************************ 00:27:30.106 04:22:38 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:27:30.106 04:22:38 bdev_raid -- bdev/bdev_raid.sh@878 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:27:30.106 04:22:38 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:27:30.106 04:22:38 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:30.106 04:22:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:30.106 ************************************ 00:27:30.106 START TEST raid_rebuild_test_sb 00:27:30.106 ************************************ 00:27:30.106 04:22:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false true 00:27:30.106 04:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:27:30.106 04:22:38 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:27:30.106 04:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:27:30.106 04:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:27:30.106 04:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:27:30.106 04:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:27:30.106 04:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:30.106 04:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:27:30.106 04:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:27:30.106 04:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:30.106 04:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:27:30.106 04:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:27:30.106 04:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:30.106 04:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:30.106 04:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:27:30.106 04:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:27:30.106 04:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:27:30.106 04:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:27:30.106 04:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:27:30.106 04:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:27:30.106 04:22:38 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:27:30.106 04:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:27:30.106 04:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:27:30.106 04:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:27:30.106 04:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=2763671 00:27:30.106 04:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 2763671 /var/tmp/spdk-raid.sock 00:27:30.106 04:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:30.106 04:22:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@829 -- # '[' -z 2763671 ']' 00:27:30.106 04:22:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:30.106 04:22:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:30.106 04:22:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:30.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:30.106 04:22:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:30.106 04:22:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:30.106 [2024-07-23 04:22:38.739313] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:27:30.106 [2024-07-23 04:22:38.739436] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2763671 ] 00:27:30.106 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:30.106 Zero copy mechanism will not be used. 00:27:30.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.106 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:30.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.106 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:30.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.106 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:30.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.106 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:30.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.106 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:30.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.106 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:30.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.106 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:30.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.106 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:30.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.106 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:30.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.106 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:30.106 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.106 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:30.106 
00:27:30.365 [2024-07-23 04:22:38.961529] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:30.624 [2024-07-23 04:22:39.220493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:30.882 [2024-07-23 04:22:39.544989] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:30.882 [2024-07-23 04:22:39.545025] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:31.141 04:22:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:31.141 04:22:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # return 0 00:27:31.141 04:22:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:27:31.141 04:22:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:31.399 BaseBdev1_malloc 00:27:31.399 
04:22:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:31.658 [2024-07-23 04:22:40.200008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:31.658 [2024-07-23 04:22:40.200074] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:31.658 [2024-07-23 04:22:40.200103] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:27:31.658 [2024-07-23 04:22:40.200126] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:31.658 [2024-07-23 04:22:40.202892] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:31.658 [2024-07-23 04:22:40.202930] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:31.658 BaseBdev1 00:27:31.658 04:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:27:31.658 04:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:31.916 BaseBdev2_malloc 00:27:31.916 04:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:31.916 [2024-07-23 04:22:40.691328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:31.916 [2024-07-23 04:22:40.691388] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:31.916 [2024-07-23 04:22:40.691414] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:27:31.916 [2024-07-23 04:22:40.691435] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:31.916 [2024-07-23 04:22:40.694182] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:31.916 [2024-07-23 04:22:40.694219] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:31.916 BaseBdev2 00:27:32.175 04:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:27:32.434 spare_malloc 00:27:32.434 04:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:27:32.434 spare_delay 00:27:32.434 04:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:32.693 [2024-07-23 04:22:41.425008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:32.693 [2024-07-23 04:22:41.425066] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:32.693 [2024-07-23 04:22:41.425094] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480 00:27:32.693 [2024-07-23 04:22:41.425112] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:32.693 [2024-07-23 04:22:41.427918] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:32.693 [2024-07-23 04:22:41.427956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:32.693 spare 00:27:32.693 04:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:27:32.951 [2024-07-23 04:22:41.649640] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:32.951 [2024-07-23 04:22:41.651986] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:32.951 [2024-07-23 04:22:41.652208] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041a80 00:27:32.951 [2024-07-23 04:22:41.652234] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:32.951 [2024-07-23 04:22:41.652597] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:27:32.951 [2024-07-23 04:22:41.652868] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041a80 00:27:32.952 [2024-07-23 04:22:41.652884] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041a80 00:27:32.952 [2024-07-23 04:22:41.653076] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:32.952 04:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:32.952 04:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:32.952 04:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:32.952 04:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:32.952 04:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:32.952 04:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:32.952 04:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:32.952 04:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:27:32.952 04:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:32.952 04:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:32.952 04:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:32.952 04:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:33.210 04:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:33.210 "name": "raid_bdev1", 00:27:33.210 "uuid": "4ddc86e5-d8e5-4009-bd94-96bb9aaf0c29", 00:27:33.210 "strip_size_kb": 0, 00:27:33.210 "state": "online", 00:27:33.210 "raid_level": "raid1", 00:27:33.210 "superblock": true, 00:27:33.210 "num_base_bdevs": 2, 00:27:33.210 "num_base_bdevs_discovered": 2, 00:27:33.210 "num_base_bdevs_operational": 2, 00:27:33.210 "base_bdevs_list": [ 00:27:33.210 { 00:27:33.210 "name": "BaseBdev1", 00:27:33.210 "uuid": "1f841e58-1980-59e6-8112-104083842cc6", 00:27:33.210 "is_configured": true, 00:27:33.210 "data_offset": 2048, 00:27:33.210 "data_size": 63488 00:27:33.210 }, 00:27:33.210 { 00:27:33.210 "name": "BaseBdev2", 00:27:33.210 "uuid": "1900467b-895a-5c13-b31d-2868ab26c9e0", 00:27:33.210 "is_configured": true, 00:27:33.210 "data_offset": 2048, 00:27:33.210 "data_size": 63488 00:27:33.210 } 00:27:33.210 ] 00:27:33.210 }' 00:27:33.210 04:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:33.210 04:22:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:33.777 04:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:33.777 04:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r 
'.[].num_blocks' 00:27:34.036 [2024-07-23 04:22:42.656669] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:34.036 04:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:27:34.036 04:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:34.036 04:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:27:34.295 04:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:27:34.295 04:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:27:34.295 04:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:27:34.295 04:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:27:34.295 04:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:27:34.295 04:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:34.295 04:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:27:34.295 04:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:34.295 04:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:27:34.295 04:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:34.295 04:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:27:34.295 04:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:34.295 04:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:34.295 04:22:42 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:27:34.554 [2024-07-23 04:22:43.121719] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:27:34.554 /dev/nbd0 00:27:34.554 04:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:34.554 04:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:34.554 04:22:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:27:34.554 04:22:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:27:34.554 04:22:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:34.554 04:22:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:34.554 04:22:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:27:34.554 04:22:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:27:34.554 04:22:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:34.554 04:22:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:34.554 04:22:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:34.554 1+0 records in 00:27:34.554 1+0 records out 00:27:34.554 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241474 s, 17.0 MB/s 00:27:34.554 04:22:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:34.554 04:22:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:27:34.554 04:22:43 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:34.554 04:22:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:34.554 04:22:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:27:34.554 04:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:34.554 04:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:34.554 04:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:27:34.554 04:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:27:34.554 04:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:27:39.819 63488+0 records in 00:27:39.819 63488+0 records out 00:27:39.819 32505856 bytes (33 MB, 31 MiB) copied, 5.20799 s, 6.2 MB/s 00:27:39.819 04:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:27:39.819 04:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:39.819 04:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:39.819 04:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:39.819 04:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:27:39.819 04:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:39.819 04:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:40.077 04:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 
00:27:40.077 [2024-07-23 04:22:48.636460] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:40.077 04:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:40.077 04:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:40.077 04:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:40.077 04:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:40.077 04:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:40.077 04:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:27:40.077 04:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:27:40.077 04:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:27:40.077 [2024-07-23 04:22:48.857164] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:40.335 04:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:40.335 04:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:40.335 04:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:40.335 04:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:40.335 04:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:40.335 04:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:40.335 04:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:40.335 04:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # 
local num_base_bdevs 00:27:40.335 04:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:40.335 04:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:40.335 04:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:40.335 04:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:40.335 04:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:40.335 "name": "raid_bdev1", 00:27:40.335 "uuid": "4ddc86e5-d8e5-4009-bd94-96bb9aaf0c29", 00:27:40.335 "strip_size_kb": 0, 00:27:40.335 "state": "online", 00:27:40.335 "raid_level": "raid1", 00:27:40.335 "superblock": true, 00:27:40.335 "num_base_bdevs": 2, 00:27:40.335 "num_base_bdevs_discovered": 1, 00:27:40.335 "num_base_bdevs_operational": 1, 00:27:40.335 "base_bdevs_list": [ 00:27:40.335 { 00:27:40.335 "name": null, 00:27:40.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:40.335 "is_configured": false, 00:27:40.335 "data_offset": 2048, 00:27:40.335 "data_size": 63488 00:27:40.335 }, 00:27:40.335 { 00:27:40.335 "name": "BaseBdev2", 00:27:40.335 "uuid": "1900467b-895a-5c13-b31d-2868ab26c9e0", 00:27:40.335 "is_configured": true, 00:27:40.335 "data_offset": 2048, 00:27:40.335 "data_size": 63488 00:27:40.335 } 00:27:40.335 ] 00:27:40.335 }' 00:27:40.335 04:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:40.335 04:22:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:40.902 04:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:41.162 [2024-07-23 04:22:49.867899] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:41.162 [2024-07-23 04:22:49.896337] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caaba0 00:27:41.162 [2024-07-23 04:22:49.898669] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:41.162 04:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:27:42.540 04:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:42.540 04:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:42.540 04:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:42.540 04:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:42.540 04:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:42.540 04:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:42.540 04:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:42.540 04:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:42.540 "name": "raid_bdev1", 00:27:42.540 "uuid": "4ddc86e5-d8e5-4009-bd94-96bb9aaf0c29", 00:27:42.540 "strip_size_kb": 0, 00:27:42.540 "state": "online", 00:27:42.540 "raid_level": "raid1", 00:27:42.540 "superblock": true, 00:27:42.540 "num_base_bdevs": 2, 00:27:42.540 "num_base_bdevs_discovered": 2, 00:27:42.540 "num_base_bdevs_operational": 2, 00:27:42.540 "process": { 00:27:42.540 "type": "rebuild", 00:27:42.540 "target": "spare", 00:27:42.540 "progress": { 00:27:42.540 "blocks": 24576, 00:27:42.540 "percent": 38 00:27:42.540 } 00:27:42.540 }, 00:27:42.540 
"base_bdevs_list": [ 00:27:42.540 { 00:27:42.540 "name": "spare", 00:27:42.540 "uuid": "d4c27400-b217-512d-af2d-a8bce5f3e876", 00:27:42.540 "is_configured": true, 00:27:42.540 "data_offset": 2048, 00:27:42.540 "data_size": 63488 00:27:42.540 }, 00:27:42.540 { 00:27:42.540 "name": "BaseBdev2", 00:27:42.540 "uuid": "1900467b-895a-5c13-b31d-2868ab26c9e0", 00:27:42.540 "is_configured": true, 00:27:42.540 "data_offset": 2048, 00:27:42.540 "data_size": 63488 00:27:42.540 } 00:27:42.540 ] 00:27:42.540 }' 00:27:42.540 04:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:42.540 04:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:42.540 04:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:42.540 04:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:42.540 04:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:42.799 [2024-07-23 04:22:51.439572] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:42.799 [2024-07-23 04:22:51.511674] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:42.799 [2024-07-23 04:22:51.511747] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:42.799 [2024-07-23 04:22:51.511770] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:42.799 [2024-07-23 04:22:51.511794] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:42.799 04:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:42.799 04:22:51 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:42.799 04:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:42.799 04:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:42.799 04:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:42.799 04:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:42.799 04:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:42.799 04:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:42.799 04:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:42.799 04:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:42.799 04:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:42.799 04:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:43.056 04:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:43.056 "name": "raid_bdev1", 00:27:43.056 "uuid": "4ddc86e5-d8e5-4009-bd94-96bb9aaf0c29", 00:27:43.056 "strip_size_kb": 0, 00:27:43.056 "state": "online", 00:27:43.056 "raid_level": "raid1", 00:27:43.056 "superblock": true, 00:27:43.056 "num_base_bdevs": 2, 00:27:43.056 "num_base_bdevs_discovered": 1, 00:27:43.056 "num_base_bdevs_operational": 1, 00:27:43.056 "base_bdevs_list": [ 00:27:43.056 { 00:27:43.057 "name": null, 00:27:43.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:43.057 "is_configured": false, 00:27:43.057 "data_offset": 2048, 00:27:43.057 "data_size": 63488 00:27:43.057 }, 00:27:43.057 { 00:27:43.057 "name": "BaseBdev2", 
00:27:43.057 "uuid": "1900467b-895a-5c13-b31d-2868ab26c9e0", 00:27:43.057 "is_configured": true, 00:27:43.057 "data_offset": 2048, 00:27:43.057 "data_size": 63488 00:27:43.057 } 00:27:43.057 ] 00:27:43.057 }' 00:27:43.057 04:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:43.057 04:22:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:43.622 04:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:43.622 04:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:43.622 04:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:43.622 04:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:43.622 04:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:43.622 04:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:43.622 04:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:43.881 04:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:43.881 "name": "raid_bdev1", 00:27:43.881 "uuid": "4ddc86e5-d8e5-4009-bd94-96bb9aaf0c29", 00:27:43.881 "strip_size_kb": 0, 00:27:43.881 "state": "online", 00:27:43.881 "raid_level": "raid1", 00:27:43.881 "superblock": true, 00:27:43.881 "num_base_bdevs": 2, 00:27:43.881 "num_base_bdevs_discovered": 1, 00:27:43.881 "num_base_bdevs_operational": 1, 00:27:43.881 "base_bdevs_list": [ 00:27:43.881 { 00:27:43.881 "name": null, 00:27:43.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:43.881 "is_configured": false, 00:27:43.881 "data_offset": 2048, 00:27:43.881 "data_size": 63488 00:27:43.881 }, 
00:27:43.881 { 00:27:43.881 "name": "BaseBdev2", 00:27:43.881 "uuid": "1900467b-895a-5c13-b31d-2868ab26c9e0", 00:27:43.881 "is_configured": true, 00:27:43.881 "data_offset": 2048, 00:27:43.881 "data_size": 63488 00:27:43.881 } 00:27:43.881 ] 00:27:43.881 }' 00:27:43.881 04:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:43.881 04:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:43.881 04:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:44.139 04:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:44.139 04:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:44.139 [2024-07-23 04:22:52.896437] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:44.139 [2024-07-23 04:22:52.922402] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caac70 00:27:44.397 [2024-07-23 04:22:52.924726] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:44.397 04:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:45.360 04:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:45.360 04:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:45.360 04:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:45.360 04:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:45.360 04:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:45.360 04:22:53 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:45.360 04:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:45.619 04:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:45.619 "name": "raid_bdev1", 00:27:45.619 "uuid": "4ddc86e5-d8e5-4009-bd94-96bb9aaf0c29", 00:27:45.619 "strip_size_kb": 0, 00:27:45.619 "state": "online", 00:27:45.619 "raid_level": "raid1", 00:27:45.619 "superblock": true, 00:27:45.619 "num_base_bdevs": 2, 00:27:45.619 "num_base_bdevs_discovered": 2, 00:27:45.619 "num_base_bdevs_operational": 2, 00:27:45.619 "process": { 00:27:45.619 "type": "rebuild", 00:27:45.619 "target": "spare", 00:27:45.619 "progress": { 00:27:45.619 "blocks": 24576, 00:27:45.619 "percent": 38 00:27:45.619 } 00:27:45.619 }, 00:27:45.619 "base_bdevs_list": [ 00:27:45.619 { 00:27:45.619 "name": "spare", 00:27:45.619 "uuid": "d4c27400-b217-512d-af2d-a8bce5f3e876", 00:27:45.619 "is_configured": true, 00:27:45.619 "data_offset": 2048, 00:27:45.619 "data_size": 63488 00:27:45.619 }, 00:27:45.619 { 00:27:45.619 "name": "BaseBdev2", 00:27:45.619 "uuid": "1900467b-895a-5c13-b31d-2868ab26c9e0", 00:27:45.619 "is_configured": true, 00:27:45.619 "data_offset": 2048, 00:27:45.619 "data_size": 63488 00:27:45.619 } 00:27:45.619 ] 00:27:45.619 }' 00:27:45.619 04:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:45.619 04:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:45.619 04:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:45.619 04:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:45.619 04:22:54 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:27:45.619 04:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:27:45.619 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:27:45.619 04:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:27:45.619 04:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:27:45.619 04:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:27:45.619 04:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=860 00:27:45.619 04:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:45.619 04:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:45.619 04:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:45.619 04:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:45.619 04:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:45.619 04:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:45.619 04:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:45.619 04:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:45.877 04:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:45.877 "name": "raid_bdev1", 00:27:45.877 "uuid": "4ddc86e5-d8e5-4009-bd94-96bb9aaf0c29", 00:27:45.877 "strip_size_kb": 0, 00:27:45.877 "state": "online", 00:27:45.877 "raid_level": "raid1", 00:27:45.877 
"superblock": true, 00:27:45.877 "num_base_bdevs": 2, 00:27:45.877 "num_base_bdevs_discovered": 2, 00:27:45.877 "num_base_bdevs_operational": 2, 00:27:45.877 "process": { 00:27:45.877 "type": "rebuild", 00:27:45.877 "target": "spare", 00:27:45.877 "progress": { 00:27:45.877 "blocks": 30720, 00:27:45.877 "percent": 48 00:27:45.877 } 00:27:45.877 }, 00:27:45.877 "base_bdevs_list": [ 00:27:45.877 { 00:27:45.877 "name": "spare", 00:27:45.877 "uuid": "d4c27400-b217-512d-af2d-a8bce5f3e876", 00:27:45.877 "is_configured": true, 00:27:45.877 "data_offset": 2048, 00:27:45.877 "data_size": 63488 00:27:45.877 }, 00:27:45.877 { 00:27:45.877 "name": "BaseBdev2", 00:27:45.877 "uuid": "1900467b-895a-5c13-b31d-2868ab26c9e0", 00:27:45.877 "is_configured": true, 00:27:45.877 "data_offset": 2048, 00:27:45.877 "data_size": 63488 00:27:45.877 } 00:27:45.877 ] 00:27:45.877 }' 00:27:45.877 04:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:45.877 04:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:45.877 04:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:45.877 04:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:45.877 04:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:27:47.253 04:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:47.253 04:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:47.253 04:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:47.253 04:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:47.253 04:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:47.253 
04:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:47.253 04:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:47.253 04:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:47.253 04:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:47.253 "name": "raid_bdev1", 00:27:47.253 "uuid": "4ddc86e5-d8e5-4009-bd94-96bb9aaf0c29", 00:27:47.253 "strip_size_kb": 0, 00:27:47.253 "state": "online", 00:27:47.253 "raid_level": "raid1", 00:27:47.253 "superblock": true, 00:27:47.253 "num_base_bdevs": 2, 00:27:47.253 "num_base_bdevs_discovered": 2, 00:27:47.253 "num_base_bdevs_operational": 2, 00:27:47.253 "process": { 00:27:47.253 "type": "rebuild", 00:27:47.253 "target": "spare", 00:27:47.253 "progress": { 00:27:47.253 "blocks": 57344, 00:27:47.253 "percent": 90 00:27:47.253 } 00:27:47.253 }, 00:27:47.253 "base_bdevs_list": [ 00:27:47.253 { 00:27:47.253 "name": "spare", 00:27:47.253 "uuid": "d4c27400-b217-512d-af2d-a8bce5f3e876", 00:27:47.253 "is_configured": true, 00:27:47.253 "data_offset": 2048, 00:27:47.253 "data_size": 63488 00:27:47.253 }, 00:27:47.253 { 00:27:47.253 "name": "BaseBdev2", 00:27:47.253 "uuid": "1900467b-895a-5c13-b31d-2868ab26c9e0", 00:27:47.253 "is_configured": true, 00:27:47.253 "data_offset": 2048, 00:27:47.253 "data_size": 63488 00:27:47.253 } 00:27:47.253 ] 00:27:47.253 }' 00:27:47.253 04:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:47.253 04:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:47.253 04:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:47.253 04:22:55 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:47.253 04:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:27:47.512 [2024-07-23 04:22:56.049674] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:27:47.512 [2024-07-23 04:22:56.049747] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:47.512 [2024-07-23 04:22:56.049846] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:48.448 04:22:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:48.448 04:22:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:48.448 04:22:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:48.448 04:22:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:48.448 04:22:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:48.448 04:22:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:48.448 04:22:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:48.448 04:22:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:48.448 04:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:48.448 "name": "raid_bdev1", 00:27:48.448 "uuid": "4ddc86e5-d8e5-4009-bd94-96bb9aaf0c29", 00:27:48.448 "strip_size_kb": 0, 00:27:48.448 "state": "online", 00:27:48.448 "raid_level": "raid1", 00:27:48.448 "superblock": true, 00:27:48.448 "num_base_bdevs": 2, 00:27:48.448 "num_base_bdevs_discovered": 2, 00:27:48.448 "num_base_bdevs_operational": 2, 00:27:48.448 
"base_bdevs_list": [ 00:27:48.448 { 00:27:48.448 "name": "spare", 00:27:48.448 "uuid": "d4c27400-b217-512d-af2d-a8bce5f3e876", 00:27:48.448 "is_configured": true, 00:27:48.448 "data_offset": 2048, 00:27:48.448 "data_size": 63488 00:27:48.448 }, 00:27:48.448 { 00:27:48.448 "name": "BaseBdev2", 00:27:48.448 "uuid": "1900467b-895a-5c13-b31d-2868ab26c9e0", 00:27:48.448 "is_configured": true, 00:27:48.448 "data_offset": 2048, 00:27:48.448 "data_size": 63488 00:27:48.448 } 00:27:48.448 ] 00:27:48.448 }' 00:27:48.448 04:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:48.448 04:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:48.448 04:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:48.707 04:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:27:48.707 04:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:27:48.708 04:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:48.708 04:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:48.708 04:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:48.708 04:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:48.708 04:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:48.708 04:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:48.708 04:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:48.708 04:22:57 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:48.708 "name": "raid_bdev1", 00:27:48.708 "uuid": "4ddc86e5-d8e5-4009-bd94-96bb9aaf0c29", 00:27:48.708 "strip_size_kb": 0, 00:27:48.708 "state": "online", 00:27:48.708 "raid_level": "raid1", 00:27:48.708 "superblock": true, 00:27:48.708 "num_base_bdevs": 2, 00:27:48.708 "num_base_bdevs_discovered": 2, 00:27:48.708 "num_base_bdevs_operational": 2, 00:27:48.708 "base_bdevs_list": [ 00:27:48.708 { 00:27:48.708 "name": "spare", 00:27:48.708 "uuid": "d4c27400-b217-512d-af2d-a8bce5f3e876", 00:27:48.708 "is_configured": true, 00:27:48.708 "data_offset": 2048, 00:27:48.708 "data_size": 63488 00:27:48.708 }, 00:27:48.708 { 00:27:48.708 "name": "BaseBdev2", 00:27:48.708 "uuid": "1900467b-895a-5c13-b31d-2868ab26c9e0", 00:27:48.708 "is_configured": true, 00:27:48.708 "data_offset": 2048, 00:27:48.708 "data_size": 63488 00:27:48.708 } 00:27:48.708 ] 00:27:48.708 }' 00:27:48.708 04:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:48.967 04:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:48.967 04:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:48.967 04:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:48.967 04:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:48.967 04:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:48.967 04:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:48.967 04:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:48.967 04:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:48.967 04:22:57 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:48.967 04:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:48.967 04:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:48.967 04:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:48.967 04:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:48.967 04:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:48.967 04:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:49.225 04:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:49.225 "name": "raid_bdev1", 00:27:49.225 "uuid": "4ddc86e5-d8e5-4009-bd94-96bb9aaf0c29", 00:27:49.225 "strip_size_kb": 0, 00:27:49.225 "state": "online", 00:27:49.225 "raid_level": "raid1", 00:27:49.225 "superblock": true, 00:27:49.225 "num_base_bdevs": 2, 00:27:49.225 "num_base_bdevs_discovered": 2, 00:27:49.225 "num_base_bdevs_operational": 2, 00:27:49.225 "base_bdevs_list": [ 00:27:49.225 { 00:27:49.225 "name": "spare", 00:27:49.225 "uuid": "d4c27400-b217-512d-af2d-a8bce5f3e876", 00:27:49.225 "is_configured": true, 00:27:49.225 "data_offset": 2048, 00:27:49.225 "data_size": 63488 00:27:49.225 }, 00:27:49.225 { 00:27:49.225 "name": "BaseBdev2", 00:27:49.225 "uuid": "1900467b-895a-5c13-b31d-2868ab26c9e0", 00:27:49.225 "is_configured": true, 00:27:49.225 "data_offset": 2048, 00:27:49.225 "data_size": 63488 00:27:49.225 } 00:27:49.225 ] 00:27:49.225 }' 00:27:49.225 04:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:49.225 04:22:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:49.793 04:22:58 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:50.052 [2024-07-23 04:22:58.581557] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:50.052 [2024-07-23 04:22:58.581594] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:50.052 [2024-07-23 04:22:58.581681] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:50.052 [2024-07-23 04:22:58.581758] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:50.052 [2024-07-23 04:22:58.581775] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041a80 name raid_bdev1, state offline 00:27:50.052 04:22:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:50.052 04:22:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:27:50.052 04:22:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:27:50.052 04:22:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:27:50.052 04:22:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:27:50.052 04:22:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:27:50.052 04:22:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:50.052 04:22:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:27:50.052 04:22:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:50.052 04:22:58 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:50.052 04:22:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:50.052 04:22:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:27:50.052 04:22:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:50.052 04:22:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:50.052 04:22:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:27:50.311 /dev/nbd0 00:27:50.311 04:22:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:50.311 04:22:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:50.311 04:22:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:27:50.311 04:22:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:27:50.311 04:22:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:50.311 04:22:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:50.311 04:22:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:27:50.311 04:22:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:27:50.311 04:22:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:50.311 04:22:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:50.311 04:22:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:50.311 1+0 records in 00:27:50.311 1+0 records out 00:27:50.311 
4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264283 s, 15.5 MB/s 00:27:50.311 04:22:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:50.311 04:22:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:27:50.311 04:22:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:50.311 04:22:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:50.311 04:22:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:27:50.311 04:22:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:50.311 04:22:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:50.311 04:22:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:27:50.570 /dev/nbd1 00:27:50.570 04:22:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:50.570 04:22:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:50.570 04:22:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:27:50.570 04:22:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:27:50.570 04:22:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:50.570 04:22:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:50.570 04:22:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:27:50.570 04:22:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:27:50.570 04:22:59 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:50.570 04:22:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:50.570 04:22:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:50.570 1+0 records in 00:27:50.570 1+0 records out 00:27:50.570 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000334501 s, 12.2 MB/s 00:27:50.570 04:22:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:50.829 04:22:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:27:50.829 04:22:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:50.829 04:22:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:50.829 04:22:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:27:50.829 04:22:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:50.829 04:22:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:50.829 04:22:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:27:50.829 04:22:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:27:50.829 04:22:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:50.829 04:22:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:50.829 04:22:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:50.829 04:22:59 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:27:50.829 04:22:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:50.829 04:22:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:51.088 04:22:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:51.088 04:22:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:51.088 04:22:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:51.088 04:22:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:51.088 04:22:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:51.088 04:22:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:51.088 04:22:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:27:51.088 04:22:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:27:51.088 04:22:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:51.088 04:22:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:27:51.347 04:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:51.347 04:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:51.347 04:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:51.347 04:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:51.347 04:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:51.347 
04:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:51.347 04:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:27:51.347 04:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:27:51.347 04:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:27:51.347 04:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:51.606 04:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:51.864 [2024-07-23 04:23:00.482198] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:51.864 [2024-07-23 04:23:00.482252] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:51.864 [2024-07-23 04:23:00.482281] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043280 00:27:51.864 [2024-07-23 04:23:00.482296] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:51.864 [2024-07-23 04:23:00.485055] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:51.864 [2024-07-23 04:23:00.485089] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:51.864 [2024-07-23 04:23:00.485199] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:51.864 [2024-07-23 04:23:00.485269] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:51.864 [2024-07-23 04:23:00.485454] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:51.864 spare 00:27:51.864 04:23:00 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:51.864 04:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:51.864 04:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:51.864 04:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:51.864 04:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:51.864 04:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:51.864 04:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:51.864 04:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:51.864 04:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:51.864 04:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:51.864 04:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:51.864 04:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:51.864 [2024-07-23 04:23:00.585791] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000043880 00:27:51.864 [2024-07-23 04:23:00.585822] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:51.864 [2024-07-23 04:23:00.586152] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc9320 00:27:51.864 [2024-07-23 04:23:00.586410] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000043880 00:27:51.864 [2024-07-23 04:23:00.586427] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
raid_bdev1, raid_bdev 0x616000043880 00:27:51.864 [2024-07-23 04:23:00.586616] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:52.123 04:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:52.123 "name": "raid_bdev1", 00:27:52.123 "uuid": "4ddc86e5-d8e5-4009-bd94-96bb9aaf0c29", 00:27:52.123 "strip_size_kb": 0, 00:27:52.123 "state": "online", 00:27:52.123 "raid_level": "raid1", 00:27:52.123 "superblock": true, 00:27:52.123 "num_base_bdevs": 2, 00:27:52.123 "num_base_bdevs_discovered": 2, 00:27:52.123 "num_base_bdevs_operational": 2, 00:27:52.123 "base_bdevs_list": [ 00:27:52.123 { 00:27:52.123 "name": "spare", 00:27:52.123 "uuid": "d4c27400-b217-512d-af2d-a8bce5f3e876", 00:27:52.123 "is_configured": true, 00:27:52.123 "data_offset": 2048, 00:27:52.123 "data_size": 63488 00:27:52.123 }, 00:27:52.123 { 00:27:52.123 "name": "BaseBdev2", 00:27:52.123 "uuid": "1900467b-895a-5c13-b31d-2868ab26c9e0", 00:27:52.123 "is_configured": true, 00:27:52.123 "data_offset": 2048, 00:27:52.123 "data_size": 63488 00:27:52.123 } 00:27:52.123 ] 00:27:52.123 }' 00:27:52.123 04:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:52.123 04:23:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:52.691 04:23:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:52.691 04:23:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:52.691 04:23:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:52.691 04:23:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:52.691 04:23:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:52.691 04:23:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:52.691 04:23:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:52.950 04:23:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:52.950 "name": "raid_bdev1", 00:27:52.950 "uuid": "4ddc86e5-d8e5-4009-bd94-96bb9aaf0c29", 00:27:52.950 "strip_size_kb": 0, 00:27:52.950 "state": "online", 00:27:52.950 "raid_level": "raid1", 00:27:52.950 "superblock": true, 00:27:52.950 "num_base_bdevs": 2, 00:27:52.950 "num_base_bdevs_discovered": 2, 00:27:52.950 "num_base_bdevs_operational": 2, 00:27:52.950 "base_bdevs_list": [ 00:27:52.950 { 00:27:52.950 "name": "spare", 00:27:52.950 "uuid": "d4c27400-b217-512d-af2d-a8bce5f3e876", 00:27:52.950 "is_configured": true, 00:27:52.950 "data_offset": 2048, 00:27:52.950 "data_size": 63488 00:27:52.950 }, 00:27:52.950 { 00:27:52.950 "name": "BaseBdev2", 00:27:52.950 "uuid": "1900467b-895a-5c13-b31d-2868ab26c9e0", 00:27:52.950 "is_configured": true, 00:27:52.950 "data_offset": 2048, 00:27:52.950 "data_size": 63488 00:27:52.950 } 00:27:52.950 ] 00:27:52.950 }' 00:27:52.950 04:23:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:52.950 04:23:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:52.950 04:23:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:52.950 04:23:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:52.950 04:23:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:27:52.950 04:23:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:53.209 04:23:01 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:27:53.209 04:23:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:53.467 [2024-07-23 04:23:02.054773] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:53.467 04:23:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:53.467 04:23:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:53.467 04:23:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:53.467 04:23:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:53.467 04:23:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:53.467 04:23:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:53.467 04:23:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:53.467 04:23:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:53.467 04:23:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:53.467 04:23:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:53.467 04:23:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:53.467 04:23:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:53.726 04:23:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:53.726 "name": "raid_bdev1", 00:27:53.726 "uuid": 
"4ddc86e5-d8e5-4009-bd94-96bb9aaf0c29", 00:27:53.726 "strip_size_kb": 0, 00:27:53.726 "state": "online", 00:27:53.726 "raid_level": "raid1", 00:27:53.726 "superblock": true, 00:27:53.726 "num_base_bdevs": 2, 00:27:53.726 "num_base_bdevs_discovered": 1, 00:27:53.726 "num_base_bdevs_operational": 1, 00:27:53.726 "base_bdevs_list": [ 00:27:53.726 { 00:27:53.726 "name": null, 00:27:53.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:53.726 "is_configured": false, 00:27:53.726 "data_offset": 2048, 00:27:53.726 "data_size": 63488 00:27:53.726 }, 00:27:53.726 { 00:27:53.726 "name": "BaseBdev2", 00:27:53.726 "uuid": "1900467b-895a-5c13-b31d-2868ab26c9e0", 00:27:53.726 "is_configured": true, 00:27:53.726 "data_offset": 2048, 00:27:53.726 "data_size": 63488 00:27:53.726 } 00:27:53.726 ] 00:27:53.726 }' 00:27:53.726 04:23:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:53.726 04:23:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:54.294 04:23:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:54.294 [2024-07-23 04:23:03.069559] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:54.294 [2024-07-23 04:23:03.069758] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:27:54.294 [2024-07-23 04:23:03.069785] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:27:54.294 [2024-07-23 04:23:03.069824] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:54.552 [2024-07-23 04:23:03.095089] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc93f0 00:27:54.552 [2024-07-23 04:23:03.097422] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:54.552 04:23:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:27:55.487 04:23:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:55.487 04:23:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:55.487 04:23:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:55.487 04:23:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:55.487 04:23:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:55.487 04:23:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:55.487 04:23:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:55.746 04:23:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:55.746 "name": "raid_bdev1", 00:27:55.746 "uuid": "4ddc86e5-d8e5-4009-bd94-96bb9aaf0c29", 00:27:55.746 "strip_size_kb": 0, 00:27:55.746 "state": "online", 00:27:55.746 "raid_level": "raid1", 00:27:55.746 "superblock": true, 00:27:55.746 "num_base_bdevs": 2, 00:27:55.746 "num_base_bdevs_discovered": 2, 00:27:55.746 "num_base_bdevs_operational": 2, 00:27:55.746 "process": { 00:27:55.746 "type": "rebuild", 00:27:55.746 "target": "spare", 00:27:55.746 "progress": { 00:27:55.746 "blocks": 24576, 00:27:55.746 "percent": 38 
00:27:55.746 } 00:27:55.746 }, 00:27:55.746 "base_bdevs_list": [ 00:27:55.746 { 00:27:55.746 "name": "spare", 00:27:55.746 "uuid": "d4c27400-b217-512d-af2d-a8bce5f3e876", 00:27:55.746 "is_configured": true, 00:27:55.746 "data_offset": 2048, 00:27:55.746 "data_size": 63488 00:27:55.746 }, 00:27:55.746 { 00:27:55.746 "name": "BaseBdev2", 00:27:55.746 "uuid": "1900467b-895a-5c13-b31d-2868ab26c9e0", 00:27:55.746 "is_configured": true, 00:27:55.746 "data_offset": 2048, 00:27:55.746 "data_size": 63488 00:27:55.746 } 00:27:55.746 ] 00:27:55.746 }' 00:27:55.746 04:23:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:55.746 04:23:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:55.746 04:23:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:55.746 04:23:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:55.746 04:23:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:56.006 [2024-07-23 04:23:04.646408] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:56.006 [2024-07-23 04:23:04.710350] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:56.006 [2024-07-23 04:23:04.710416] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:56.006 [2024-07-23 04:23:04.710437] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:56.006 [2024-07-23 04:23:04.710452] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:56.006 04:23:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:56.006 04:23:04 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:56.006 04:23:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:56.006 04:23:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:56.006 04:23:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:56.006 04:23:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:56.006 04:23:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:56.006 04:23:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:56.006 04:23:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:56.006 04:23:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:56.006 04:23:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:56.006 04:23:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:56.277 04:23:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:56.277 "name": "raid_bdev1", 00:27:56.277 "uuid": "4ddc86e5-d8e5-4009-bd94-96bb9aaf0c29", 00:27:56.277 "strip_size_kb": 0, 00:27:56.277 "state": "online", 00:27:56.277 "raid_level": "raid1", 00:27:56.277 "superblock": true, 00:27:56.277 "num_base_bdevs": 2, 00:27:56.277 "num_base_bdevs_discovered": 1, 00:27:56.277 "num_base_bdevs_operational": 1, 00:27:56.277 "base_bdevs_list": [ 00:27:56.277 { 00:27:56.277 "name": null, 00:27:56.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:56.277 "is_configured": false, 00:27:56.277 "data_offset": 2048, 00:27:56.277 "data_size": 63488 00:27:56.277 }, 00:27:56.277 { 
00:27:56.277 "name": "BaseBdev2", 00:27:56.277 "uuid": "1900467b-895a-5c13-b31d-2868ab26c9e0", 00:27:56.277 "is_configured": true, 00:27:56.277 "data_offset": 2048, 00:27:56.277 "data_size": 63488 00:27:56.277 } 00:27:56.277 ] 00:27:56.277 }' 00:27:56.277 04:23:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:56.277 04:23:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:56.868 04:23:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:57.127 [2024-07-23 04:23:05.731922] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:57.127 [2024-07-23 04:23:05.731990] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:57.127 [2024-07-23 04:23:05.732017] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043e80 00:27:57.127 [2024-07-23 04:23:05.732035] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:57.127 [2024-07-23 04:23:05.732645] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:57.127 [2024-07-23 04:23:05.732674] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:57.127 [2024-07-23 04:23:05.732781] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:57.127 [2024-07-23 04:23:05.732802] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:27:57.127 [2024-07-23 04:23:05.732818] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:27:57.128 [2024-07-23 04:23:05.732855] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:57.128 [2024-07-23 04:23:05.758182] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc94c0 00:27:57.128 spare 00:27:57.128 [2024-07-23 04:23:05.760475] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:57.128 04:23:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:27:58.065 04:23:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:58.065 04:23:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:58.065 04:23:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:58.065 04:23:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:58.065 04:23:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:58.065 04:23:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:58.065 04:23:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:58.324 04:23:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:58.324 "name": "raid_bdev1", 00:27:58.324 "uuid": "4ddc86e5-d8e5-4009-bd94-96bb9aaf0c29", 00:27:58.324 "strip_size_kb": 0, 00:27:58.324 "state": "online", 00:27:58.324 "raid_level": "raid1", 00:27:58.324 "superblock": true, 00:27:58.324 "num_base_bdevs": 2, 00:27:58.324 "num_base_bdevs_discovered": 2, 00:27:58.324 "num_base_bdevs_operational": 2, 00:27:58.324 "process": { 00:27:58.324 "type": "rebuild", 00:27:58.324 "target": "spare", 00:27:58.324 "progress": { 00:27:58.324 "blocks": 24576, 00:27:58.324 
"percent": 38 00:27:58.324 } 00:27:58.324 }, 00:27:58.324 "base_bdevs_list": [ 00:27:58.324 { 00:27:58.324 "name": "spare", 00:27:58.324 "uuid": "d4c27400-b217-512d-af2d-a8bce5f3e876", 00:27:58.324 "is_configured": true, 00:27:58.324 "data_offset": 2048, 00:27:58.324 "data_size": 63488 00:27:58.324 }, 00:27:58.324 { 00:27:58.324 "name": "BaseBdev2", 00:27:58.324 "uuid": "1900467b-895a-5c13-b31d-2868ab26c9e0", 00:27:58.324 "is_configured": true, 00:27:58.324 "data_offset": 2048, 00:27:58.324 "data_size": 63488 00:27:58.324 } 00:27:58.324 ] 00:27:58.324 }' 00:27:58.324 04:23:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:58.324 04:23:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:58.324 04:23:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:58.324 04:23:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:58.324 04:23:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:58.583 [2024-07-23 04:23:07.289796] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:58.842 [2024-07-23 04:23:07.373412] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:58.842 [2024-07-23 04:23:07.373471] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:58.842 [2024-07-23 04:23:07.373496] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:58.842 [2024-07-23 04:23:07.373512] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:58.842 04:23:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:58.842 
04:23:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:58.842 04:23:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:58.842 04:23:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:58.842 04:23:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:58.842 04:23:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:58.842 04:23:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:58.842 04:23:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:58.842 04:23:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:58.843 04:23:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:58.843 04:23:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:58.843 04:23:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:59.102 04:23:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:59.102 "name": "raid_bdev1", 00:27:59.102 "uuid": "4ddc86e5-d8e5-4009-bd94-96bb9aaf0c29", 00:27:59.102 "strip_size_kb": 0, 00:27:59.102 "state": "online", 00:27:59.102 "raid_level": "raid1", 00:27:59.102 "superblock": true, 00:27:59.102 "num_base_bdevs": 2, 00:27:59.102 "num_base_bdevs_discovered": 1, 00:27:59.102 "num_base_bdevs_operational": 1, 00:27:59.102 "base_bdevs_list": [ 00:27:59.102 { 00:27:59.102 "name": null, 00:27:59.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:59.102 "is_configured": false, 00:27:59.102 "data_offset": 2048, 00:27:59.102 "data_size": 63488 00:27:59.102 }, 
00:27:59.102 { 00:27:59.102 "name": "BaseBdev2", 00:27:59.102 "uuid": "1900467b-895a-5c13-b31d-2868ab26c9e0", 00:27:59.102 "is_configured": true, 00:27:59.102 "data_offset": 2048, 00:27:59.102 "data_size": 63488 00:27:59.102 } 00:27:59.102 ] 00:27:59.102 }' 00:27:59.102 04:23:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:59.102 04:23:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:59.670 04:23:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:59.670 04:23:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:59.670 04:23:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:59.670 04:23:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:59.670 04:23:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:59.670 04:23:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:59.670 04:23:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:59.929 04:23:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:59.929 "name": "raid_bdev1", 00:27:59.929 "uuid": "4ddc86e5-d8e5-4009-bd94-96bb9aaf0c29", 00:27:59.929 "strip_size_kb": 0, 00:27:59.929 "state": "online", 00:27:59.929 "raid_level": "raid1", 00:27:59.929 "superblock": true, 00:27:59.929 "num_base_bdevs": 2, 00:27:59.929 "num_base_bdevs_discovered": 1, 00:27:59.929 "num_base_bdevs_operational": 1, 00:27:59.929 "base_bdevs_list": [ 00:27:59.929 { 00:27:59.929 "name": null, 00:27:59.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:59.929 "is_configured": false, 00:27:59.929 "data_offset": 2048, 
00:27:59.929 "data_size": 63488 00:27:59.929 }, 00:27:59.929 { 00:27:59.929 "name": "BaseBdev2", 00:27:59.929 "uuid": "1900467b-895a-5c13-b31d-2868ab26c9e0", 00:27:59.929 "is_configured": true, 00:27:59.929 "data_offset": 2048, 00:27:59.929 "data_size": 63488 00:27:59.929 } 00:27:59.929 ] 00:27:59.929 }' 00:27:59.929 04:23:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:59.929 04:23:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:59.929 04:23:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:59.929 04:23:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:59.929 04:23:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:28:00.188 04:23:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:00.757 [2024-07-23 04:23:09.245544] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:00.757 [2024-07-23 04:23:09.245605] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:00.757 [2024-07-23 04:23:09.245633] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000044480 00:28:00.757 [2024-07-23 04:23:09.245649] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:00.757 [2024-07-23 04:23:09.246234] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:00.757 [2024-07-23 04:23:09.246260] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:00.757 [2024-07-23 04:23:09.246360] 
bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:28:00.757 [2024-07-23 04:23:09.246379] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:00.757 [2024-07-23 04:23:09.246395] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:00.757 BaseBdev1 00:28:00.757 04:23:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:28:01.693 04:23:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:01.693 04:23:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:01.693 04:23:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:01.693 04:23:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:01.693 04:23:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:01.693 04:23:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:01.693 04:23:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:01.693 04:23:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:01.693 04:23:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:01.693 04:23:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:01.693 04:23:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:01.693 04:23:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:01.952 04:23:10 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:01.953 "name": "raid_bdev1", 00:28:01.953 "uuid": "4ddc86e5-d8e5-4009-bd94-96bb9aaf0c29", 00:28:01.953 "strip_size_kb": 0, 00:28:01.953 "state": "online", 00:28:01.953 "raid_level": "raid1", 00:28:01.953 "superblock": true, 00:28:01.953 "num_base_bdevs": 2, 00:28:01.953 "num_base_bdevs_discovered": 1, 00:28:01.953 "num_base_bdevs_operational": 1, 00:28:01.953 "base_bdevs_list": [ 00:28:01.953 { 00:28:01.953 "name": null, 00:28:01.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:01.953 "is_configured": false, 00:28:01.953 "data_offset": 2048, 00:28:01.953 "data_size": 63488 00:28:01.953 }, 00:28:01.953 { 00:28:01.953 "name": "BaseBdev2", 00:28:01.953 "uuid": "1900467b-895a-5c13-b31d-2868ab26c9e0", 00:28:01.953 "is_configured": true, 00:28:01.953 "data_offset": 2048, 00:28:01.953 "data_size": 63488 00:28:01.953 } 00:28:01.953 ] 00:28:01.953 }' 00:28:01.953 04:23:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:01.953 04:23:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:02.520 04:23:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:02.520 04:23:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:02.520 04:23:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:02.520 04:23:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:02.520 04:23:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:02.520 04:23:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:02.520 04:23:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:28:02.779 04:23:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:02.779 "name": "raid_bdev1", 00:28:02.779 "uuid": "4ddc86e5-d8e5-4009-bd94-96bb9aaf0c29", 00:28:02.779 "strip_size_kb": 0, 00:28:02.779 "state": "online", 00:28:02.779 "raid_level": "raid1", 00:28:02.779 "superblock": true, 00:28:02.779 "num_base_bdevs": 2, 00:28:02.779 "num_base_bdevs_discovered": 1, 00:28:02.779 "num_base_bdevs_operational": 1, 00:28:02.779 "base_bdevs_list": [ 00:28:02.779 { 00:28:02.779 "name": null, 00:28:02.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:02.779 "is_configured": false, 00:28:02.779 "data_offset": 2048, 00:28:02.779 "data_size": 63488 00:28:02.779 }, 00:28:02.779 { 00:28:02.779 "name": "BaseBdev2", 00:28:02.779 "uuid": "1900467b-895a-5c13-b31d-2868ab26c9e0", 00:28:02.779 "is_configured": true, 00:28:02.779 "data_offset": 2048, 00:28:02.779 "data_size": 63488 00:28:02.779 } 00:28:02.779 ] 00:28:02.779 }' 00:28:02.779 04:23:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:02.779 04:23:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:02.779 04:23:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:02.779 04:23:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:02.779 04:23:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:02.779 04:23:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@648 -- # local es=0 00:28:02.779 04:23:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 
BaseBdev1 00:28:02.779 04:23:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:02.779 04:23:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:02.779 04:23:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:02.779 04:23:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:02.779 04:23:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:02.779 04:23:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:02.779 04:23:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:02.779 04:23:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:28:02.779 04:23:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:03.038 [2024-07-23 04:23:11.628001] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:03.038 [2024-07-23 04:23:11.628177] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:03.038 [2024-07-23 04:23:11.628198] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:03.038 request: 00:28:03.038 { 00:28:03.038 "base_bdev": "BaseBdev1", 00:28:03.038 "raid_bdev": "raid_bdev1", 00:28:03.038 "method": 
"bdev_raid_add_base_bdev", 00:28:03.038 "req_id": 1 00:28:03.038 } 00:28:03.038 Got JSON-RPC error response 00:28:03.038 response: 00:28:03.038 { 00:28:03.038 "code": -22, 00:28:03.038 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:28:03.038 } 00:28:03.038 04:23:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # es=1 00:28:03.038 04:23:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:03.038 04:23:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:03.038 04:23:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:03.038 04:23:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:28:03.976 04:23:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:03.976 04:23:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:03.976 04:23:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:03.976 04:23:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:03.976 04:23:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:03.976 04:23:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:03.976 04:23:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:03.976 04:23:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:03.976 04:23:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:03.976 04:23:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:03.976 04:23:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:03.976 04:23:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:04.236 04:23:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:04.236 "name": "raid_bdev1", 00:28:04.236 "uuid": "4ddc86e5-d8e5-4009-bd94-96bb9aaf0c29", 00:28:04.236 "strip_size_kb": 0, 00:28:04.236 "state": "online", 00:28:04.236 "raid_level": "raid1", 00:28:04.236 "superblock": true, 00:28:04.236 "num_base_bdevs": 2, 00:28:04.236 "num_base_bdevs_discovered": 1, 00:28:04.236 "num_base_bdevs_operational": 1, 00:28:04.236 "base_bdevs_list": [ 00:28:04.236 { 00:28:04.236 "name": null, 00:28:04.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:04.236 "is_configured": false, 00:28:04.236 "data_offset": 2048, 00:28:04.236 "data_size": 63488 00:28:04.236 }, 00:28:04.236 { 00:28:04.236 "name": "BaseBdev2", 00:28:04.236 "uuid": "1900467b-895a-5c13-b31d-2868ab26c9e0", 00:28:04.236 "is_configured": true, 00:28:04.236 "data_offset": 2048, 00:28:04.236 "data_size": 63488 00:28:04.236 } 00:28:04.236 ] 00:28:04.236 }' 00:28:04.237 04:23:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:04.237 04:23:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:04.804 04:23:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:04.804 04:23:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:04.804 04:23:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:04.804 04:23:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:04.804 04:23:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:04.804 04:23:13 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:04.804 04:23:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:05.063 04:23:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:05.063 "name": "raid_bdev1", 00:28:05.063 "uuid": "4ddc86e5-d8e5-4009-bd94-96bb9aaf0c29", 00:28:05.063 "strip_size_kb": 0, 00:28:05.063 "state": "online", 00:28:05.063 "raid_level": "raid1", 00:28:05.063 "superblock": true, 00:28:05.063 "num_base_bdevs": 2, 00:28:05.063 "num_base_bdevs_discovered": 1, 00:28:05.063 "num_base_bdevs_operational": 1, 00:28:05.063 "base_bdevs_list": [ 00:28:05.063 { 00:28:05.063 "name": null, 00:28:05.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:05.063 "is_configured": false, 00:28:05.063 "data_offset": 2048, 00:28:05.063 "data_size": 63488 00:28:05.063 }, 00:28:05.063 { 00:28:05.063 "name": "BaseBdev2", 00:28:05.063 "uuid": "1900467b-895a-5c13-b31d-2868ab26c9e0", 00:28:05.063 "is_configured": true, 00:28:05.063 "data_offset": 2048, 00:28:05.064 "data_size": 63488 00:28:05.064 } 00:28:05.064 ] 00:28:05.064 }' 00:28:05.064 04:23:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:05.064 04:23:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:05.064 04:23:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:05.064 04:23:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:05.064 04:23:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 2763671 00:28:05.064 04:23:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@948 -- # '[' -z 2763671 ']' 00:28:05.064 04:23:13 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@952 -- # kill -0 2763671 00:28:05.064 04:23:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # uname 00:28:05.064 04:23:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:05.064 04:23:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2763671 00:28:05.064 04:23:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:05.064 04:23:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:05.064 04:23:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2763671' 00:28:05.064 killing process with pid 2763671 00:28:05.064 04:23:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@967 -- # kill 2763671 00:28:05.064 Received shutdown signal, test time was about 60.000000 seconds 00:28:05.064 00:28:05.064 Latency(us) 00:28:05.064 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:05.064 =================================================================================================================== 00:28:05.064 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:05.064 [2024-07-23 04:23:13.824090] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:05.064 [2024-07-23 04:23:13.824228] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:05.064 04:23:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # wait 2763671 00:28:05.064 [2024-07-23 04:23:13.824292] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:05.064 [2024-07-23 04:23:13.824309] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000043880 name raid_bdev1, state offline 00:28:05.631 [2024-07-23 04:23:14.165322] 
bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:07.536 04:23:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:28:07.536 00:28:07.536 real 0m37.212s 00:28:07.536 user 0m52.623s 00:28:07.536 sys 0m6.383s 00:28:07.536 04:23:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:07.536 04:23:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:07.536 ************************************ 00:28:07.536 END TEST raid_rebuild_test_sb 00:28:07.536 ************************************ 00:28:07.536 04:23:15 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:28:07.536 04:23:15 bdev_raid -- bdev/bdev_raid.sh@879 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:28:07.536 04:23:15 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:28:07.536 04:23:15 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:07.536 04:23:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:07.536 ************************************ 00:28:07.536 START TEST raid_rebuild_test_io 00:28:07.536 ************************************ 00:28:07.536 04:23:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 false true true 00:28:07.536 04:23:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:28:07.536 04:23:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:28:07.536 04:23:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:28:07.536 04:23:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:28:07.536 04:23:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:28:07.536 04:23:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:28:07.536 04:23:15 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:07.536 04:23:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:28:07.536 04:23:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:07.536 04:23:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:07.536 04:23:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:28:07.536 04:23:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:07.536 04:23:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:07.536 04:23:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:28:07.536 04:23:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:28:07.536 04:23:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:28:07.536 04:23:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:28:07.536 04:23:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:28:07.536 04:23:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:28:07.536 04:23:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:28:07.536 04:23:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:28:07.536 04:23:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:28:07.536 04:23:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:28:07.536 04:23:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # raid_pid=2770224 00:28:07.536 04:23:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 2770224 /var/tmp/spdk-raid.sock 00:28:07.536 04:23:15 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:07.536 04:23:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@829 -- # '[' -z 2770224 ']' 00:28:07.536 04:23:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:07.536 04:23:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:07.536 04:23:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:07.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:07.536 04:23:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:07.536 04:23:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:07.536 [2024-07-23 04:23:16.043553] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:28:07.536 [2024-07-23 04:23:16.043676] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2770224 ] 00:28:07.536 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:07.536 Zero copy mechanism will not be used. 
00:28:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.536 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.536 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.536 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.536 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.536 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.536 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.536 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.536 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.536 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.536 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.536 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.536 EAL: Requested device 0000:3d:02.3 cannot be used 00:28:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.536 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.536 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:07.536 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.536 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.536 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.536 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.536 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.536 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.536 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.536 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.536 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.536 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.536 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.536 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.536 EAL: Requested device 0000:3f:02.1 cannot be used 00:28:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.536 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.536 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:07.536 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.536 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.536 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.537 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:07.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:07.537 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:07.537 [2024-07-23 04:23:16.269083] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:07.796 [2024-07-23 04:23:16.545127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:08.364 [2024-07-23 04:23:16.882202] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:08.364 [2024-07-23 04:23:16.882252] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:08.364 04:23:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:08.364 04:23:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # return 0 00:28:08.364 04:23:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:08.364 04:23:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:08.623 BaseBdev1_malloc 00:28:08.623 04:23:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:08.882 [2024-07-23 04:23:17.546510] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:08.882 [2024-07-23 04:23:17.546578] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:28:08.882 [2024-07-23 04:23:17.546607] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:28:08.882 [2024-07-23 04:23:17.546625] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:08.882 [2024-07-23 04:23:17.549412] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:08.882 [2024-07-23 04:23:17.549452] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:08.882 BaseBdev1 00:28:08.882 04:23:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:08.882 04:23:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:09.141 BaseBdev2_malloc 00:28:09.141 04:23:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:28:09.402 [2024-07-23 04:23:18.046520] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:28:09.402 [2024-07-23 04:23:18.046580] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:09.402 [2024-07-23 04:23:18.046608] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:28:09.402 [2024-07-23 04:23:18.046629] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:09.402 [2024-07-23 04:23:18.049378] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:09.402 [2024-07-23 04:23:18.049416] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:09.402 BaseBdev2 00:28:09.402 04:23:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@606 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:28:09.661 spare_malloc 00:28:09.661 04:23:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:09.919 spare_delay 00:28:09.919 04:23:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:10.178 [2024-07-23 04:23:18.796694] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:10.178 [2024-07-23 04:23:18.796751] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:10.178 [2024-07-23 04:23:18.796777] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480 00:28:10.178 [2024-07-23 04:23:18.796795] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:10.178 [2024-07-23 04:23:18.799585] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:10.178 [2024-07-23 04:23:18.799624] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:10.178 spare 00:28:10.178 04:23:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:28:10.437 [2024-07-23 04:23:19.021374] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:10.437 [2024-07-23 04:23:19.023743] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:10.438 [2024-07-23 04:23:19.023850] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x616000041a80 00:28:10.438 [2024-07-23 04:23:19.023870] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:28:10.438 [2024-07-23 04:23:19.024254] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:28:10.438 [2024-07-23 04:23:19.024525] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041a80 00:28:10.438 [2024-07-23 04:23:19.024543] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041a80 00:28:10.438 [2024-07-23 04:23:19.024788] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:10.438 04:23:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:10.438 04:23:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:10.438 04:23:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:10.438 04:23:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:10.438 04:23:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:10.438 04:23:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:10.438 04:23:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:10.438 04:23:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:10.438 04:23:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:10.438 04:23:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:10.438 04:23:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:10.438 04:23:19 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:10.697 04:23:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:10.697 "name": "raid_bdev1", 00:28:10.697 "uuid": "0dd65018-780f-4c35-a5db-df6a9112dd06", 00:28:10.697 "strip_size_kb": 0, 00:28:10.697 "state": "online", 00:28:10.697 "raid_level": "raid1", 00:28:10.697 "superblock": false, 00:28:10.697 "num_base_bdevs": 2, 00:28:10.697 "num_base_bdevs_discovered": 2, 00:28:10.697 "num_base_bdevs_operational": 2, 00:28:10.697 "base_bdevs_list": [ 00:28:10.697 { 00:28:10.697 "name": "BaseBdev1", 00:28:10.697 "uuid": "81a7c256-58af-5858-a5a3-c1fd4cbbcb24", 00:28:10.697 "is_configured": true, 00:28:10.697 "data_offset": 0, 00:28:10.697 "data_size": 65536 00:28:10.697 }, 00:28:10.697 { 00:28:10.697 "name": "BaseBdev2", 00:28:10.697 "uuid": "9dcd549d-1a15-59c7-b7e8-60dc05d5b4ed", 00:28:10.697 "is_configured": true, 00:28:10.697 "data_offset": 0, 00:28:10.697 "data_size": 65536 00:28:10.697 } 00:28:10.697 ] 00:28:10.697 }' 00:28:10.697 04:23:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:10.697 04:23:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:11.265 04:23:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:28:11.265 04:23:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:11.265 [2024-07-23 04:23:20.016404] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:11.265 04:23:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:28:11.265 04:23:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:28:11.265 04:23:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:11.525 04:23:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:28:11.525 04:23:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:28:11.525 04:23:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:28:11.525 04:23:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:28:11.784 [2024-07-23 04:23:20.377583] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000108b0 00:28:11.784 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:11.784 Zero copy mechanism will not be used. 00:28:11.784 Running I/O for 60 seconds... 
00:28:11.784 [2024-07-23 04:23:20.470458] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:11.784 [2024-07-23 04:23:20.478325] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000108b0 00:28:11.784 04:23:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:11.784 04:23:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:11.784 04:23:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:11.784 04:23:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:11.784 04:23:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:11.784 04:23:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:11.784 04:23:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:11.784 04:23:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:11.784 04:23:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:11.784 04:23:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:11.785 04:23:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:11.785 04:23:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:12.044 04:23:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:12.044 "name": "raid_bdev1", 00:28:12.044 "uuid": "0dd65018-780f-4c35-a5db-df6a9112dd06", 00:28:12.044 "strip_size_kb": 0, 00:28:12.044 "state": "online", 00:28:12.044 "raid_level": "raid1", 00:28:12.044 "superblock": 
false, 00:28:12.044 "num_base_bdevs": 2, 00:28:12.044 "num_base_bdevs_discovered": 1, 00:28:12.044 "num_base_bdevs_operational": 1, 00:28:12.044 "base_bdevs_list": [ 00:28:12.044 { 00:28:12.044 "name": null, 00:28:12.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:12.044 "is_configured": false, 00:28:12.044 "data_offset": 0, 00:28:12.044 "data_size": 65536 00:28:12.044 }, 00:28:12.044 { 00:28:12.044 "name": "BaseBdev2", 00:28:12.044 "uuid": "9dcd549d-1a15-59c7-b7e8-60dc05d5b4ed", 00:28:12.044 "is_configured": true, 00:28:12.044 "data_offset": 0, 00:28:12.044 "data_size": 65536 00:28:12.044 } 00:28:12.044 ] 00:28:12.044 }' 00:28:12.044 04:23:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:12.044 04:23:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:12.612 04:23:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:12.872 [2024-07-23 04:23:21.560819] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:12.872 [2024-07-23 04:23:21.641099] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010980 00:28:12.872 [2024-07-23 04:23:21.643490] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:12.872 04:23:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:28:13.131 [2024-07-23 04:23:21.769974] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:28:13.390 [2024-07-23 04:23:22.005639] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:13.390 [2024-07-23 04:23:22.005930] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 
00:28:13.649 [2024-07-23 04:23:22.276034] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:28:13.649 [2024-07-23 04:23:22.276455] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:28:13.909 [2024-07-23 04:23:22.504745] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:28:13.909 04:23:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:13.909 04:23:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:13.909 04:23:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:13.909 04:23:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:13.909 04:23:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:13.909 04:23:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:13.909 04:23:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:14.168 [2024-07-23 04:23:22.842149] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:28:14.168 04:23:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:14.168 "name": "raid_bdev1", 00:28:14.168 "uuid": "0dd65018-780f-4c35-a5db-df6a9112dd06", 00:28:14.168 "strip_size_kb": 0, 00:28:14.168 "state": "online", 00:28:14.168 "raid_level": "raid1", 00:28:14.168 "superblock": false, 00:28:14.168 "num_base_bdevs": 2, 00:28:14.168 "num_base_bdevs_discovered": 2, 00:28:14.168 "num_base_bdevs_operational": 2, 
00:28:14.168 "process": { 00:28:14.168 "type": "rebuild", 00:28:14.168 "target": "spare", 00:28:14.168 "progress": { 00:28:14.168 "blocks": 14336, 00:28:14.168 "percent": 21 00:28:14.168 } 00:28:14.168 }, 00:28:14.168 "base_bdevs_list": [ 00:28:14.168 { 00:28:14.168 "name": "spare", 00:28:14.168 "uuid": "41933a2c-362f-5405-a703-489623f3bc1d", 00:28:14.168 "is_configured": true, 00:28:14.168 "data_offset": 0, 00:28:14.168 "data_size": 65536 00:28:14.168 }, 00:28:14.168 { 00:28:14.168 "name": "BaseBdev2", 00:28:14.168 "uuid": "9dcd549d-1a15-59c7-b7e8-60dc05d5b4ed", 00:28:14.168 "is_configured": true, 00:28:14.168 "data_offset": 0, 00:28:14.168 "data_size": 65536 00:28:14.168 } 00:28:14.168 ] 00:28:14.168 }' 00:28:14.169 04:23:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:14.169 04:23:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:14.169 04:23:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:14.428 04:23:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:14.428 04:23:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:14.428 [2024-07-23 04:23:23.070632] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:28:14.428 [2024-07-23 04:23:23.070823] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:28:14.428 [2024-07-23 04:23:23.126455] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:14.428 [2024-07-23 04:23:23.189560] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:28:14.688 [2024-07-23 
04:23:23.298623] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:14.688 [2024-07-23 04:23:23.308024] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:14.688 [2024-07-23 04:23:23.308062] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:14.688 [2024-07-23 04:23:23.308077] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:14.688 [2024-07-23 04:23:23.364459] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000108b0 00:28:14.688 04:23:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:14.688 04:23:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:14.688 04:23:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:14.688 04:23:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:14.688 04:23:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:14.688 04:23:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:14.688 04:23:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:14.688 04:23:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:14.688 04:23:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:14.688 04:23:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:14.688 04:23:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:14.688 04:23:23 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:14.947 04:23:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:14.947 "name": "raid_bdev1", 00:28:14.947 "uuid": "0dd65018-780f-4c35-a5db-df6a9112dd06", 00:28:14.947 "strip_size_kb": 0, 00:28:14.947 "state": "online", 00:28:14.947 "raid_level": "raid1", 00:28:14.947 "superblock": false, 00:28:14.947 "num_base_bdevs": 2, 00:28:14.947 "num_base_bdevs_discovered": 1, 00:28:14.947 "num_base_bdevs_operational": 1, 00:28:14.947 "base_bdevs_list": [ 00:28:14.947 { 00:28:14.947 "name": null, 00:28:14.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:14.947 "is_configured": false, 00:28:14.947 "data_offset": 0, 00:28:14.947 "data_size": 65536 00:28:14.947 }, 00:28:14.947 { 00:28:14.947 "name": "BaseBdev2", 00:28:14.947 "uuid": "9dcd549d-1a15-59c7-b7e8-60dc05d5b4ed", 00:28:14.947 "is_configured": true, 00:28:14.947 "data_offset": 0, 00:28:14.947 "data_size": 65536 00:28:14.947 } 00:28:14.947 ] 00:28:14.947 }' 00:28:14.947 04:23:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:14.947 04:23:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:15.515 04:23:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:15.515 04:23:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:15.515 04:23:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:15.515 04:23:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:15.515 04:23:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:15.515 04:23:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:15.515 
04:23:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:15.774 04:23:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:15.774 "name": "raid_bdev1", 00:28:15.774 "uuid": "0dd65018-780f-4c35-a5db-df6a9112dd06", 00:28:15.774 "strip_size_kb": 0, 00:28:15.774 "state": "online", 00:28:15.774 "raid_level": "raid1", 00:28:15.774 "superblock": false, 00:28:15.774 "num_base_bdevs": 2, 00:28:15.774 "num_base_bdevs_discovered": 1, 00:28:15.774 "num_base_bdevs_operational": 1, 00:28:15.774 "base_bdevs_list": [ 00:28:15.774 { 00:28:15.774 "name": null, 00:28:15.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:15.774 "is_configured": false, 00:28:15.774 "data_offset": 0, 00:28:15.774 "data_size": 65536 00:28:15.774 }, 00:28:15.774 { 00:28:15.774 "name": "BaseBdev2", 00:28:15.774 "uuid": "9dcd549d-1a15-59c7-b7e8-60dc05d5b4ed", 00:28:15.774 "is_configured": true, 00:28:15.774 "data_offset": 0, 00:28:15.774 "data_size": 65536 00:28:15.774 } 00:28:15.774 ] 00:28:15.774 }' 00:28:15.774 04:23:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:15.774 04:23:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:15.774 04:23:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:15.775 04:23:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:15.775 04:23:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:16.034 [2024-07-23 04:23:24.690043] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:16.034 04:23:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:16.034 [2024-07-23 04:23:24.757395] 
bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010a50 00:28:16.034 [2024-07-23 04:23:24.759717] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:16.294 [2024-07-23 04:23:24.885578] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:28:16.294 [2024-07-23 04:23:24.886017] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:28:16.553 [2024-07-23 04:23:25.113294] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:16.553 [2024-07-23 04:23:25.113546] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:16.812 [2024-07-23 04:23:25.451008] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:28:16.812 [2024-07-23 04:23:25.451464] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:28:17.071 [2024-07-23 04:23:25.670380] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:28:17.071 04:23:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:17.071 04:23:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:17.071 04:23:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:17.071 04:23:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:17.071 04:23:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:17.071 04:23:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:17.071 04:23:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:17.331 04:23:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:17.331 "name": "raid_bdev1", 00:28:17.331 "uuid": "0dd65018-780f-4c35-a5db-df6a9112dd06", 00:28:17.331 "strip_size_kb": 0, 00:28:17.331 "state": "online", 00:28:17.331 "raid_level": "raid1", 00:28:17.331 "superblock": false, 00:28:17.331 "num_base_bdevs": 2, 00:28:17.331 "num_base_bdevs_discovered": 2, 00:28:17.331 "num_base_bdevs_operational": 2, 00:28:17.331 "process": { 00:28:17.331 "type": "rebuild", 00:28:17.331 "target": "spare", 00:28:17.331 "progress": { 00:28:17.331 "blocks": 12288, 00:28:17.331 "percent": 18 00:28:17.331 } 00:28:17.331 }, 00:28:17.331 "base_bdevs_list": [ 00:28:17.331 { 00:28:17.331 "name": "spare", 00:28:17.331 "uuid": "41933a2c-362f-5405-a703-489623f3bc1d", 00:28:17.331 "is_configured": true, 00:28:17.331 "data_offset": 0, 00:28:17.331 "data_size": 65536 00:28:17.331 }, 00:28:17.331 { 00:28:17.331 "name": "BaseBdev2", 00:28:17.331 "uuid": "9dcd549d-1a15-59c7-b7e8-60dc05d5b4ed", 00:28:17.331 "is_configured": true, 00:28:17.331 "data_offset": 0, 00:28:17.331 "data_size": 65536 00:28:17.331 } 00:28:17.331 ] 00:28:17.331 }' 00:28:17.331 04:23:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:17.331 04:23:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:17.331 04:23:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:17.331 04:23:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:17.331 04:23:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:28:17.331 04:23:26 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:28:17.331 04:23:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:28:17.331 04:23:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:28:17.331 04:23:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@705 -- # local timeout=892 00:28:17.331 04:23:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:17.331 04:23:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:17.331 04:23:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:17.331 04:23:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:17.331 04:23:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:17.331 04:23:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:17.331 04:23:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:17.331 04:23:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:17.591 04:23:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:17.591 "name": "raid_bdev1", 00:28:17.591 "uuid": "0dd65018-780f-4c35-a5db-df6a9112dd06", 00:28:17.591 "strip_size_kb": 0, 00:28:17.591 "state": "online", 00:28:17.591 "raid_level": "raid1", 00:28:17.591 "superblock": false, 00:28:17.591 "num_base_bdevs": 2, 00:28:17.591 "num_base_bdevs_discovered": 2, 00:28:17.591 "num_base_bdevs_operational": 2, 00:28:17.591 "process": { 00:28:17.591 "type": "rebuild", 00:28:17.591 "target": "spare", 00:28:17.591 "progress": { 00:28:17.591 "blocks": 18432, 
00:28:17.591 "percent": 28 00:28:17.591 } 00:28:17.591 }, 00:28:17.591 "base_bdevs_list": [ 00:28:17.591 { 00:28:17.591 "name": "spare", 00:28:17.591 "uuid": "41933a2c-362f-5405-a703-489623f3bc1d", 00:28:17.591 "is_configured": true, 00:28:17.591 "data_offset": 0, 00:28:17.591 "data_size": 65536 00:28:17.591 }, 00:28:17.591 { 00:28:17.591 "name": "BaseBdev2", 00:28:17.591 "uuid": "9dcd549d-1a15-59c7-b7e8-60dc05d5b4ed", 00:28:17.591 "is_configured": true, 00:28:17.591 "data_offset": 0, 00:28:17.591 "data_size": 65536 00:28:17.591 } 00:28:17.591 ] 00:28:17.591 }' 00:28:17.591 04:23:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:17.591 04:23:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:17.591 04:23:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:17.850 04:23:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:17.850 04:23:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:28:17.850 [2024-07-23 04:23:26.439672] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:28:18.109 [2024-07-23 04:23:26.659697] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:28:18.109 [2024-07-23 04:23:26.786564] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:28:18.678 04:23:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:18.678 04:23:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:18.678 04:23:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:18.678 04:23:27 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:18.678 04:23:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:18.678 04:23:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:18.678 04:23:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:18.678 04:23:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:18.937 [2024-07-23 04:23:27.461857] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:28:18.937 04:23:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:18.937 "name": "raid_bdev1", 00:28:18.937 "uuid": "0dd65018-780f-4c35-a5db-df6a9112dd06", 00:28:18.937 "strip_size_kb": 0, 00:28:18.937 "state": "online", 00:28:18.937 "raid_level": "raid1", 00:28:18.937 "superblock": false, 00:28:18.937 "num_base_bdevs": 2, 00:28:18.937 "num_base_bdevs_discovered": 2, 00:28:18.937 "num_base_bdevs_operational": 2, 00:28:18.937 "process": { 00:28:18.937 "type": "rebuild", 00:28:18.937 "target": "spare", 00:28:18.937 "progress": { 00:28:18.937 "blocks": 38912, 00:28:18.937 "percent": 59 00:28:18.937 } 00:28:18.937 }, 00:28:18.937 "base_bdevs_list": [ 00:28:18.937 { 00:28:18.937 "name": "spare", 00:28:18.937 "uuid": "41933a2c-362f-5405-a703-489623f3bc1d", 00:28:18.937 "is_configured": true, 00:28:18.937 "data_offset": 0, 00:28:18.937 "data_size": 65536 00:28:18.937 }, 00:28:18.937 { 00:28:18.937 "name": "BaseBdev2", 00:28:18.937 "uuid": "9dcd549d-1a15-59c7-b7e8-60dc05d5b4ed", 00:28:18.937 "is_configured": true, 00:28:18.937 "data_offset": 0, 00:28:18.937 "data_size": 65536 00:28:18.937 } 00:28:18.937 ] 00:28:18.937 }' 00:28:18.937 04:23:27 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:18.937 [2024-07-23 04:23:27.681045] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:28:18.937 04:23:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:18.937 04:23:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:19.196 04:23:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:19.197 04:23:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:28:19.765 [2024-07-23 04:23:28.466481] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:28:20.025 04:23:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:20.025 04:23:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:20.025 04:23:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:20.025 04:23:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:20.025 04:23:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:20.025 04:23:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:20.025 04:23:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:20.025 04:23:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:20.025 [2024-07-23 04:23:28.805292] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 
offset_end: 61440 00:28:20.025 [2024-07-23 04:23:28.805520] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:28:20.285 04:23:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:20.285 "name": "raid_bdev1", 00:28:20.285 "uuid": "0dd65018-780f-4c35-a5db-df6a9112dd06", 00:28:20.285 "strip_size_kb": 0, 00:28:20.285 "state": "online", 00:28:20.285 "raid_level": "raid1", 00:28:20.285 "superblock": false, 00:28:20.285 "num_base_bdevs": 2, 00:28:20.285 "num_base_bdevs_discovered": 2, 00:28:20.285 "num_base_bdevs_operational": 2, 00:28:20.285 "process": { 00:28:20.285 "type": "rebuild", 00:28:20.285 "target": "spare", 00:28:20.285 "progress": { 00:28:20.285 "blocks": 59392, 00:28:20.285 "percent": 90 00:28:20.285 } 00:28:20.285 }, 00:28:20.285 "base_bdevs_list": [ 00:28:20.285 { 00:28:20.285 "name": "spare", 00:28:20.285 "uuid": "41933a2c-362f-5405-a703-489623f3bc1d", 00:28:20.285 "is_configured": true, 00:28:20.285 "data_offset": 0, 00:28:20.285 "data_size": 65536 00:28:20.285 }, 00:28:20.285 { 00:28:20.285 "name": "BaseBdev2", 00:28:20.285 "uuid": "9dcd549d-1a15-59c7-b7e8-60dc05d5b4ed", 00:28:20.285 "is_configured": true, 00:28:20.285 "data_offset": 0, 00:28:20.285 "data_size": 65536 00:28:20.285 } 00:28:20.285 ] 00:28:20.285 }' 00:28:20.285 04:23:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:20.285 04:23:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:20.285 04:23:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:20.285 04:23:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:20.285 04:23:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:28:20.544 [2024-07-23 04:23:29.237468] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: 
process completed on raid_bdev1 00:28:20.804 [2024-07-23 04:23:29.345437] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:28:20.804 [2024-07-23 04:23:29.347226] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:21.373 04:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:21.373 04:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:21.373 04:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:21.373 04:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:21.373 04:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:21.373 04:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:21.373 04:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:21.373 04:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:21.633 04:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:21.633 "name": "raid_bdev1", 00:28:21.633 "uuid": "0dd65018-780f-4c35-a5db-df6a9112dd06", 00:28:21.633 "strip_size_kb": 0, 00:28:21.633 "state": "online", 00:28:21.633 "raid_level": "raid1", 00:28:21.633 "superblock": false, 00:28:21.633 "num_base_bdevs": 2, 00:28:21.633 "num_base_bdevs_discovered": 2, 00:28:21.633 "num_base_bdevs_operational": 2, 00:28:21.633 "base_bdevs_list": [ 00:28:21.633 { 00:28:21.633 "name": "spare", 00:28:21.633 "uuid": "41933a2c-362f-5405-a703-489623f3bc1d", 00:28:21.633 "is_configured": true, 00:28:21.633 "data_offset": 0, 00:28:21.633 "data_size": 65536 00:28:21.633 }, 
00:28:21.633 { 00:28:21.633 "name": "BaseBdev2", 00:28:21.633 "uuid": "9dcd549d-1a15-59c7-b7e8-60dc05d5b4ed", 00:28:21.633 "is_configured": true, 00:28:21.633 "data_offset": 0, 00:28:21.633 "data_size": 65536 00:28:21.633 } 00:28:21.633 ] 00:28:21.633 }' 00:28:21.633 04:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:21.633 04:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:28:21.633 04:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:21.633 04:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:28:21.633 04:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # break 00:28:21.633 04:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:21.633 04:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:21.633 04:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:21.633 04:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:21.633 04:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:21.633 04:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:21.633 04:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:21.891 04:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:21.891 "name": "raid_bdev1", 00:28:21.891 "uuid": "0dd65018-780f-4c35-a5db-df6a9112dd06", 00:28:21.891 "strip_size_kb": 0, 00:28:21.891 "state": "online", 00:28:21.891 "raid_level": "raid1", 00:28:21.891 
"superblock": false, 00:28:21.891 "num_base_bdevs": 2, 00:28:21.891 "num_base_bdevs_discovered": 2, 00:28:21.891 "num_base_bdevs_operational": 2, 00:28:21.891 "base_bdevs_list": [ 00:28:21.891 { 00:28:21.891 "name": "spare", 00:28:21.891 "uuid": "41933a2c-362f-5405-a703-489623f3bc1d", 00:28:21.891 "is_configured": true, 00:28:21.891 "data_offset": 0, 00:28:21.891 "data_size": 65536 00:28:21.891 }, 00:28:21.891 { 00:28:21.891 "name": "BaseBdev2", 00:28:21.891 "uuid": "9dcd549d-1a15-59c7-b7e8-60dc05d5b4ed", 00:28:21.891 "is_configured": true, 00:28:21.891 "data_offset": 0, 00:28:21.891 "data_size": 65536 00:28:21.891 } 00:28:21.891 ] 00:28:21.891 }' 00:28:21.891 04:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:21.891 04:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:21.891 04:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:22.150 04:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:22.150 04:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:22.150 04:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:22.150 04:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:22.150 04:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:22.150 04:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:22.150 04:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:22.150 04:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:22.150 04:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:28:22.150 04:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:22.150 04:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:22.150 04:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:22.150 04:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:22.150 04:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:22.150 "name": "raid_bdev1", 00:28:22.150 "uuid": "0dd65018-780f-4c35-a5db-df6a9112dd06", 00:28:22.150 "strip_size_kb": 0, 00:28:22.150 "state": "online", 00:28:22.150 "raid_level": "raid1", 00:28:22.150 "superblock": false, 00:28:22.150 "num_base_bdevs": 2, 00:28:22.150 "num_base_bdevs_discovered": 2, 00:28:22.150 "num_base_bdevs_operational": 2, 00:28:22.150 "base_bdevs_list": [ 00:28:22.150 { 00:28:22.150 "name": "spare", 00:28:22.150 "uuid": "41933a2c-362f-5405-a703-489623f3bc1d", 00:28:22.150 "is_configured": true, 00:28:22.150 "data_offset": 0, 00:28:22.150 "data_size": 65536 00:28:22.150 }, 00:28:22.150 { 00:28:22.150 "name": "BaseBdev2", 00:28:22.150 "uuid": "9dcd549d-1a15-59c7-b7e8-60dc05d5b4ed", 00:28:22.150 "is_configured": true, 00:28:22.150 "data_offset": 0, 00:28:22.150 "data_size": 65536 00:28:22.150 } 00:28:22.150 ] 00:28:22.150 }' 00:28:22.150 04:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:22.150 04:23:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:22.717 04:23:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:22.975 [2024-07-23 04:23:31.707509] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: 
raid_bdev1 00:28:22.975 [2024-07-23 04:23:31.707549] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:23.233 00:28:23.233 Latency(us) 00:28:23.233 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:23.233 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:28:23.233 raid_bdev1 : 11.35 92.85 278.54 0.00 0.00 15051.69 332.60 110729.63 00:28:23.233 =================================================================================================================== 00:28:23.233 Total : 92.85 278.54 0.00 0.00 15051.69 332.60 110729.63 00:28:23.233 [2024-07-23 04:23:31.790615] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:23.234 [2024-07-23 04:23:31.790685] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:23.234 [2024-07-23 04:23:31.790780] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:23.234 [2024-07-23 04:23:31.790796] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041a80 name raid_bdev1, state offline 00:28:23.234 0 00:28:23.234 04:23:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:23.234 04:23:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # jq length 00:28:23.492 04:23:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:28:23.492 04:23:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:28:23.492 04:23:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:28:23.492 04:23:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:28:23.492 04:23:32 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:23.492 04:23:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:28:23.492 04:23:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:23.492 04:23:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:28:23.492 04:23:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:23.492 04:23:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:28:23.492 04:23:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:23.492 04:23:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:23.492 04:23:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:28:23.492 /dev/nbd0 00:28:23.751 04:23:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:23.751 04:23:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:23.751 04:23:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:28:23.751 04:23:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:28:23.751 04:23:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:28:23.751 04:23:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:28:23.751 04:23:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:28:23.751 04:23:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:28:23.751 04:23:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:28:23.751 04:23:32 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@882 -- # (( i <= 20 )) 00:28:23.751 04:23:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:23.751 1+0 records in 00:28:23.751 1+0 records out 00:28:23.751 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296694 s, 13.8 MB/s 00:28:23.751 04:23:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:23.751 04:23:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:28:23.751 04:23:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:23.751 04:23:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:28:23.751 04:23:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:28:23.751 04:23:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:23.751 04:23:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:23.751 04:23:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:28:23.751 04:23:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev2 ']' 00:28:23.751 04:23:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:28:23.751 04:23:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:23.751 04:23:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:28:23.751 04:23:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:23.751 04:23:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd1') 00:28:23.751 04:23:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:23.751 04:23:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:28:23.751 04:23:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:23.751 04:23:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:23.751 04:23:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:28:23.751 /dev/nbd1 00:28:24.010 04:23:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:24.010 04:23:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:24.010 04:23:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:28:24.010 04:23:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:28:24.010 04:23:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:28:24.010 04:23:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:28:24.010 04:23:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:28:24.010 04:23:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:28:24.010 04:23:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:28:24.010 04:23:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:28:24.010 04:23:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:24.010 1+0 records in 00:28:24.010 1+0 records out 00:28:24.010 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000309188 s, 13.2 MB/s 00:28:24.010 04:23:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:24.010 04:23:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:28:24.010 04:23:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:24.010 04:23:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:28:24.010 04:23:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:28:24.010 04:23:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:24.010 04:23:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:24.010 04:23:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:28:24.010 04:23:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:28:24.010 04:23:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:24.010 04:23:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:28:24.010 04:23:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:24.010 04:23:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:28:24.010 04:23:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:24.010 04:23:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:28:24.269 04:23:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:24.269 04:23:33 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:24.269 04:23:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:24.269 04:23:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:24.269 04:23:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:24.269 04:23:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:24.269 04:23:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:28:24.269 04:23:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:28:24.269 04:23:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:28:24.269 04:23:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:24.269 04:23:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:24.269 04:23:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:24.269 04:23:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:28:24.269 04:23:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:24.269 04:23:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:28:24.528 04:23:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:24.528 04:23:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:24.528 04:23:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:24.528 04:23:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:24.528 04:23:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:28:24.528 04:23:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:24.785 04:23:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:28:24.785 04:23:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:28:24.785 04:23:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:28:24.785 04:23:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@782 -- # killprocess 2770224 00:28:24.785 04:23:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@948 -- # '[' -z 2770224 ']' 00:28:24.785 04:23:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # kill -0 2770224 00:28:24.785 04:23:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # uname 00:28:24.785 04:23:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:24.785 04:23:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2770224 00:28:24.785 04:23:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:24.785 04:23:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:24.785 04:23:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2770224' 00:28:24.785 killing process with pid 2770224 00:28:24.785 04:23:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@967 -- # kill 2770224 00:28:24.785 Received shutdown signal, test time was about 12.957482 seconds 00:28:24.785 00:28:24.785 Latency(us) 00:28:24.785 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:24.785 =================================================================================================================== 00:28:24.785 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:24.785 [2024-07-23 04:23:33.369092] 
bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:24.785 04:23:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # wait 2770224 00:28:25.043 [2024-07-23 04:23:33.595535] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:26.948 04:23:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # return 0 00:28:26.948 00:28:26.948 real 0m19.462s 00:28:26.948 user 0m27.973s 00:28:26.948 sys 0m2.851s 00:28:26.948 04:23:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:26.948 04:23:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:26.948 ************************************ 00:28:26.948 END TEST raid_rebuild_test_io 00:28:26.948 ************************************ 00:28:26.948 04:23:35 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:28:26.948 04:23:35 bdev_raid -- bdev/bdev_raid.sh@880 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:28:26.948 04:23:35 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:28:26.948 04:23:35 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:26.948 04:23:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:26.948 ************************************ 00:28:26.948 START TEST raid_rebuild_test_sb_io 00:28:26.948 ************************************ 00:28:26.948 04:23:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true true true 00:28:26.948 04:23:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:28:26.948 04:23:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:28:26.948 04:23:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:28:26.948 04:23:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 
00:28:26.948 04:23:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:28:26.948 04:23:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:28:26.948 04:23:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:26.948 04:23:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:28:26.948 04:23:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:26.948 04:23:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:26.948 04:23:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:28:26.948 04:23:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:26.948 04:23:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:26.948 04:23:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:28:26.948 04:23:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:28:26.948 04:23:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:28:26.948 04:23:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:28:26.948 04:23:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:28:26.948 04:23:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:28:26.948 04:23:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:28:26.948 04:23:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:28:26.948 04:23:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:28:26.948 04:23:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # '[' true = 
true ']' 00:28:26.948 04:23:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:28:26.948 04:23:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # raid_pid=2773769 00:28:26.948 04:23:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:26.948 04:23:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 2773769 /var/tmp/spdk-raid.sock 00:28:26.948 04:23:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@829 -- # '[' -z 2773769 ']' 00:28:26.948 04:23:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:26.948 04:23:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:26.948 04:23:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:26.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:26.948 04:23:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:26.948 04:23:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:26.948 [2024-07-23 04:23:35.593100] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:28:26.948 [2024-07-23 04:23:35.593229] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2773769 ] 00:28:26.948 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:28:26.948 Zero copy mechanism will not be used. 00:28:26.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:26.948 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:26.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:26.948 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:26.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:26.948 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:26.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:26.948 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:26.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:26.948 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:26.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:26.948 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:26.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:26.948 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:26.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:26.948 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:26.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:26.948 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:26.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:26.948 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:26.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:26.948 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:26.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:26.948 EAL: Requested device 0000:3d:02.3 cannot be used 00:28:26.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:26.948 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:26.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:26.948 EAL: Requested device 
0000:3d:02.5 cannot be used 00:28:26.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:26.948 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:26.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:26.948 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:27.208 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:27.208 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:27.208 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:27.208 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:27.208 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:27.208 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:27.208 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:27.208 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:27.208 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:27.208 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:27.208 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:27.208 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:27.208 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:27.208 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:27.208 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:27.208 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:27.208 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:27.208 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:27.208 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:27.208 EAL: Requested device 0000:3f:02.1 cannot be used 00:28:27.208 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:27.208 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:27.208 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:27.208 EAL: Requested device 0000:3f:02.3 cannot be 
used 00:28:27.208 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:27.208 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:27.208 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:27.208 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:27.208 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:27.208 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:27.208 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:27.208 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:27.208 [2024-07-23 04:23:35.817878] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:27.467 [2024-07-23 04:23:36.078967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:27.726 [2024-07-23 04:23:36.416974] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:27.726 [2024-07-23 04:23:36.417014] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:27.985 04:23:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:27.985 04:23:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # return 0 00:28:27.985 04:23:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:27.985 04:23:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:28.243 BaseBdev1_malloc 00:28:28.243 04:23:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:28.502 [2024-07-23 04:23:37.088360] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:28.502 [2024-07-23 04:23:37.088428] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:28.502 [2024-07-23 04:23:37.088460] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:28:28.502 [2024-07-23 04:23:37.088480] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:28.502 [2024-07-23 04:23:37.091233] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:28.502 [2024-07-23 04:23:37.091272] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:28.502 BaseBdev1 00:28:28.502 04:23:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:28.502 04:23:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:28.761 BaseBdev2_malloc 00:28:28.761 04:23:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:28:29.019 [2024-07-23 04:23:37.600512] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:28:29.019 [2024-07-23 04:23:37.600575] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:29.019 [2024-07-23 04:23:37.600601] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:28:29.019 [2024-07-23 04:23:37.600622] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:29.019 [2024-07-23 04:23:37.603337] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:29.019 [2024-07-23 04:23:37.603373] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:29.019 BaseBdev2 00:28:29.019 04:23:37 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:28:29.278 spare_malloc 00:28:29.278 04:23:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:29.536 spare_delay 00:28:29.536 04:23:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:29.795 [2024-07-23 04:23:38.340162] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:29.795 [2024-07-23 04:23:38.340215] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:29.795 [2024-07-23 04:23:38.340241] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480 00:28:29.795 [2024-07-23 04:23:38.340259] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:29.795 [2024-07-23 04:23:38.342979] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:29.795 [2024-07-23 04:23:38.343015] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:29.795 spare 00:28:29.795 04:23:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:28:29.795 [2024-07-23 04:23:38.568802] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:29.795 [2024-07-23 04:23:38.571103] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:29.795 [2024-07-23 
04:23:38.571328] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041a80 00:28:29.795 [2024-07-23 04:23:38.571353] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:29.795 [2024-07-23 04:23:38.571709] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:28:29.795 [2024-07-23 04:23:38.571956] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041a80 00:28:29.795 [2024-07-23 04:23:38.571972] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041a80 00:28:29.795 [2024-07-23 04:23:38.572177] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:30.053 04:23:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:30.053 04:23:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:30.053 04:23:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:30.053 04:23:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:30.053 04:23:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:30.053 04:23:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:30.053 04:23:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:30.053 04:23:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:30.053 04:23:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:30.053 04:23:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:30.053 04:23:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:30.053 04:23:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:30.053 04:23:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:30.053 "name": "raid_bdev1", 00:28:30.053 "uuid": "f1178292-25c5-499e-ad5c-6f62ead7ab4f", 00:28:30.053 "strip_size_kb": 0, 00:28:30.053 "state": "online", 00:28:30.053 "raid_level": "raid1", 00:28:30.053 "superblock": true, 00:28:30.053 "num_base_bdevs": 2, 00:28:30.053 "num_base_bdevs_discovered": 2, 00:28:30.053 "num_base_bdevs_operational": 2, 00:28:30.053 "base_bdevs_list": [ 00:28:30.053 { 00:28:30.053 "name": "BaseBdev1", 00:28:30.053 "uuid": "06d2cf45-81a4-5c50-9496-d99326836ddd", 00:28:30.053 "is_configured": true, 00:28:30.053 "data_offset": 2048, 00:28:30.053 "data_size": 63488 00:28:30.053 }, 00:28:30.053 { 00:28:30.053 "name": "BaseBdev2", 00:28:30.053 "uuid": "d5f012ae-559f-5cba-91d3-e0e7bad7e2d6", 00:28:30.053 "is_configured": true, 00:28:30.053 "data_offset": 2048, 00:28:30.053 "data_size": 63488 00:28:30.053 } 00:28:30.053 ] 00:28:30.053 }' 00:28:30.053 04:23:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:30.053 04:23:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:30.618 04:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:30.618 04:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:28:30.876 [2024-07-23 04:23:39.595921] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:30.876 04:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:28:30.876 04:23:39 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:30.876 04:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:31.134 04:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:28:31.134 04:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:28:31.134 04:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:28:31.134 04:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:28:31.392 [2024-07-23 04:23:39.956994] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000108b0 00:28:31.392 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:31.392 Zero copy mechanism will not be used. 00:28:31.392 Running I/O for 60 seconds... 
00:28:31.392 [2024-07-23 04:23:40.068799] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:31.392 [2024-07-23 04:23:40.084439] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000108b0 00:28:31.392 04:23:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:31.392 04:23:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:31.392 04:23:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:31.392 04:23:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:31.392 04:23:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:31.392 04:23:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:31.392 04:23:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:31.392 04:23:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:31.392 04:23:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:31.392 04:23:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:31.392 04:23:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:31.392 04:23:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:31.650 04:23:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:31.650 "name": "raid_bdev1", 00:28:31.650 "uuid": "f1178292-25c5-499e-ad5c-6f62ead7ab4f", 00:28:31.650 "strip_size_kb": 0, 00:28:31.650 "state": "online", 00:28:31.650 "raid_level": 
"raid1", 00:28:31.650 "superblock": true, 00:28:31.650 "num_base_bdevs": 2, 00:28:31.650 "num_base_bdevs_discovered": 1, 00:28:31.650 "num_base_bdevs_operational": 1, 00:28:31.651 "base_bdevs_list": [ 00:28:31.651 { 00:28:31.651 "name": null, 00:28:31.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:31.651 "is_configured": false, 00:28:31.651 "data_offset": 2048, 00:28:31.651 "data_size": 63488 00:28:31.651 }, 00:28:31.651 { 00:28:31.651 "name": "BaseBdev2", 00:28:31.651 "uuid": "d5f012ae-559f-5cba-91d3-e0e7bad7e2d6", 00:28:31.651 "is_configured": true, 00:28:31.651 "data_offset": 2048, 00:28:31.651 "data_size": 63488 00:28:31.651 } 00:28:31.651 ] 00:28:31.651 }' 00:28:31.651 04:23:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:31.651 04:23:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:32.217 04:23:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:32.476 [2024-07-23 04:23:41.161983] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:32.476 04:23:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:28:32.476 [2024-07-23 04:23:41.241039] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010980 00:28:32.476 [2024-07-23 04:23:41.243448] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:32.734 [2024-07-23 04:23:41.368925] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:28:32.734 [2024-07-23 04:23:41.369389] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:28:32.992 [2024-07-23 04:23:41.605317] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:32.992 [2024-07-23 04:23:41.605503] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:33.257 [2024-07-23 04:23:41.918188] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:28:33.257 [2024-07-23 04:23:41.918573] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:28:33.545 [2024-07-23 04:23:42.137904] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:28:33.545 [2024-07-23 04:23:42.138078] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:28:33.545 04:23:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:33.545 04:23:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:33.545 04:23:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:33.545 04:23:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:33.545 04:23:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:33.545 04:23:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:33.545 04:23:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:33.804 04:23:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:33.804 "name": "raid_bdev1", 00:28:33.804 "uuid": "f1178292-25c5-499e-ad5c-6f62ead7ab4f", 00:28:33.804 
"strip_size_kb": 0, 00:28:33.804 "state": "online", 00:28:33.804 "raid_level": "raid1", 00:28:33.804 "superblock": true, 00:28:33.804 "num_base_bdevs": 2, 00:28:33.804 "num_base_bdevs_discovered": 2, 00:28:33.804 "num_base_bdevs_operational": 2, 00:28:33.804 "process": { 00:28:33.804 "type": "rebuild", 00:28:33.804 "target": "spare", 00:28:33.804 "progress": { 00:28:33.804 "blocks": 12288, 00:28:33.804 "percent": 19 00:28:33.804 } 00:28:33.804 }, 00:28:33.804 "base_bdevs_list": [ 00:28:33.804 { 00:28:33.804 "name": "spare", 00:28:33.804 "uuid": "478dc2c6-3bb2-5fad-a1a4-39366ce0ba78", 00:28:33.804 "is_configured": true, 00:28:33.804 "data_offset": 2048, 00:28:33.804 "data_size": 63488 00:28:33.804 }, 00:28:33.804 { 00:28:33.804 "name": "BaseBdev2", 00:28:33.804 "uuid": "d5f012ae-559f-5cba-91d3-e0e7bad7e2d6", 00:28:33.804 "is_configured": true, 00:28:33.804 "data_offset": 2048, 00:28:33.804 "data_size": 63488 00:28:33.804 } 00:28:33.804 ] 00:28:33.804 }' 00:28:33.804 04:23:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:33.804 [2024-07-23 04:23:42.486668] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:28:33.804 04:23:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:33.804 04:23:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:33.804 04:23:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:33.804 04:23:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:34.371 [2024-07-23 04:23:42.931341] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:28:34.371 [2024-07-23 
04:23:43.020866] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:34.371 [2024-07-23 04:23:43.058389] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:28:34.629 [2024-07-23 04:23:43.175688] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:34.629 [2024-07-23 04:23:43.185482] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:34.629 [2024-07-23 04:23:43.185518] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:34.629 [2024-07-23 04:23:43.185537] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:34.629 [2024-07-23 04:23:43.234081] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000108b0 00:28:34.629 04:23:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:34.629 04:23:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:34.629 04:23:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:34.629 04:23:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:34.629 04:23:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:34.629 04:23:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:34.629 04:23:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:34.629 04:23:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:34.629 04:23:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:34.629 04:23:43 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:28:34.629 04:23:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:34.630 04:23:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:34.888 04:23:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:34.888 "name": "raid_bdev1", 00:28:34.888 "uuid": "f1178292-25c5-499e-ad5c-6f62ead7ab4f", 00:28:34.888 "strip_size_kb": 0, 00:28:34.888 "state": "online", 00:28:34.888 "raid_level": "raid1", 00:28:34.888 "superblock": true, 00:28:34.888 "num_base_bdevs": 2, 00:28:34.888 "num_base_bdevs_discovered": 1, 00:28:34.888 "num_base_bdevs_operational": 1, 00:28:34.888 "base_bdevs_list": [ 00:28:34.888 { 00:28:34.888 "name": null, 00:28:34.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:34.888 "is_configured": false, 00:28:34.888 "data_offset": 2048, 00:28:34.888 "data_size": 63488 00:28:34.888 }, 00:28:34.888 { 00:28:34.888 "name": "BaseBdev2", 00:28:34.888 "uuid": "d5f012ae-559f-5cba-91d3-e0e7bad7e2d6", 00:28:34.888 "is_configured": true, 00:28:34.888 "data_offset": 2048, 00:28:34.888 "data_size": 63488 00:28:34.888 } 00:28:34.888 ] 00:28:34.888 }' 00:28:34.888 04:23:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:34.888 04:23:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:35.454 04:23:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:35.454 04:23:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:35.454 04:23:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:35.454 04:23:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local 
target=none 00:28:35.454 04:23:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:35.454 04:23:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:35.454 04:23:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:35.712 04:23:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:35.712 "name": "raid_bdev1", 00:28:35.712 "uuid": "f1178292-25c5-499e-ad5c-6f62ead7ab4f", 00:28:35.712 "strip_size_kb": 0, 00:28:35.712 "state": "online", 00:28:35.712 "raid_level": "raid1", 00:28:35.712 "superblock": true, 00:28:35.712 "num_base_bdevs": 2, 00:28:35.712 "num_base_bdevs_discovered": 1, 00:28:35.712 "num_base_bdevs_operational": 1, 00:28:35.712 "base_bdevs_list": [ 00:28:35.712 { 00:28:35.712 "name": null, 00:28:35.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:35.712 "is_configured": false, 00:28:35.712 "data_offset": 2048, 00:28:35.712 "data_size": 63488 00:28:35.712 }, 00:28:35.712 { 00:28:35.712 "name": "BaseBdev2", 00:28:35.712 "uuid": "d5f012ae-559f-5cba-91d3-e0e7bad7e2d6", 00:28:35.712 "is_configured": true, 00:28:35.712 "data_offset": 2048, 00:28:35.712 "data_size": 63488 00:28:35.712 } 00:28:35.712 ] 00:28:35.712 }' 00:28:35.712 04:23:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:35.712 04:23:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:35.712 04:23:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:35.712 04:23:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:35.712 04:23:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:35.971 [2024-07-23 04:23:44.620461] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:35.971 04:23:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:35.971 [2024-07-23 04:23:44.719021] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010a50 00:28:35.971 [2024-07-23 04:23:44.721426] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:36.229 [2024-07-23 04:23:44.890669] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:28:36.229 [2024-07-23 04:23:45.009650] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:36.229 [2024-07-23 04:23:45.009848] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:36.796 [2024-07-23 04:23:45.358442] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:28:37.055 [2024-07-23 04:23:45.585768] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:28:37.055 [2024-07-23 04:23:45.585999] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:28:37.055 04:23:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:37.055 04:23:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:37.055 04:23:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:37.055 04:23:45 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@184 -- # local target=spare 00:28:37.055 04:23:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:37.055 04:23:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:37.055 04:23:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:37.314 [2024-07-23 04:23:45.935612] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:28:37.314 04:23:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:37.314 "name": "raid_bdev1", 00:28:37.314 "uuid": "f1178292-25c5-499e-ad5c-6f62ead7ab4f", 00:28:37.314 "strip_size_kb": 0, 00:28:37.314 "state": "online", 00:28:37.314 "raid_level": "raid1", 00:28:37.314 "superblock": true, 00:28:37.314 "num_base_bdevs": 2, 00:28:37.314 "num_base_bdevs_discovered": 2, 00:28:37.314 "num_base_bdevs_operational": 2, 00:28:37.314 "process": { 00:28:37.314 "type": "rebuild", 00:28:37.314 "target": "spare", 00:28:37.314 "progress": { 00:28:37.314 "blocks": 12288, 00:28:37.314 "percent": 19 00:28:37.314 } 00:28:37.314 }, 00:28:37.314 "base_bdevs_list": [ 00:28:37.314 { 00:28:37.314 "name": "spare", 00:28:37.314 "uuid": "478dc2c6-3bb2-5fad-a1a4-39366ce0ba78", 00:28:37.314 "is_configured": true, 00:28:37.314 "data_offset": 2048, 00:28:37.314 "data_size": 63488 00:28:37.314 }, 00:28:37.314 { 00:28:37.314 "name": "BaseBdev2", 00:28:37.314 "uuid": "d5f012ae-559f-5cba-91d3-e0e7bad7e2d6", 00:28:37.314 "is_configured": true, 00:28:37.314 "data_offset": 2048, 00:28:37.314 "data_size": 63488 00:28:37.314 } 00:28:37.314 ] 00:28:37.314 }' 00:28:37.314 04:23:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:37.314 04:23:45 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:37.314 04:23:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:37.314 04:23:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:37.314 04:23:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:28:37.314 04:23:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:28:37.314 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:28:37.314 04:23:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:28:37.314 04:23:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:28:37.314 04:23:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:28:37.314 04:23:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@705 -- # local timeout=912 00:28:37.314 04:23:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:37.314 04:23:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:37.314 04:23:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:37.314 04:23:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:37.314 04:23:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:37.314 04:23:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:37.314 04:23:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:37.314 
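The `line 665: [: =: unary operator expected` message in the trace above is the classic bash failure mode for `'[' = false ']'`: an unquoted variable that expands to nothing disappears from the single-bracket test, leaving `[` with a dangling `=`. A minimal standalone reproduction (the variable name `flag` is illustrative, not taken from bdev_raid.sh):

```shell
#!/usr/bin/env bash
# Reproduce the "[: =: unary operator expected" failure seen at
# bdev_raid.sh line 665: an unquoted empty variable vanishes from a
# single-bracket test, leaving `[ = false ]` with a dangling operator.
flag=""

# Quoted: the empty string stays in place as the left operand, so the
# comparison is well-formed and simply evaluates to false.
if [ "$flag" = false ]; then
    echo "flag is false"
else
    echo "flag is empty or not false"
fi

# [[ ]] is a bash keyword, not an external command, so even the
# unquoted empty expansion parses cleanly -- another common fix.
if [[ $flag == false ]]; then
    echo "never reached"
fi
```

Note that in the log the test still proceeds: `[` returns nonzero on the parse error, so the `'[' = false ']'` branch is simply not taken and execution continues at `@690`.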
04:23:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:37.573 [2024-07-23 04:23:46.156407] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:28:37.573 04:23:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:37.573 "name": "raid_bdev1", 00:28:37.573 "uuid": "f1178292-25c5-499e-ad5c-6f62ead7ab4f", 00:28:37.573 "strip_size_kb": 0, 00:28:37.573 "state": "online", 00:28:37.573 "raid_level": "raid1", 00:28:37.573 "superblock": true, 00:28:37.573 "num_base_bdevs": 2, 00:28:37.573 "num_base_bdevs_discovered": 2, 00:28:37.573 "num_base_bdevs_operational": 2, 00:28:37.573 "process": { 00:28:37.573 "type": "rebuild", 00:28:37.573 "target": "spare", 00:28:37.573 "progress": { 00:28:37.573 "blocks": 16384, 00:28:37.573 "percent": 25 00:28:37.573 } 00:28:37.573 }, 00:28:37.573 "base_bdevs_list": [ 00:28:37.573 { 00:28:37.573 "name": "spare", 00:28:37.573 "uuid": "478dc2c6-3bb2-5fad-a1a4-39366ce0ba78", 00:28:37.573 "is_configured": true, 00:28:37.573 "data_offset": 2048, 00:28:37.573 "data_size": 63488 00:28:37.573 }, 00:28:37.573 { 00:28:37.573 "name": "BaseBdev2", 00:28:37.573 "uuid": "d5f012ae-559f-5cba-91d3-e0e7bad7e2d6", 00:28:37.573 "is_configured": true, 00:28:37.573 "data_offset": 2048, 00:28:37.573 "data_size": 63488 00:28:37.573 } 00:28:37.573 ] 00:28:37.573 }' 00:28:37.573 04:23:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:37.573 04:23:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:37.573 04:23:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:37.573 04:23:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:37.573 04:23:46 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@710 -- # sleep 1 00:28:37.832 [2024-07-23 04:23:46.538266] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:28:38.090 [2024-07-23 04:23:46.656760] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:28:38.348 [2024-07-23 04:23:46.900779] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:28:38.348 [2024-07-23 04:23:47.112992] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:28:38.348 [2024-07-23 04:23:47.113224] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:28:38.607 04:23:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:38.607 04:23:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:38.607 04:23:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:38.607 04:23:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:38.607 04:23:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:38.607 04:23:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:38.607 04:23:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:38.607 04:23:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:38.865 [2024-07-23 04:23:47.546996] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 
offset_begin: 30720 offset_end: 36864 00:28:38.865 04:23:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:38.865 "name": "raid_bdev1", 00:28:38.865 "uuid": "f1178292-25c5-499e-ad5c-6f62ead7ab4f", 00:28:38.865 "strip_size_kb": 0, 00:28:38.865 "state": "online", 00:28:38.865 "raid_level": "raid1", 00:28:38.865 "superblock": true, 00:28:38.865 "num_base_bdevs": 2, 00:28:38.865 "num_base_bdevs_discovered": 2, 00:28:38.865 "num_base_bdevs_operational": 2, 00:28:38.865 "process": { 00:28:38.865 "type": "rebuild", 00:28:38.865 "target": "spare", 00:28:38.865 "progress": { 00:28:38.865 "blocks": 34816, 00:28:38.865 "percent": 54 00:28:38.866 } 00:28:38.866 }, 00:28:38.866 "base_bdevs_list": [ 00:28:38.866 { 00:28:38.866 "name": "spare", 00:28:38.866 "uuid": "478dc2c6-3bb2-5fad-a1a4-39366ce0ba78", 00:28:38.866 "is_configured": true, 00:28:38.866 "data_offset": 2048, 00:28:38.866 "data_size": 63488 00:28:38.866 }, 00:28:38.866 { 00:28:38.866 "name": "BaseBdev2", 00:28:38.866 "uuid": "d5f012ae-559f-5cba-91d3-e0e7bad7e2d6", 00:28:38.866 "is_configured": true, 00:28:38.866 "data_offset": 2048, 00:28:38.866 "data_size": 63488 00:28:38.866 } 00:28:38.866 ] 00:28:38.866 }' 00:28:38.866 04:23:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:38.866 04:23:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:38.866 04:23:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:39.124 04:23:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:39.124 04:23:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:28:39.381 [2024-07-23 04:23:47.970445] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:28:39.381 [2024-07-23 04:23:47.970657] bdev_raid.c: 
851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:28:39.639 [2024-07-23 04:23:48.218996] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:28:39.897 04:23:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:39.897 04:23:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:39.897 04:23:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:39.897 04:23:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:39.897 04:23:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:40.156 04:23:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:40.156 04:23:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:40.156 04:23:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:40.156 [2024-07-23 04:23:48.792914] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:28:40.156 04:23:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:40.156 "name": "raid_bdev1", 00:28:40.156 "uuid": "f1178292-25c5-499e-ad5c-6f62ead7ab4f", 00:28:40.156 "strip_size_kb": 0, 00:28:40.156 "state": "online", 00:28:40.156 "raid_level": "raid1", 00:28:40.156 "superblock": true, 00:28:40.156 "num_base_bdevs": 2, 00:28:40.156 "num_base_bdevs_discovered": 2, 00:28:40.156 "num_base_bdevs_operational": 2, 00:28:40.156 "process": { 00:28:40.156 "type": "rebuild", 00:28:40.156 "target": 
"spare", 00:28:40.156 "progress": { 00:28:40.156 "blocks": 53248, 00:28:40.156 "percent": 83 00:28:40.156 } 00:28:40.156 }, 00:28:40.156 "base_bdevs_list": [ 00:28:40.156 { 00:28:40.156 "name": "spare", 00:28:40.156 "uuid": "478dc2c6-3bb2-5fad-a1a4-39366ce0ba78", 00:28:40.156 "is_configured": true, 00:28:40.156 "data_offset": 2048, 00:28:40.156 "data_size": 63488 00:28:40.156 }, 00:28:40.156 { 00:28:40.156 "name": "BaseBdev2", 00:28:40.156 "uuid": "d5f012ae-559f-5cba-91d3-e0e7bad7e2d6", 00:28:40.156 "is_configured": true, 00:28:40.156 "data_offset": 2048, 00:28:40.156 "data_size": 63488 00:28:40.156 } 00:28:40.156 ] 00:28:40.156 }' 00:28:40.156 04:23:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:40.415 04:23:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:40.415 04:23:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:40.415 04:23:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:40.415 04:23:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:28:40.672 [2024-07-23 04:23:49.351027] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:28:40.672 [2024-07-23 04:23:49.451278] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:28:40.672 [2024-07-23 04:23:49.452984] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:41.605 04:23:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:41.605 04:23:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:41.605 04:23:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:41.605 04:23:50 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:41.605 04:23:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:41.605 04:23:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:41.605 04:23:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:41.605 04:23:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:41.605 04:23:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:41.605 "name": "raid_bdev1", 00:28:41.606 "uuid": "f1178292-25c5-499e-ad5c-6f62ead7ab4f", 00:28:41.606 "strip_size_kb": 0, 00:28:41.606 "state": "online", 00:28:41.606 "raid_level": "raid1", 00:28:41.606 "superblock": true, 00:28:41.606 "num_base_bdevs": 2, 00:28:41.606 "num_base_bdevs_discovered": 2, 00:28:41.606 "num_base_bdevs_operational": 2, 00:28:41.606 "base_bdevs_list": [ 00:28:41.606 { 00:28:41.606 "name": "spare", 00:28:41.606 "uuid": "478dc2c6-3bb2-5fad-a1a4-39366ce0ba78", 00:28:41.606 "is_configured": true, 00:28:41.606 "data_offset": 2048, 00:28:41.606 "data_size": 63488 00:28:41.606 }, 00:28:41.606 { 00:28:41.606 "name": "BaseBdev2", 00:28:41.606 "uuid": "d5f012ae-559f-5cba-91d3-e0e7bad7e2d6", 00:28:41.606 "is_configured": true, 00:28:41.606 "data_offset": 2048, 00:28:41.606 "data_size": 63488 00:28:41.606 } 00:28:41.606 ] 00:28:41.606 }' 00:28:41.606 04:23:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:41.606 04:23:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:28:41.606 04:23:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:41.606 04:23:50 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:28:41.606 04:23:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # break 00:28:41.606 04:23:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:41.606 04:23:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:41.606 04:23:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:41.606 04:23:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:41.606 04:23:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:41.606 04:23:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:41.606 04:23:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:41.865 04:23:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:41.865 "name": "raid_bdev1", 00:28:41.865 "uuid": "f1178292-25c5-499e-ad5c-6f62ead7ab4f", 00:28:41.865 "strip_size_kb": 0, 00:28:41.865 "state": "online", 00:28:41.865 "raid_level": "raid1", 00:28:41.865 "superblock": true, 00:28:41.865 "num_base_bdevs": 2, 00:28:41.865 "num_base_bdevs_discovered": 2, 00:28:41.865 "num_base_bdevs_operational": 2, 00:28:41.865 "base_bdevs_list": [ 00:28:41.865 { 00:28:41.865 "name": "spare", 00:28:41.865 "uuid": "478dc2c6-3bb2-5fad-a1a4-39366ce0ba78", 00:28:41.865 "is_configured": true, 00:28:41.865 "data_offset": 2048, 00:28:41.865 "data_size": 63488 00:28:41.865 }, 00:28:41.865 { 00:28:41.865 "name": "BaseBdev2", 00:28:41.865 "uuid": "d5f012ae-559f-5cba-91d3-e0e7bad7e2d6", 00:28:41.865 "is_configured": true, 00:28:41.865 "data_offset": 2048, 00:28:41.865 
"data_size": 63488 00:28:41.865 } 00:28:41.865 ] 00:28:41.865 }' 00:28:41.865 04:23:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:41.865 04:23:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:41.865 04:23:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:42.123 04:23:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:42.123 04:23:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:42.123 04:23:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:42.123 04:23:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:42.123 04:23:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:42.123 04:23:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:42.123 04:23:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:42.123 04:23:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:42.123 04:23:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:42.123 04:23:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:42.123 04:23:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:42.123 04:23:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:42.123 04:23:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:42.382 
04:23:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:42.382 "name": "raid_bdev1", 00:28:42.382 "uuid": "f1178292-25c5-499e-ad5c-6f62ead7ab4f", 00:28:42.382 "strip_size_kb": 0, 00:28:42.382 "state": "online", 00:28:42.382 "raid_level": "raid1", 00:28:42.382 "superblock": true, 00:28:42.382 "num_base_bdevs": 2, 00:28:42.382 "num_base_bdevs_discovered": 2, 00:28:42.382 "num_base_bdevs_operational": 2, 00:28:42.382 "base_bdevs_list": [ 00:28:42.382 { 00:28:42.382 "name": "spare", 00:28:42.382 "uuid": "478dc2c6-3bb2-5fad-a1a4-39366ce0ba78", 00:28:42.382 "is_configured": true, 00:28:42.382 "data_offset": 2048, 00:28:42.382 "data_size": 63488 00:28:42.382 }, 00:28:42.382 { 00:28:42.382 "name": "BaseBdev2", 00:28:42.382 "uuid": "d5f012ae-559f-5cba-91d3-e0e7bad7e2d6", 00:28:42.382 "is_configured": true, 00:28:42.382 "data_offset": 2048, 00:28:42.382 "data_size": 63488 00:28:42.382 } 00:28:42.382 ] 00:28:42.382 }' 00:28:42.382 04:23:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:42.382 04:23:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:42.951 04:23:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:42.951 [2024-07-23 04:23:51.678015] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:42.951 [2024-07-23 04:23:51.678056] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:43.209 00:28:43.209 Latency(us) 00:28:43.209 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:43.209 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:28:43.209 raid_bdev1 : 11.77 99.29 297.86 0.00 0.00 13371.95 330.96 110729.63 00:28:43.209 
=================================================================================================================== 00:28:43.209 Total : 99.29 297.86 0.00 0.00 13371.95 330.96 110729.63 00:28:43.209 [2024-07-23 04:23:51.791379] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:43.209 [2024-07-23 04:23:51.791427] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:43.209 [2024-07-23 04:23:51.791527] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:43.209 [2024-07-23 04:23:51.791545] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041a80 name raid_bdev1, state offline 00:28:43.209 0 00:28:43.209 04:23:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:43.209 04:23:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # jq length 00:28:43.468 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:28:43.468 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:28:43.468 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:28:43.468 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:28:43.468 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:43.468 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:28:43.468 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:43.468 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:28:43.468 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:28:43.468 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:28:43.468 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:43.468 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:43.468 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:28:43.726 /dev/nbd0 00:28:43.726 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:43.726 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:43.726 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:28:43.726 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:28:43.726 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:28:43.726 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:28:43.726 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:28:43.726 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:28:43.726 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:28:43.726 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:28:43.726 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:43.726 1+0 records in 00:28:43.726 1+0 records out 00:28:43.726 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292748 s, 14.0 MB/s 00:28:43.726 
04:23:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:43.726 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:28:43.726 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:43.726 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:28:43.726 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:28:43.726 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:43.726 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:43.726 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:28:43.726 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev2 ']' 00:28:43.726 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:28:43.726 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:43.726 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:28:43.726 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:43.726 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:28:43.726 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:43.726 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:28:43.726 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:43.726 04:23:52 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:43.726 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:28:43.984 /dev/nbd1 00:28:43.984 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:43.984 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:43.984 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:28:43.984 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:28:43.984 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:28:43.984 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:28:43.984 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:28:43.984 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:28:43.984 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:28:43.984 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:28:43.984 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:43.984 1+0 records in 00:28:43.984 1+0 records out 00:28:43.984 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000300542 s, 13.6 MB/s 00:28:43.984 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:43.984 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # 
size=4096 00:28:43.984 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:43.984 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:28:43.984 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:28:43.984 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:43.984 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:43.984 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:28:44.241 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:28:44.241 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:44.241 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:28:44.241 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:44.241 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:28:44.241 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:44.241 04:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:28:44.500 04:23:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:44.500 04:23:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:44.500 04:23:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:44.500 04:23:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 
)) 00:28:44.500 04:23:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:44.500 04:23:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:44.500 04:23:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:28:44.500 04:23:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:28:44.500 04:23:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:28:44.500 04:23:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:44.500 04:23:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:44.500 04:23:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:44.500 04:23:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:28:44.500 04:23:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:44.500 04:23:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:28:44.500 04:23:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:44.757 04:23:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:44.757 04:23:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:44.757 04:23:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:44.757 04:23:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:44.757 04:23:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:44.757 04:23:53 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@41 -- # break 00:28:44.757 04:23:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:28:44.757 04:23:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:28:44.757 04:23:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:44.757 04:23:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:45.015 [2024-07-23 04:23:53.730092] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:45.015 [2024-07-23 04:23:53.730159] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:45.015 [2024-07-23 04:23:53.730190] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043580 00:28:45.015 [2024-07-23 04:23:53.730206] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:45.015 [2024-07-23 04:23:53.733004] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:45.015 [2024-07-23 04:23:53.733038] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:45.015 [2024-07-23 04:23:53.733158] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:45.015 [2024-07-23 04:23:53.733226] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:45.015 [2024-07-23 04:23:53.733457] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:45.015 spare 00:28:45.015 04:23:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:45.015 04:23:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 
-- # local raid_bdev_name=raid_bdev1 00:28:45.015 04:23:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:45.015 04:23:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:45.015 04:23:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:45.015 04:23:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:45.015 04:23:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:45.015 04:23:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:45.015 04:23:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:45.015 04:23:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:45.015 04:23:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:45.015 04:23:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:45.292 [2024-07-23 04:23:53.833806] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000043b80 00:28:45.292 [2024-07-23 04:23:53.833839] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:45.292 [2024-07-23 04:23:53.834182] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00001f930 00:28:45.292 [2024-07-23 04:23:53.834475] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000043b80 00:28:45.292 [2024-07-23 04:23:53.834492] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000043b80 00:28:45.292 [2024-07-23 04:23:53.834720] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:28:45.292 04:23:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:45.292 "name": "raid_bdev1", 00:28:45.292 "uuid": "f1178292-25c5-499e-ad5c-6f62ead7ab4f", 00:28:45.292 "strip_size_kb": 0, 00:28:45.292 "state": "online", 00:28:45.292 "raid_level": "raid1", 00:28:45.292 "superblock": true, 00:28:45.292 "num_base_bdevs": 2, 00:28:45.292 "num_base_bdevs_discovered": 2, 00:28:45.292 "num_base_bdevs_operational": 2, 00:28:45.292 "base_bdevs_list": [ 00:28:45.292 { 00:28:45.292 "name": "spare", 00:28:45.292 "uuid": "478dc2c6-3bb2-5fad-a1a4-39366ce0ba78", 00:28:45.292 "is_configured": true, 00:28:45.292 "data_offset": 2048, 00:28:45.292 "data_size": 63488 00:28:45.292 }, 00:28:45.292 { 00:28:45.292 "name": "BaseBdev2", 00:28:45.292 "uuid": "d5f012ae-559f-5cba-91d3-e0e7bad7e2d6", 00:28:45.292 "is_configured": true, 00:28:45.292 "data_offset": 2048, 00:28:45.292 "data_size": 63488 00:28:45.292 } 00:28:45.292 ] 00:28:45.292 }' 00:28:45.292 04:23:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:45.292 04:23:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:45.856 04:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:45.856 04:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:45.856 04:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:45.856 04:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:45.856 04:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:45.856 04:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:45.856 04:23:54 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:46.112 04:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:46.112 "name": "raid_bdev1", 00:28:46.112 "uuid": "f1178292-25c5-499e-ad5c-6f62ead7ab4f", 00:28:46.112 "strip_size_kb": 0, 00:28:46.112 "state": "online", 00:28:46.112 "raid_level": "raid1", 00:28:46.112 "superblock": true, 00:28:46.112 "num_base_bdevs": 2, 00:28:46.112 "num_base_bdevs_discovered": 2, 00:28:46.112 "num_base_bdevs_operational": 2, 00:28:46.113 "base_bdevs_list": [ 00:28:46.113 { 00:28:46.113 "name": "spare", 00:28:46.113 "uuid": "478dc2c6-3bb2-5fad-a1a4-39366ce0ba78", 00:28:46.113 "is_configured": true, 00:28:46.113 "data_offset": 2048, 00:28:46.113 "data_size": 63488 00:28:46.113 }, 00:28:46.113 { 00:28:46.113 "name": "BaseBdev2", 00:28:46.113 "uuid": "d5f012ae-559f-5cba-91d3-e0e7bad7e2d6", 00:28:46.113 "is_configured": true, 00:28:46.113 "data_offset": 2048, 00:28:46.113 "data_size": 63488 00:28:46.113 } 00:28:46.113 ] 00:28:46.113 }' 00:28:46.113 04:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:46.113 04:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:46.113 04:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:46.113 04:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:46.113 04:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:46.113 04:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:28:46.369 04:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:28:46.369 04:23:55 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:46.627 [2024-07-23 04:23:55.302950] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:46.627 04:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:46.627 04:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:46.627 04:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:46.627 04:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:46.627 04:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:46.627 04:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:46.627 04:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:46.627 04:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:46.627 04:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:46.627 04:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:46.627 04:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:46.627 04:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:46.885 04:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:46.885 "name": "raid_bdev1", 00:28:46.885 "uuid": "f1178292-25c5-499e-ad5c-6f62ead7ab4f", 00:28:46.885 "strip_size_kb": 0, 00:28:46.885 
"state": "online", 00:28:46.885 "raid_level": "raid1", 00:28:46.885 "superblock": true, 00:28:46.885 "num_base_bdevs": 2, 00:28:46.885 "num_base_bdevs_discovered": 1, 00:28:46.885 "num_base_bdevs_operational": 1, 00:28:46.885 "base_bdevs_list": [ 00:28:46.885 { 00:28:46.885 "name": null, 00:28:46.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:46.885 "is_configured": false, 00:28:46.885 "data_offset": 2048, 00:28:46.885 "data_size": 63488 00:28:46.885 }, 00:28:46.885 { 00:28:46.885 "name": "BaseBdev2", 00:28:46.885 "uuid": "d5f012ae-559f-5cba-91d3-e0e7bad7e2d6", 00:28:46.885 "is_configured": true, 00:28:46.885 "data_offset": 2048, 00:28:46.885 "data_size": 63488 00:28:46.885 } 00:28:46.885 ] 00:28:46.885 }' 00:28:46.885 04:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:46.885 04:23:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:47.450 04:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:47.709 [2024-07-23 04:23:56.334345] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:47.709 [2024-07-23 04:23:56.334566] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:28:47.709 [2024-07-23 04:23:56.334594] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:28:47.709 [2024-07-23 04:23:56.334641] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:47.709 [2024-07-23 04:23:56.360445] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00001fa00 00:28:47.709 [2024-07-23 04:23:56.362815] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:47.709 04:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # sleep 1 00:28:48.642 04:23:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:48.642 04:23:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:48.642 04:23:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:48.642 04:23:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:48.642 04:23:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:48.642 04:23:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:48.642 04:23:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:48.900 04:23:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:48.900 "name": "raid_bdev1", 00:28:48.900 "uuid": "f1178292-25c5-499e-ad5c-6f62ead7ab4f", 00:28:48.900 "strip_size_kb": 0, 00:28:48.900 "state": "online", 00:28:48.900 "raid_level": "raid1", 00:28:48.900 "superblock": true, 00:28:48.900 "num_base_bdevs": 2, 00:28:48.900 "num_base_bdevs_discovered": 2, 00:28:48.900 "num_base_bdevs_operational": 2, 00:28:48.900 "process": { 00:28:48.900 "type": "rebuild", 00:28:48.900 "target": "spare", 00:28:48.900 "progress": { 00:28:48.900 "blocks": 24576, 
00:28:48.900 "percent": 38 00:28:48.900 } 00:28:48.900 }, 00:28:48.900 "base_bdevs_list": [ 00:28:48.900 { 00:28:48.900 "name": "spare", 00:28:48.900 "uuid": "478dc2c6-3bb2-5fad-a1a4-39366ce0ba78", 00:28:48.900 "is_configured": true, 00:28:48.900 "data_offset": 2048, 00:28:48.900 "data_size": 63488 00:28:48.900 }, 00:28:48.900 { 00:28:48.900 "name": "BaseBdev2", 00:28:48.900 "uuid": "d5f012ae-559f-5cba-91d3-e0e7bad7e2d6", 00:28:48.900 "is_configured": true, 00:28:48.900 "data_offset": 2048, 00:28:48.900 "data_size": 63488 00:28:48.900 } 00:28:48.900 ] 00:28:48.900 }' 00:28:48.900 04:23:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:48.900 04:23:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:48.900 04:23:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:49.158 04:23:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:49.158 04:23:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:49.158 [2024-07-23 04:23:57.912047] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:49.416 [2024-07-23 04:23:57.976024] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:49.416 [2024-07-23 04:23:57.976108] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:49.416 [2024-07-23 04:23:57.976130] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:49.416 [2024-07-23 04:23:57.976154] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:49.416 04:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 1 00:28:49.416 04:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:49.416 04:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:49.416 04:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:49.416 04:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:49.416 04:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:49.416 04:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:49.416 04:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:49.416 04:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:49.416 04:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:49.416 04:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:49.416 04:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:49.674 04:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:49.674 "name": "raid_bdev1", 00:28:49.674 "uuid": "f1178292-25c5-499e-ad5c-6f62ead7ab4f", 00:28:49.674 "strip_size_kb": 0, 00:28:49.674 "state": "online", 00:28:49.674 "raid_level": "raid1", 00:28:49.674 "superblock": true, 00:28:49.674 "num_base_bdevs": 2, 00:28:49.674 "num_base_bdevs_discovered": 1, 00:28:49.674 "num_base_bdevs_operational": 1, 00:28:49.674 "base_bdevs_list": [ 00:28:49.674 { 00:28:49.674 "name": null, 00:28:49.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:49.674 "is_configured": false, 00:28:49.674 
"data_offset": 2048, 00:28:49.674 "data_size": 63488 00:28:49.674 }, 00:28:49.674 { 00:28:49.674 "name": "BaseBdev2", 00:28:49.674 "uuid": "d5f012ae-559f-5cba-91d3-e0e7bad7e2d6", 00:28:49.674 "is_configured": true, 00:28:49.674 "data_offset": 2048, 00:28:49.674 "data_size": 63488 00:28:49.674 } 00:28:49.674 ] 00:28:49.674 }' 00:28:49.674 04:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:49.674 04:23:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:50.239 04:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:50.497 [2024-07-23 04:23:59.051057] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:50.497 [2024-07-23 04:23:59.051128] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:50.497 [2024-07-23 04:23:59.051166] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000044180 00:28:50.497 [2024-07-23 04:23:59.051185] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:50.497 [2024-07-23 04:23:59.051822] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:50.497 [2024-07-23 04:23:59.051853] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:50.497 [2024-07-23 04:23:59.051971] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:50.497 [2024-07-23 04:23:59.051996] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:28:50.497 [2024-07-23 04:23:59.052014] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:28:50.497 [2024-07-23 04:23:59.052045] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:50.497 [2024-07-23 04:23:59.077674] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00001fad0 00:28:50.497 spare 00:28:50.497 [2024-07-23 04:23:59.080033] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:50.497 04:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # sleep 1 00:28:51.430 04:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:51.430 04:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:51.430 04:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:51.430 04:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:51.430 04:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:51.430 04:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:51.430 04:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:51.687 04:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:51.687 "name": "raid_bdev1", 00:28:51.687 "uuid": "f1178292-25c5-499e-ad5c-6f62ead7ab4f", 00:28:51.687 "strip_size_kb": 0, 00:28:51.687 "state": "online", 00:28:51.687 "raid_level": "raid1", 00:28:51.687 "superblock": true, 00:28:51.687 "num_base_bdevs": 2, 00:28:51.687 "num_base_bdevs_discovered": 2, 00:28:51.687 "num_base_bdevs_operational": 2, 00:28:51.687 "process": { 00:28:51.687 "type": "rebuild", 00:28:51.687 "target": "spare", 00:28:51.687 "progress": { 00:28:51.687 
"blocks": 24576, 00:28:51.687 "percent": 38 00:28:51.687 } 00:28:51.687 }, 00:28:51.687 "base_bdevs_list": [ 00:28:51.687 { 00:28:51.687 "name": "spare", 00:28:51.687 "uuid": "478dc2c6-3bb2-5fad-a1a4-39366ce0ba78", 00:28:51.687 "is_configured": true, 00:28:51.687 "data_offset": 2048, 00:28:51.687 "data_size": 63488 00:28:51.687 }, 00:28:51.687 { 00:28:51.687 "name": "BaseBdev2", 00:28:51.687 "uuid": "d5f012ae-559f-5cba-91d3-e0e7bad7e2d6", 00:28:51.687 "is_configured": true, 00:28:51.687 "data_offset": 2048, 00:28:51.687 "data_size": 63488 00:28:51.687 } 00:28:51.687 ] 00:28:51.687 }' 00:28:51.687 04:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:51.687 04:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:51.687 04:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:51.687 04:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:51.687 04:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:51.945 [2024-07-23 04:24:00.634614] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:51.945 [2024-07-23 04:24:00.693224] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:51.945 [2024-07-23 04:24:00.693288] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:51.945 [2024-07-23 04:24:00.693313] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:51.945 [2024-07-23 04:24:00.693326] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:52.202 04:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:28:52.202 04:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:52.202 04:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:52.202 04:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:52.202 04:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:52.202 04:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:52.202 04:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:52.202 04:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:52.202 04:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:52.202 04:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:52.202 04:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:52.202 04:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:52.459 04:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:52.459 "name": "raid_bdev1", 00:28:52.459 "uuid": "f1178292-25c5-499e-ad5c-6f62ead7ab4f", 00:28:52.459 "strip_size_kb": 0, 00:28:52.459 "state": "online", 00:28:52.459 "raid_level": "raid1", 00:28:52.459 "superblock": true, 00:28:52.459 "num_base_bdevs": 2, 00:28:52.459 "num_base_bdevs_discovered": 1, 00:28:52.459 "num_base_bdevs_operational": 1, 00:28:52.459 "base_bdevs_list": [ 00:28:52.459 { 00:28:52.459 "name": null, 00:28:52.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:52.459 "is_configured": false, 00:28:52.460 
"data_offset": 2048, 00:28:52.460 "data_size": 63488 00:28:52.460 }, 00:28:52.460 { 00:28:52.460 "name": "BaseBdev2", 00:28:52.460 "uuid": "d5f012ae-559f-5cba-91d3-e0e7bad7e2d6", 00:28:52.460 "is_configured": true, 00:28:52.460 "data_offset": 2048, 00:28:52.460 "data_size": 63488 00:28:52.460 } 00:28:52.460 ] 00:28:52.460 }' 00:28:52.460 04:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:52.460 04:24:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:53.023 04:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:53.023 04:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:53.023 04:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:53.023 04:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:53.023 04:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:53.023 04:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:53.023 04:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:53.023 04:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:53.023 "name": "raid_bdev1", 00:28:53.023 "uuid": "f1178292-25c5-499e-ad5c-6f62ead7ab4f", 00:28:53.023 "strip_size_kb": 0, 00:28:53.023 "state": "online", 00:28:53.023 "raid_level": "raid1", 00:28:53.023 "superblock": true, 00:28:53.023 "num_base_bdevs": 2, 00:28:53.023 "num_base_bdevs_discovered": 1, 00:28:53.023 "num_base_bdevs_operational": 1, 00:28:53.024 "base_bdevs_list": [ 00:28:53.024 { 00:28:53.024 "name": null, 00:28:53.024 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:28:53.024 "is_configured": false, 00:28:53.024 "data_offset": 2048, 00:28:53.024 "data_size": 63488 00:28:53.024 }, 00:28:53.024 { 00:28:53.024 "name": "BaseBdev2", 00:28:53.024 "uuid": "d5f012ae-559f-5cba-91d3-e0e7bad7e2d6", 00:28:53.024 "is_configured": true, 00:28:53.024 "data_offset": 2048, 00:28:53.024 "data_size": 63488 00:28:53.024 } 00:28:53.024 ] 00:28:53.024 }' 00:28:53.024 04:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:53.281 04:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:53.281 04:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:53.281 04:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:53.281 04:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:28:53.538 04:24:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:53.538 [2024-07-23 04:24:02.311676] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:53.538 [2024-07-23 04:24:02.311751] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:53.538 [2024-07-23 04:24:02.311782] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000044780 00:28:53.538 [2024-07-23 04:24:02.311798] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:53.538 [2024-07-23 04:24:02.312412] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:53.538 [2024-07-23 04:24:02.312440] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:53.538 [2024-07-23 04:24:02.312543] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:28:53.538 [2024-07-23 04:24:02.312564] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:53.538 [2024-07-23 04:24:02.312580] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:53.538 BaseBdev1 00:28:53.795 04:24:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # sleep 1 00:28:54.725 04:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:54.725 04:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:54.725 04:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:54.725 04:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:54.725 04:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:54.725 04:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:54.725 04:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:54.725 04:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:54.725 04:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:54.725 04:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:54.725 04:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:54.725 04:24:03 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:54.982 04:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:54.982 "name": "raid_bdev1", 00:28:54.982 "uuid": "f1178292-25c5-499e-ad5c-6f62ead7ab4f", 00:28:54.982 "strip_size_kb": 0, 00:28:54.982 "state": "online", 00:28:54.982 "raid_level": "raid1", 00:28:54.982 "superblock": true, 00:28:54.982 "num_base_bdevs": 2, 00:28:54.982 "num_base_bdevs_discovered": 1, 00:28:54.982 "num_base_bdevs_operational": 1, 00:28:54.982 "base_bdevs_list": [ 00:28:54.982 { 00:28:54.982 "name": null, 00:28:54.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:54.982 "is_configured": false, 00:28:54.982 "data_offset": 2048, 00:28:54.982 "data_size": 63488 00:28:54.982 }, 00:28:54.982 { 00:28:54.982 "name": "BaseBdev2", 00:28:54.982 "uuid": "d5f012ae-559f-5cba-91d3-e0e7bad7e2d6", 00:28:54.982 "is_configured": true, 00:28:54.982 "data_offset": 2048, 00:28:54.982 "data_size": 63488 00:28:54.982 } 00:28:54.982 ] 00:28:54.982 }' 00:28:54.982 04:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:54.982 04:24:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:55.545 04:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:55.545 04:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:55.545 04:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:55.545 04:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:55.545 04:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:55.545 04:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:55.545 
04:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:55.802 04:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:55.802 "name": "raid_bdev1", 00:28:55.802 "uuid": "f1178292-25c5-499e-ad5c-6f62ead7ab4f", 00:28:55.802 "strip_size_kb": 0, 00:28:55.802 "state": "online", 00:28:55.802 "raid_level": "raid1", 00:28:55.802 "superblock": true, 00:28:55.802 "num_base_bdevs": 2, 00:28:55.802 "num_base_bdevs_discovered": 1, 00:28:55.802 "num_base_bdevs_operational": 1, 00:28:55.802 "base_bdevs_list": [ 00:28:55.802 { 00:28:55.802 "name": null, 00:28:55.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:55.802 "is_configured": false, 00:28:55.802 "data_offset": 2048, 00:28:55.802 "data_size": 63488 00:28:55.802 }, 00:28:55.802 { 00:28:55.802 "name": "BaseBdev2", 00:28:55.802 "uuid": "d5f012ae-559f-5cba-91d3-e0e7bad7e2d6", 00:28:55.802 "is_configured": true, 00:28:55.802 "data_offset": 2048, 00:28:55.802 "data_size": 63488 00:28:55.802 } 00:28:55.802 ] 00:28:55.802 }' 00:28:55.802 04:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:55.802 04:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:55.802 04:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:55.802 04:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:55.802 04:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:55.802 04:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@648 -- # local es=0 00:28:55.802 04:24:04 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:55.802 04:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:55.802 04:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:55.802 04:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:55.802 04:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:55.802 04:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:55.802 04:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:55.802 04:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:55.802 04:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:28:55.802 04:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:56.163 [2024-07-23 04:24:04.694543] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:56.164 [2024-07-23 04:24:04.694722] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:56.164 [2024-07-23 04:24:04.694744] 
bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:56.164 request: 00:28:56.164 { 00:28:56.164 "base_bdev": "BaseBdev1", 00:28:56.164 "raid_bdev": "raid_bdev1", 00:28:56.164 "method": "bdev_raid_add_base_bdev", 00:28:56.164 "req_id": 1 00:28:56.164 } 00:28:56.164 Got JSON-RPC error response 00:28:56.164 response: 00:28:56.164 { 00:28:56.164 "code": -22, 00:28:56.164 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:28:56.164 } 00:28:56.164 04:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # es=1 00:28:56.164 04:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:56.164 04:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:56.164 04:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:56.164 04:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # sleep 1 00:28:57.096 04:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:57.096 04:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:57.096 04:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:57.096 04:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:57.096 04:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:57.096 04:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:57.096 04:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:57.096 04:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:57.096 04:24:05 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:57.096 04:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:57.096 04:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:57.096 04:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:57.352 04:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:57.352 "name": "raid_bdev1", 00:28:57.352 "uuid": "f1178292-25c5-499e-ad5c-6f62ead7ab4f", 00:28:57.352 "strip_size_kb": 0, 00:28:57.352 "state": "online", 00:28:57.352 "raid_level": "raid1", 00:28:57.352 "superblock": true, 00:28:57.352 "num_base_bdevs": 2, 00:28:57.352 "num_base_bdevs_discovered": 1, 00:28:57.352 "num_base_bdevs_operational": 1, 00:28:57.352 "base_bdevs_list": [ 00:28:57.352 { 00:28:57.352 "name": null, 00:28:57.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:57.352 "is_configured": false, 00:28:57.352 "data_offset": 2048, 00:28:57.352 "data_size": 63488 00:28:57.352 }, 00:28:57.352 { 00:28:57.352 "name": "BaseBdev2", 00:28:57.352 "uuid": "d5f012ae-559f-5cba-91d3-e0e7bad7e2d6", 00:28:57.352 "is_configured": true, 00:28:57.352 "data_offset": 2048, 00:28:57.352 "data_size": 63488 00:28:57.352 } 00:28:57.352 ] 00:28:57.352 }' 00:28:57.352 04:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:57.352 04:24:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:57.928 04:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:57.928 04:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:57.928 04:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # 
local process_type=none 00:28:57.928 04:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:57.928 04:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:57.928 04:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:57.928 04:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:57.928 04:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:57.928 "name": "raid_bdev1", 00:28:57.928 "uuid": "f1178292-25c5-499e-ad5c-6f62ead7ab4f", 00:28:57.928 "strip_size_kb": 0, 00:28:57.928 "state": "online", 00:28:57.928 "raid_level": "raid1", 00:28:57.928 "superblock": true, 00:28:57.928 "num_base_bdevs": 2, 00:28:57.928 "num_base_bdevs_discovered": 1, 00:28:57.928 "num_base_bdevs_operational": 1, 00:28:57.928 "base_bdevs_list": [ 00:28:57.928 { 00:28:57.928 "name": null, 00:28:57.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:57.928 "is_configured": false, 00:28:57.928 "data_offset": 2048, 00:28:57.928 "data_size": 63488 00:28:57.928 }, 00:28:57.928 { 00:28:57.928 "name": "BaseBdev2", 00:28:57.928 "uuid": "d5f012ae-559f-5cba-91d3-e0e7bad7e2d6", 00:28:57.928 "is_configured": true, 00:28:57.928 "data_offset": 2048, 00:28:57.928 "data_size": 63488 00:28:57.928 } 00:28:57.928 ] 00:28:57.928 }' 00:28:57.928 04:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:57.928 04:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:58.186 04:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:58.186 04:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 
00:28:58.186 04:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # killprocess 2773769 00:28:58.186 04:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@948 -- # '[' -z 2773769 ']' 00:28:58.186 04:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # kill -0 2773769 00:28:58.186 04:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # uname 00:28:58.186 04:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:58.186 04:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2773769 00:28:58.186 04:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:58.186 04:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:58.186 04:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2773769' 00:28:58.186 killing process with pid 2773769 00:28:58.186 04:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@967 -- # kill 2773769 00:28:58.186 Received shutdown signal, test time was about 26.785697 seconds 00:28:58.186 00:28:58.186 Latency(us) 00:28:58.186 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:58.186 =================================================================================================================== 00:28:58.186 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:58.186 [2024-07-23 04:24:06.810591] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:58.186 04:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # wait 2773769 00:28:58.186 [2024-07-23 04:24:06.810746] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:58.186 [2024-07-23 04:24:06.810820] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:58.186 [2024-07-23 04:24:06.810836] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000043b80 name raid_bdev1, state offline 00:28:58.443 [2024-07-23 04:24:07.056066] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:00.341 04:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # return 0 00:29:00.341 00:29:00.341 real 0m33.449s 00:29:00.341 user 0m50.281s 00:29:00.341 sys 0m4.598s 00:29:00.341 04:24:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:00.341 04:24:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:00.341 ************************************ 00:29:00.341 END TEST raid_rebuild_test_sb_io 00:29:00.341 ************************************ 00:29:00.341 04:24:08 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:29:00.341 04:24:08 bdev_raid -- bdev/bdev_raid.sh@876 -- # for n in 2 4 00:29:00.341 04:24:08 bdev_raid -- bdev/bdev_raid.sh@877 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:29:00.341 04:24:08 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:29:00.341 04:24:08 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:00.341 04:24:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:00.341 ************************************ 00:29:00.341 START TEST raid_rebuild_test 00:29:00.341 ************************************ 00:29:00.341 04:24:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 false false true 00:29:00.341 04:24:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:29:00.341 04:24:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:29:00.341 04:24:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:29:00.341 
04:24:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:29:00.342 04:24:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:29:00.342 04:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:29:00.342 04:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:00.342 04:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:29:00.342 04:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:00.342 04:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:00.342 04:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:29:00.342 04:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:00.342 04:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:00.342 04:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev3 00:29:00.342 04:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:00.342 04:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:00.342 04:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev4 00:29:00.342 04:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:00.342 04:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:00.342 04:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:29:00.342 04:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:29:00.342 04:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:29:00.342 04:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 
00:29:00.342 04:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:29:00.342 04:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:29:00.342 04:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:29:00.342 04:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:29:00.342 04:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:29:00.342 04:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:29:00.342 04:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=2780286 00:29:00.342 04:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 2780286 /var/tmp/spdk-raid.sock 00:29:00.342 04:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:29:00.342 04:24:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@829 -- # '[' -z 2780286 ']' 00:29:00.342 04:24:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:00.342 04:24:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:00.342 04:24:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:00.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:29:00.342 04:24:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:00.342 04:24:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:00.342 [2024-07-23 04:24:09.110968] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:29:00.342 [2024-07-23 04:24:09.111100] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2780286 ] 00:29:00.342 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:00.342 Zero copy mechanism will not be used. 00:29:00.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.601 EAL: Requested device 0000:3d:01.0 cannot be used 00:29:00.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.601 EAL: Requested device 0000:3d:01.1 cannot be used 00:29:00.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.601 EAL: Requested device 0000:3d:01.2 cannot be used 00:29:00.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.601 EAL: Requested device 0000:3d:01.3 cannot be used 00:29:00.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.601 EAL: Requested device 0000:3d:01.4 cannot be used 00:29:00.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.601 EAL: Requested device 0000:3d:01.5 cannot be used 00:29:00.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.601 EAL: Requested device 0000:3d:01.6 cannot be used 00:29:00.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.601 EAL: Requested device 0000:3d:01.7 cannot be used 00:29:00.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.601 EAL: Requested device 0000:3d:02.0 
cannot be used 00:29:00.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.601 EAL: Requested device 0000:3d:02.1 cannot be used 00:29:00.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.601 EAL: Requested device 0000:3d:02.2 cannot be used 00:29:00.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.601 EAL: Requested device 0000:3d:02.3 cannot be used 00:29:00.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.601 EAL: Requested device 0000:3d:02.4 cannot be used 00:29:00.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.601 EAL: Requested device 0000:3d:02.5 cannot be used 00:29:00.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.601 EAL: Requested device 0000:3d:02.6 cannot be used 00:29:00.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.601 EAL: Requested device 0000:3d:02.7 cannot be used 00:29:00.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.601 EAL: Requested device 0000:3f:01.0 cannot be used 00:29:00.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.601 EAL: Requested device 0000:3f:01.1 cannot be used 00:29:00.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.601 EAL: Requested device 0000:3f:01.2 cannot be used 00:29:00.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.601 EAL: Requested device 0000:3f:01.3 cannot be used 00:29:00.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.601 EAL: Requested device 0000:3f:01.4 cannot be used 00:29:00.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.601 EAL: Requested device 0000:3f:01.5 cannot be used 00:29:00.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.601 EAL: Requested device 0000:3f:01.6 cannot be used 
00:29:00.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.601 EAL: Requested device 0000:3f:01.7 cannot be used 00:29:00.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.601 EAL: Requested device 0000:3f:02.0 cannot be used 00:29:00.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.601 EAL: Requested device 0000:3f:02.1 cannot be used 00:29:00.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.601 EAL: Requested device 0000:3f:02.2 cannot be used 00:29:00.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.601 EAL: Requested device 0000:3f:02.3 cannot be used 00:29:00.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.601 EAL: Requested device 0000:3f:02.4 cannot be used 00:29:00.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.601 EAL: Requested device 0000:3f:02.5 cannot be used 00:29:00.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.601 EAL: Requested device 0000:3f:02.6 cannot be used 00:29:00.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.601 EAL: Requested device 0000:3f:02.7 cannot be used 00:29:00.601 [2024-07-23 04:24:09.325889] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:00.859 [2024-07-23 04:24:09.619868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:01.426 [2024-07-23 04:24:09.971304] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:01.426 [2024-07-23 04:24:09.971340] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:01.426 04:24:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:01.426 04:24:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # return 0 00:29:01.426 04:24:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in 
"${base_bdevs[@]}" 00:29:01.426 04:24:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:01.684 BaseBdev1_malloc 00:29:01.684 04:24:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:01.942 [2024-07-23 04:24:10.647561] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:01.942 [2024-07-23 04:24:10.647629] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:01.942 [2024-07-23 04:24:10.647661] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:29:01.942 [2024-07-23 04:24:10.647680] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:01.942 [2024-07-23 04:24:10.650462] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:01.942 [2024-07-23 04:24:10.650501] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:01.942 BaseBdev1 00:29:01.942 04:24:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:01.942 04:24:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:02.200 BaseBdev2_malloc 00:29:02.200 04:24:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:29:02.458 [2024-07-23 04:24:11.153065] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:29:02.458 [2024-07-23 04:24:11.153127] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:02.458 [2024-07-23 04:24:11.153164] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:29:02.458 [2024-07-23 04:24:11.153189] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:02.458 [2024-07-23 04:24:11.155956] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:02.458 [2024-07-23 04:24:11.155994] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:02.458 BaseBdev2 00:29:02.458 04:24:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:02.458 04:24:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:29:02.716 BaseBdev3_malloc 00:29:02.716 04:24:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:29:02.973 [2024-07-23 04:24:11.660774] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:29:02.973 [2024-07-23 04:24:11.660845] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:02.973 [2024-07-23 04:24:11.660876] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040e80 00:29:02.973 [2024-07-23 04:24:11.660894] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:02.973 [2024-07-23 04:24:11.663627] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:02.973 [2024-07-23 04:24:11.663665] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:29:02.973 BaseBdev3 00:29:02.973 04:24:11 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:02.973 04:24:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:29:03.231 BaseBdev4_malloc 00:29:03.231 04:24:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:29:03.489 [2024-07-23 04:24:12.156872] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:29:03.489 [2024-07-23 04:24:12.156942] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:03.489 [2024-07-23 04:24:12.156972] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041a80 00:29:03.489 [2024-07-23 04:24:12.156990] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:03.489 [2024-07-23 04:24:12.159764] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:03.489 [2024-07-23 04:24:12.159801] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:29:03.489 BaseBdev4 00:29:03.489 04:24:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:29:03.748 spare_malloc 00:29:03.748 04:24:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:29:04.006 spare_delay 00:29:04.006 04:24:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
spare_delay -p spare 00:29:04.264 [2024-07-23 04:24:12.871856] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:04.264 [2024-07-23 04:24:12.871915] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:04.264 [2024-07-23 04:24:12.871942] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042c80 00:29:04.264 [2024-07-23 04:24:12.871960] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:04.264 [2024-07-23 04:24:12.874715] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:04.264 [2024-07-23 04:24:12.874753] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:04.264 spare 00:29:04.264 04:24:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:29:04.522 [2024-07-23 04:24:13.096514] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:04.522 [2024-07-23 04:24:13.098836] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:04.522 [2024-07-23 04:24:13.098909] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:04.522 [2024-07-23 04:24:13.098976] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:04.522 [2024-07-23 04:24:13.099085] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000043280 00:29:04.522 [2024-07-23 04:24:13.099102] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:29:04.522 [2024-07-23 04:24:13.099485] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:29:04.522 [2024-07-23 04:24:13.099751] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x616000043280 00:29:04.522 [2024-07-23 04:24:13.099770] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000043280 00:29:04.522 [2024-07-23 04:24:13.099992] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:04.522 04:24:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:29:04.522 04:24:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:04.522 04:24:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:04.522 04:24:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:04.522 04:24:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:04.522 04:24:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:04.522 04:24:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:04.522 04:24:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:04.522 04:24:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:04.522 04:24:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:04.522 04:24:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:04.522 04:24:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:04.779 04:24:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:04.779 "name": "raid_bdev1", 00:29:04.779 "uuid": "9942a355-8903-44cf-90dc-da041d795d49", 00:29:04.779 "strip_size_kb": 0, 00:29:04.779 "state": "online", 00:29:04.779 "raid_level": "raid1", 00:29:04.779 
"superblock": false, 00:29:04.779 "num_base_bdevs": 4, 00:29:04.779 "num_base_bdevs_discovered": 4, 00:29:04.779 "num_base_bdevs_operational": 4, 00:29:04.779 "base_bdevs_list": [ 00:29:04.779 { 00:29:04.779 "name": "BaseBdev1", 00:29:04.779 "uuid": "08c9986b-0570-597a-88c5-b8314ae7f173", 00:29:04.779 "is_configured": true, 00:29:04.779 "data_offset": 0, 00:29:04.779 "data_size": 65536 00:29:04.779 }, 00:29:04.779 { 00:29:04.779 "name": "BaseBdev2", 00:29:04.779 "uuid": "64880278-74b2-5142-8dea-c45459a1e425", 00:29:04.779 "is_configured": true, 00:29:04.779 "data_offset": 0, 00:29:04.779 "data_size": 65536 00:29:04.779 }, 00:29:04.779 { 00:29:04.779 "name": "BaseBdev3", 00:29:04.779 "uuid": "692b46f6-afff-5ad3-906d-b269a3112ab9", 00:29:04.779 "is_configured": true, 00:29:04.779 "data_offset": 0, 00:29:04.779 "data_size": 65536 00:29:04.779 }, 00:29:04.779 { 00:29:04.779 "name": "BaseBdev4", 00:29:04.779 "uuid": "40af45b0-59ee-5990-9a14-419db7241a32", 00:29:04.779 "is_configured": true, 00:29:04.779 "data_offset": 0, 00:29:04.779 "data_size": 65536 00:29:04.779 } 00:29:04.779 ] 00:29:04.779 }' 00:29:04.779 04:24:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:04.779 04:24:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:05.345 04:24:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:29:05.345 04:24:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:05.345 [2024-07-23 04:24:14.123646] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:05.604 04:24:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:29:05.604 04:24:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:29:05.604 04:24:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:29:05.604 04:24:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:29:05.604 04:24:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:29:05.604 04:24:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:29:05.604 04:24:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:29:05.604 04:24:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:29:05.604 04:24:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:05.604 04:24:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:29:05.604 04:24:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:05.604 04:24:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:29:05.604 04:24:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:05.604 04:24:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:29:05.604 04:24:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:05.604 04:24:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:05.604 04:24:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:29:05.994 [2024-07-23 04:24:14.584570] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010980 00:29:05.994 /dev/nbd0 00:29:05.994 04:24:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:05.994 04:24:14 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:05.994 04:24:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:29:05.994 04:24:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:29:05.994 04:24:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:05.994 04:24:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:05.994 04:24:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:29:05.994 04:24:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:29:05.994 04:24:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:05.994 04:24:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:05.994 04:24:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:05.994 1+0 records in 00:29:05.994 1+0 records out 00:29:05.994 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270521 s, 15.1 MB/s 00:29:05.994 04:24:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:05.994 04:24:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:29:05.994 04:24:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:05.994 04:24:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:05.994 04:24:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:29:05.994 04:24:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:05.994 04:24:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:29:05.994 04:24:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:29:05.994 04:24:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:29:05.994 04:24:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:29:14.106 65536+0 records in 00:29:14.106 65536+0 records out 00:29:14.106 33554432 bytes (34 MB, 32 MiB) copied, 7.83301 s, 4.3 MB/s 00:29:14.106 04:24:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:29:14.106 04:24:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:14.106 04:24:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:14.106 04:24:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:14.106 04:24:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:29:14.106 04:24:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:14.106 04:24:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:14.106 [2024-07-23 04:24:22.731303] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:14.106 04:24:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:14.106 04:24:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:14.106 04:24:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:14.106 04:24:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:14.106 04:24:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:14.106 04:24:22 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:14.106 04:24:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:29:14.107 04:24:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:29:14.107 04:24:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:29:14.366 [2024-07-23 04:24:22.955618] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:14.366 04:24:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:14.366 04:24:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:14.366 04:24:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:14.366 04:24:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:14.366 04:24:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:14.366 04:24:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:14.366 04:24:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:14.366 04:24:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:14.366 04:24:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:14.366 04:24:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:14.366 04:24:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:14.366 04:24:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:14.625 04:24:23 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:14.625 "name": "raid_bdev1", 00:29:14.625 "uuid": "9942a355-8903-44cf-90dc-da041d795d49", 00:29:14.625 "strip_size_kb": 0, 00:29:14.625 "state": "online", 00:29:14.625 "raid_level": "raid1", 00:29:14.625 "superblock": false, 00:29:14.625 "num_base_bdevs": 4, 00:29:14.625 "num_base_bdevs_discovered": 3, 00:29:14.625 "num_base_bdevs_operational": 3, 00:29:14.625 "base_bdevs_list": [ 00:29:14.625 { 00:29:14.625 "name": null, 00:29:14.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:14.625 "is_configured": false, 00:29:14.625 "data_offset": 0, 00:29:14.625 "data_size": 65536 00:29:14.625 }, 00:29:14.625 { 00:29:14.625 "name": "BaseBdev2", 00:29:14.625 "uuid": "64880278-74b2-5142-8dea-c45459a1e425", 00:29:14.625 "is_configured": true, 00:29:14.625 "data_offset": 0, 00:29:14.625 "data_size": 65536 00:29:14.625 }, 00:29:14.625 { 00:29:14.625 "name": "BaseBdev3", 00:29:14.625 "uuid": "692b46f6-afff-5ad3-906d-b269a3112ab9", 00:29:14.625 "is_configured": true, 00:29:14.625 "data_offset": 0, 00:29:14.625 "data_size": 65536 00:29:14.625 }, 00:29:14.625 { 00:29:14.625 "name": "BaseBdev4", 00:29:14.625 "uuid": "40af45b0-59ee-5990-9a14-419db7241a32", 00:29:14.625 "is_configured": true, 00:29:14.625 "data_offset": 0, 00:29:14.625 "data_size": 65536 00:29:14.625 } 00:29:14.625 ] 00:29:14.625 }' 00:29:14.625 04:24:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:14.625 04:24:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:15.192 04:24:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:15.452 [2024-07-23 04:24:23.986443] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:15.452 [2024-07-23 04:24:24.011722] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000d145a0 00:29:15.452 [2024-07-23 04:24:24.014107] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:15.452 04:24:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:29:16.389 04:24:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:16.389 04:24:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:16.389 04:24:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:16.389 04:24:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:16.389 04:24:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:16.389 04:24:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:16.389 04:24:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:16.648 04:24:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:16.648 "name": "raid_bdev1", 00:29:16.648 "uuid": "9942a355-8903-44cf-90dc-da041d795d49", 00:29:16.649 "strip_size_kb": 0, 00:29:16.649 "state": "online", 00:29:16.649 "raid_level": "raid1", 00:29:16.649 "superblock": false, 00:29:16.649 "num_base_bdevs": 4, 00:29:16.649 "num_base_bdevs_discovered": 4, 00:29:16.649 "num_base_bdevs_operational": 4, 00:29:16.649 "process": { 00:29:16.649 "type": "rebuild", 00:29:16.649 "target": "spare", 00:29:16.649 "progress": { 00:29:16.649 "blocks": 24576, 00:29:16.649 "percent": 37 00:29:16.649 } 00:29:16.649 }, 00:29:16.649 "base_bdevs_list": [ 00:29:16.649 { 00:29:16.649 "name": "spare", 00:29:16.649 "uuid": "c47bdfaa-1a78-5876-a40d-6ccc1775bbf5", 00:29:16.649 "is_configured": true, 00:29:16.649 "data_offset": 0, 00:29:16.649 
"data_size": 65536 00:29:16.649 }, 00:29:16.649 { 00:29:16.649 "name": "BaseBdev2", 00:29:16.649 "uuid": "64880278-74b2-5142-8dea-c45459a1e425", 00:29:16.649 "is_configured": true, 00:29:16.649 "data_offset": 0, 00:29:16.649 "data_size": 65536 00:29:16.649 }, 00:29:16.649 { 00:29:16.649 "name": "BaseBdev3", 00:29:16.649 "uuid": "692b46f6-afff-5ad3-906d-b269a3112ab9", 00:29:16.649 "is_configured": true, 00:29:16.649 "data_offset": 0, 00:29:16.649 "data_size": 65536 00:29:16.649 }, 00:29:16.649 { 00:29:16.649 "name": "BaseBdev4", 00:29:16.649 "uuid": "40af45b0-59ee-5990-9a14-419db7241a32", 00:29:16.649 "is_configured": true, 00:29:16.649 "data_offset": 0, 00:29:16.649 "data_size": 65536 00:29:16.649 } 00:29:16.649 ] 00:29:16.649 }' 00:29:16.649 04:24:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:16.649 04:24:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:16.649 04:24:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:16.649 04:24:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:16.649 04:24:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:16.908 [2024-07-23 04:24:25.547053] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:16.908 [2024-07-23 04:24:25.627128] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:16.908 [2024-07-23 04:24:25.627227] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:16.908 [2024-07-23 04:24:25.627253] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:16.908 [2024-07-23 04:24:25.627272] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target 
bdev: No such device 00:29:16.908 04:24:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:16.908 04:24:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:16.908 04:24:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:16.908 04:24:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:16.908 04:24:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:16.908 04:24:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:16.908 04:24:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:16.908 04:24:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:16.908 04:24:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:16.908 04:24:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:16.908 04:24:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:16.908 04:24:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:17.167 04:24:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:17.167 "name": "raid_bdev1", 00:29:17.167 "uuid": "9942a355-8903-44cf-90dc-da041d795d49", 00:29:17.167 "strip_size_kb": 0, 00:29:17.167 "state": "online", 00:29:17.167 "raid_level": "raid1", 00:29:17.167 "superblock": false, 00:29:17.167 "num_base_bdevs": 4, 00:29:17.167 "num_base_bdevs_discovered": 3, 00:29:17.167 "num_base_bdevs_operational": 3, 00:29:17.167 "base_bdevs_list": [ 00:29:17.167 { 00:29:17.167 "name": null, 00:29:17.167 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:29:17.167 "is_configured": false, 00:29:17.167 "data_offset": 0, 00:29:17.167 "data_size": 65536 00:29:17.167 }, 00:29:17.167 { 00:29:17.167 "name": "BaseBdev2", 00:29:17.167 "uuid": "64880278-74b2-5142-8dea-c45459a1e425", 00:29:17.167 "is_configured": true, 00:29:17.167 "data_offset": 0, 00:29:17.167 "data_size": 65536 00:29:17.167 }, 00:29:17.167 { 00:29:17.167 "name": "BaseBdev3", 00:29:17.167 "uuid": "692b46f6-afff-5ad3-906d-b269a3112ab9", 00:29:17.167 "is_configured": true, 00:29:17.167 "data_offset": 0, 00:29:17.167 "data_size": 65536 00:29:17.167 }, 00:29:17.167 { 00:29:17.167 "name": "BaseBdev4", 00:29:17.167 "uuid": "40af45b0-59ee-5990-9a14-419db7241a32", 00:29:17.167 "is_configured": true, 00:29:17.167 "data_offset": 0, 00:29:17.167 "data_size": 65536 00:29:17.167 } 00:29:17.167 ] 00:29:17.167 }' 00:29:17.167 04:24:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:17.167 04:24:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:17.735 04:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:17.735 04:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:17.735 04:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:17.735 04:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:17.735 04:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:17.735 04:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:17.735 04:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:17.994 04:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:29:17.994 "name": "raid_bdev1", 00:29:17.994 "uuid": "9942a355-8903-44cf-90dc-da041d795d49", 00:29:17.994 "strip_size_kb": 0, 00:29:17.994 "state": "online", 00:29:17.994 "raid_level": "raid1", 00:29:17.994 "superblock": false, 00:29:17.994 "num_base_bdevs": 4, 00:29:17.994 "num_base_bdevs_discovered": 3, 00:29:17.994 "num_base_bdevs_operational": 3, 00:29:17.994 "base_bdevs_list": [ 00:29:17.994 { 00:29:17.994 "name": null, 00:29:17.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:17.994 "is_configured": false, 00:29:17.994 "data_offset": 0, 00:29:17.994 "data_size": 65536 00:29:17.994 }, 00:29:17.994 { 00:29:17.994 "name": "BaseBdev2", 00:29:17.994 "uuid": "64880278-74b2-5142-8dea-c45459a1e425", 00:29:17.994 "is_configured": true, 00:29:17.994 "data_offset": 0, 00:29:17.994 "data_size": 65536 00:29:17.994 }, 00:29:17.994 { 00:29:17.994 "name": "BaseBdev3", 00:29:17.995 "uuid": "692b46f6-afff-5ad3-906d-b269a3112ab9", 00:29:17.995 "is_configured": true, 00:29:17.995 "data_offset": 0, 00:29:17.995 "data_size": 65536 00:29:17.995 }, 00:29:17.995 { 00:29:17.995 "name": "BaseBdev4", 00:29:17.995 "uuid": "40af45b0-59ee-5990-9a14-419db7241a32", 00:29:17.995 "is_configured": true, 00:29:17.995 "data_offset": 0, 00:29:17.995 "data_size": 65536 00:29:17.995 } 00:29:17.995 ] 00:29:17.995 }' 00:29:17.995 04:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:17.995 04:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:17.995 04:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:18.254 04:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:18.254 04:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:18.254 [2024-07-23 
04:24:26.995004] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:18.254 [2024-07-23 04:24:27.015999] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d14670 00:29:18.254 [2024-07-23 04:24:27.018384] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:18.254 04:24:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:29:19.265 04:24:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:19.265 04:24:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:19.265 04:24:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:19.265 04:24:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:19.265 04:24:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:19.265 04:24:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:19.265 04:24:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:19.524 04:24:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:19.524 "name": "raid_bdev1", 00:29:19.524 "uuid": "9942a355-8903-44cf-90dc-da041d795d49", 00:29:19.524 "strip_size_kb": 0, 00:29:19.524 "state": "online", 00:29:19.524 "raid_level": "raid1", 00:29:19.524 "superblock": false, 00:29:19.524 "num_base_bdevs": 4, 00:29:19.524 "num_base_bdevs_discovered": 4, 00:29:19.524 "num_base_bdevs_operational": 4, 00:29:19.524 "process": { 00:29:19.524 "type": "rebuild", 00:29:19.524 "target": "spare", 00:29:19.524 "progress": { 00:29:19.524 "blocks": 24576, 00:29:19.524 "percent": 37 00:29:19.524 } 00:29:19.524 }, 00:29:19.524 
"base_bdevs_list": [ 00:29:19.524 { 00:29:19.524 "name": "spare", 00:29:19.524 "uuid": "c47bdfaa-1a78-5876-a40d-6ccc1775bbf5", 00:29:19.524 "is_configured": true, 00:29:19.524 "data_offset": 0, 00:29:19.524 "data_size": 65536 00:29:19.524 }, 00:29:19.524 { 00:29:19.524 "name": "BaseBdev2", 00:29:19.524 "uuid": "64880278-74b2-5142-8dea-c45459a1e425", 00:29:19.524 "is_configured": true, 00:29:19.524 "data_offset": 0, 00:29:19.524 "data_size": 65536 00:29:19.524 }, 00:29:19.524 { 00:29:19.524 "name": "BaseBdev3", 00:29:19.524 "uuid": "692b46f6-afff-5ad3-906d-b269a3112ab9", 00:29:19.524 "is_configured": true, 00:29:19.524 "data_offset": 0, 00:29:19.524 "data_size": 65536 00:29:19.524 }, 00:29:19.524 { 00:29:19.524 "name": "BaseBdev4", 00:29:19.524 "uuid": "40af45b0-59ee-5990-9a14-419db7241a32", 00:29:19.524 "is_configured": true, 00:29:19.524 "data_offset": 0, 00:29:19.524 "data_size": 65536 00:29:19.524 } 00:29:19.524 ] 00:29:19.524 }' 00:29:19.524 04:24:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:19.783 04:24:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:19.783 04:24:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:19.783 04:24:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:19.783 04:24:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:29:19.783 04:24:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:29:19.783 04:24:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:29:19.783 04:24:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:29:19.783 04:24:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@694 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_remove_base_bdev BaseBdev2 00:29:19.783 [2024-07-23 04:24:28.564292] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:29:20.042 [2024-07-23 04:24:28.631397] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d14670 00:29:20.042 04:24:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:29:20.042 04:24:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:29:20.042 04:24:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:20.042 04:24:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:20.042 04:24:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:20.042 04:24:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:20.042 04:24:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:20.042 04:24:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:20.042 04:24:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:20.301 04:24:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:20.301 "name": "raid_bdev1", 00:29:20.301 "uuid": "9942a355-8903-44cf-90dc-da041d795d49", 00:29:20.301 "strip_size_kb": 0, 00:29:20.301 "state": "online", 00:29:20.301 "raid_level": "raid1", 00:29:20.301 "superblock": false, 00:29:20.301 "num_base_bdevs": 4, 00:29:20.301 "num_base_bdevs_discovered": 3, 00:29:20.301 "num_base_bdevs_operational": 3, 00:29:20.301 "process": { 00:29:20.301 "type": "rebuild", 00:29:20.301 "target": "spare", 00:29:20.301 "progress": { 00:29:20.301 "blocks": 36864, 00:29:20.301 "percent": 56 00:29:20.301 } 
00:29:20.301 }, 00:29:20.301 "base_bdevs_list": [ 00:29:20.301 { 00:29:20.301 "name": "spare", 00:29:20.301 "uuid": "c47bdfaa-1a78-5876-a40d-6ccc1775bbf5", 00:29:20.301 "is_configured": true, 00:29:20.301 "data_offset": 0, 00:29:20.301 "data_size": 65536 00:29:20.301 }, 00:29:20.301 { 00:29:20.301 "name": null, 00:29:20.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:20.301 "is_configured": false, 00:29:20.301 "data_offset": 0, 00:29:20.301 "data_size": 65536 00:29:20.301 }, 00:29:20.301 { 00:29:20.301 "name": "BaseBdev3", 00:29:20.301 "uuid": "692b46f6-afff-5ad3-906d-b269a3112ab9", 00:29:20.301 "is_configured": true, 00:29:20.301 "data_offset": 0, 00:29:20.301 "data_size": 65536 00:29:20.301 }, 00:29:20.301 { 00:29:20.301 "name": "BaseBdev4", 00:29:20.301 "uuid": "40af45b0-59ee-5990-9a14-419db7241a32", 00:29:20.301 "is_configured": true, 00:29:20.301 "data_offset": 0, 00:29:20.301 "data_size": 65536 00:29:20.301 } 00:29:20.301 ] 00:29:20.301 }' 00:29:20.301 04:24:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:20.301 04:24:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:20.301 04:24:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:20.301 04:24:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:20.301 04:24:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=954 00:29:20.301 04:24:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:20.301 04:24:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:20.301 04:24:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:20.301 04:24:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:20.301 04:24:28 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:20.301 04:24:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:20.301 04:24:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:20.301 04:24:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:20.560 04:24:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:20.560 "name": "raid_bdev1", 00:29:20.560 "uuid": "9942a355-8903-44cf-90dc-da041d795d49", 00:29:20.560 "strip_size_kb": 0, 00:29:20.560 "state": "online", 00:29:20.560 "raid_level": "raid1", 00:29:20.560 "superblock": false, 00:29:20.560 "num_base_bdevs": 4, 00:29:20.560 "num_base_bdevs_discovered": 3, 00:29:20.560 "num_base_bdevs_operational": 3, 00:29:20.560 "process": { 00:29:20.560 "type": "rebuild", 00:29:20.560 "target": "spare", 00:29:20.560 "progress": { 00:29:20.560 "blocks": 43008, 00:29:20.560 "percent": 65 00:29:20.560 } 00:29:20.560 }, 00:29:20.560 "base_bdevs_list": [ 00:29:20.560 { 00:29:20.560 "name": "spare", 00:29:20.561 "uuid": "c47bdfaa-1a78-5876-a40d-6ccc1775bbf5", 00:29:20.561 "is_configured": true, 00:29:20.561 "data_offset": 0, 00:29:20.561 "data_size": 65536 00:29:20.561 }, 00:29:20.561 { 00:29:20.561 "name": null, 00:29:20.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:20.561 "is_configured": false, 00:29:20.561 "data_offset": 0, 00:29:20.561 "data_size": 65536 00:29:20.561 }, 00:29:20.561 { 00:29:20.561 "name": "BaseBdev3", 00:29:20.561 "uuid": "692b46f6-afff-5ad3-906d-b269a3112ab9", 00:29:20.561 "is_configured": true, 00:29:20.561 "data_offset": 0, 00:29:20.561 "data_size": 65536 00:29:20.561 }, 00:29:20.561 { 00:29:20.561 "name": "BaseBdev4", 00:29:20.561 "uuid": "40af45b0-59ee-5990-9a14-419db7241a32", 00:29:20.561 "is_configured": true, 
00:29:20.561 "data_offset": 0, 00:29:20.561 "data_size": 65536 00:29:20.561 } 00:29:20.561 ] 00:29:20.561 }' 00:29:20.561 04:24:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:20.561 04:24:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:20.561 04:24:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:20.561 04:24:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:20.561 04:24:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:29:21.498 [2024-07-23 04:24:30.244423] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:29:21.498 [2024-07-23 04:24:30.244502] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:29:21.498 [2024-07-23 04:24:30.244557] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:21.758 04:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:21.758 04:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:21.758 04:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:21.758 04:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:21.758 04:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:21.758 04:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:21.758 04:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:21.758 04:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:29:21.758 04:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:21.758 "name": "raid_bdev1", 00:29:21.758 "uuid": "9942a355-8903-44cf-90dc-da041d795d49", 00:29:21.758 "strip_size_kb": 0, 00:29:21.758 "state": "online", 00:29:21.758 "raid_level": "raid1", 00:29:21.758 "superblock": false, 00:29:21.758 "num_base_bdevs": 4, 00:29:21.758 "num_base_bdevs_discovered": 3, 00:29:21.758 "num_base_bdevs_operational": 3, 00:29:21.758 "base_bdevs_list": [ 00:29:21.758 { 00:29:21.758 "name": "spare", 00:29:21.758 "uuid": "c47bdfaa-1a78-5876-a40d-6ccc1775bbf5", 00:29:21.758 "is_configured": true, 00:29:21.758 "data_offset": 0, 00:29:21.758 "data_size": 65536 00:29:21.758 }, 00:29:21.758 { 00:29:21.758 "name": null, 00:29:21.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:21.758 "is_configured": false, 00:29:21.758 "data_offset": 0, 00:29:21.758 "data_size": 65536 00:29:21.758 }, 00:29:21.758 { 00:29:21.758 "name": "BaseBdev3", 00:29:21.758 "uuid": "692b46f6-afff-5ad3-906d-b269a3112ab9", 00:29:21.758 "is_configured": true, 00:29:21.758 "data_offset": 0, 00:29:21.758 "data_size": 65536 00:29:21.758 }, 00:29:21.758 { 00:29:21.758 "name": "BaseBdev4", 00:29:21.758 "uuid": "40af45b0-59ee-5990-9a14-419db7241a32", 00:29:21.758 "is_configured": true, 00:29:21.758 "data_offset": 0, 00:29:21.758 "data_size": 65536 00:29:21.758 } 00:29:21.758 ] 00:29:21.758 }' 00:29:21.758 04:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:22.017 04:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:29:22.017 04:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:22.017 04:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:29:22.017 04:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:29:22.017 04:24:30 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:22.017 04:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:22.017 04:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:22.017 04:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:22.017 04:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:22.017 04:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:22.017 04:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:22.277 04:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:22.277 "name": "raid_bdev1", 00:29:22.277 "uuid": "9942a355-8903-44cf-90dc-da041d795d49", 00:29:22.277 "strip_size_kb": 0, 00:29:22.277 "state": "online", 00:29:22.277 "raid_level": "raid1", 00:29:22.277 "superblock": false, 00:29:22.277 "num_base_bdevs": 4, 00:29:22.277 "num_base_bdevs_discovered": 3, 00:29:22.277 "num_base_bdevs_operational": 3, 00:29:22.277 "base_bdevs_list": [ 00:29:22.277 { 00:29:22.277 "name": "spare", 00:29:22.277 "uuid": "c47bdfaa-1a78-5876-a40d-6ccc1775bbf5", 00:29:22.277 "is_configured": true, 00:29:22.277 "data_offset": 0, 00:29:22.277 "data_size": 65536 00:29:22.277 }, 00:29:22.277 { 00:29:22.277 "name": null, 00:29:22.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:22.277 "is_configured": false, 00:29:22.277 "data_offset": 0, 00:29:22.277 "data_size": 65536 00:29:22.277 }, 00:29:22.277 { 00:29:22.277 "name": "BaseBdev3", 00:29:22.277 "uuid": "692b46f6-afff-5ad3-906d-b269a3112ab9", 00:29:22.277 "is_configured": true, 00:29:22.277 "data_offset": 0, 00:29:22.277 "data_size": 65536 00:29:22.277 }, 00:29:22.277 { 00:29:22.277 "name": 
"BaseBdev4", 00:29:22.277 "uuid": "40af45b0-59ee-5990-9a14-419db7241a32", 00:29:22.277 "is_configured": true, 00:29:22.277 "data_offset": 0, 00:29:22.277 "data_size": 65536 00:29:22.277 } 00:29:22.277 ] 00:29:22.277 }' 00:29:22.277 04:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:22.277 04:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:22.277 04:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:22.277 04:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:22.277 04:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:22.277 04:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:22.277 04:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:22.277 04:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:22.277 04:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:22.277 04:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:22.277 04:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:22.277 04:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:22.277 04:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:22.277 04:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:22.277 04:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:22.277 04:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:29:22.536 04:24:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:22.536 "name": "raid_bdev1", 00:29:22.536 "uuid": "9942a355-8903-44cf-90dc-da041d795d49", 00:29:22.536 "strip_size_kb": 0, 00:29:22.536 "state": "online", 00:29:22.536 "raid_level": "raid1", 00:29:22.536 "superblock": false, 00:29:22.536 "num_base_bdevs": 4, 00:29:22.536 "num_base_bdevs_discovered": 3, 00:29:22.536 "num_base_bdevs_operational": 3, 00:29:22.536 "base_bdevs_list": [ 00:29:22.536 { 00:29:22.536 "name": "spare", 00:29:22.536 "uuid": "c47bdfaa-1a78-5876-a40d-6ccc1775bbf5", 00:29:22.536 "is_configured": true, 00:29:22.536 "data_offset": 0, 00:29:22.536 "data_size": 65536 00:29:22.536 }, 00:29:22.536 { 00:29:22.536 "name": null, 00:29:22.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:22.536 "is_configured": false, 00:29:22.536 "data_offset": 0, 00:29:22.536 "data_size": 65536 00:29:22.536 }, 00:29:22.536 { 00:29:22.536 "name": "BaseBdev3", 00:29:22.536 "uuid": "692b46f6-afff-5ad3-906d-b269a3112ab9", 00:29:22.536 "is_configured": true, 00:29:22.536 "data_offset": 0, 00:29:22.536 "data_size": 65536 00:29:22.536 }, 00:29:22.536 { 00:29:22.536 "name": "BaseBdev4", 00:29:22.536 "uuid": "40af45b0-59ee-5990-9a14-419db7241a32", 00:29:22.536 "is_configured": true, 00:29:22.536 "data_offset": 0, 00:29:22.536 "data_size": 65536 00:29:22.536 } 00:29:22.536 ] 00:29:22.536 }' 00:29:22.536 04:24:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:22.536 04:24:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:23.104 04:24:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:23.454 [2024-07-23 04:24:31.944278] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:23.454 [2024-07-23 04:24:31.944317] 
bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:23.454 [2024-07-23 04:24:31.944404] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:23.454 [2024-07-23 04:24:31.944504] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:23.454 [2024-07-23 04:24:31.944521] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000043280 name raid_bdev1, state offline 00:29:23.454 04:24:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:23.454 04:24:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:29:23.454 04:24:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:29:23.454 04:24:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:29:23.454 04:24:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:29:23.454 04:24:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:29:23.454 04:24:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:23.454 04:24:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:29:23.454 04:24:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:23.454 04:24:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:23.454 04:24:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:23.454 04:24:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:29:23.454 04:24:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:23.454 04:24:32 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:23.454 04:24:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:29:23.713 /dev/nbd0 00:29:23.713 04:24:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:23.713 04:24:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:23.713 04:24:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:29:23.713 04:24:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:29:23.713 04:24:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:23.713 04:24:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:23.713 04:24:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:29:23.713 04:24:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:29:23.713 04:24:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:23.713 04:24:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:23.713 04:24:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:23.713 1+0 records in 00:29:23.713 1+0 records out 00:29:23.713 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024091 s, 17.0 MB/s 00:29:23.713 04:24:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:23.713 04:24:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:29:23.713 04:24:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # 
rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:23.713 04:24:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:23.713 04:24:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:29:23.713 04:24:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:23.713 04:24:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:23.713 04:24:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:29:23.973 /dev/nbd1 00:29:23.973 04:24:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:23.973 04:24:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:23.973 04:24:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:29:23.973 04:24:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:29:23.973 04:24:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:23.973 04:24:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:23.973 04:24:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:29:23.973 04:24:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:29:23.973 04:24:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:23.973 04:24:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:23.973 04:24:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:23.973 1+0 records in 00:29:23.973 1+0 records out 00:29:23.973 4096 bytes (4.1 kB, 4.0 
KiB) copied, 0.000327469 s, 12.5 MB/s 00:29:23.973 04:24:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:23.973 04:24:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:29:23.973 04:24:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:23.973 04:24:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:23.973 04:24:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:29:23.973 04:24:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:23.973 04:24:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:23.973 04:24:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:29:24.232 04:24:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:29:24.232 04:24:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:24.232 04:24:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:24.232 04:24:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:24.232 04:24:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:29:24.232 04:24:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:24.232 04:24:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:24.492 04:24:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:24.492 04:24:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd0 00:29:24.492 04:24:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:24.492 04:24:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:24.492 04:24:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:24.492 04:24:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:24.492 04:24:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:29:24.492 04:24:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:29:24.492 04:24:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:24.492 04:24:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:29:24.751 04:24:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:24.751 04:24:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:24.751 04:24:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:24.751 04:24:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:24.751 04:24:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:24.751 04:24:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:24.751 04:24:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:29:24.751 04:24:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:29:24.751 04:24:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:29:24.751 04:24:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 2780286 00:29:24.751 04:24:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@948 -- # '[' -z 2780286 ']' 00:29:24.751 04:24:33 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # kill -0 2780286 00:29:24.751 04:24:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # uname 00:29:24.751 04:24:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:24.751 04:24:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2780286 00:29:24.751 04:24:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:24.751 04:24:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:24.751 04:24:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2780286' 00:29:24.751 killing process with pid 2780286 00:29:24.751 04:24:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@967 -- # kill 2780286 00:29:24.751 Received shutdown signal, test time was about 60.000000 seconds 00:29:24.751 00:29:24.751 Latency(us) 00:29:24.751 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:24.751 =================================================================================================================== 00:29:24.751 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:24.751 [2024-07-23 04:24:33.419810] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:24.751 04:24:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # wait 2780286 00:29:25.318 [2024-07-23 04:24:33.992669] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:27.225 04:24:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:29:27.225 00:29:27.225 real 0m26.746s 00:29:27.225 user 0m34.284s 00:29:27.225 sys 0m5.330s 00:29:27.225 04:24:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:27.225 04:24:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set 
+x 00:29:27.225 ************************************ 00:29:27.225 END TEST raid_rebuild_test 00:29:27.225 ************************************ 00:29:27.225 04:24:35 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:29:27.225 04:24:35 bdev_raid -- bdev/bdev_raid.sh@878 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:29:27.225 04:24:35 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:29:27.225 04:24:35 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:27.225 04:24:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:27.225 ************************************ 00:29:27.225 START TEST raid_rebuild_test_sb 00:29:27.225 ************************************ 00:29:27.225 04:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 true false true 00:29:27.225 04:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:29:27.225 04:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:29:27.225 04:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:29:27.225 04:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:29:27.225 04:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:29:27.225 04:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:29:27.225 04:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:27.225 04:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:29:27.225 04:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:27.225 04:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:27.225 04:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # 
echo BaseBdev2 00:29:27.225 04:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:27.225 04:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:27.225 04:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev3 00:29:27.225 04:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:27.225 04:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:27.225 04:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev4 00:29:27.225 04:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:27.225 04:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:27.225 04:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:29:27.225 04:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:29:27.225 04:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:29:27.225 04:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:29:27.225 04:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:29:27.225 04:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:29:27.225 04:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:29:27.225 04:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:29:27.225 04:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:29:27.225 04:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:29:27.225 04:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 
00:29:27.225 04:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=2784890 00:29:27.225 04:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:29:27.225 04:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 2784890 /var/tmp/spdk-raid.sock 00:29:27.225 04:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@829 -- # '[' -z 2784890 ']' 00:29:27.225 04:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:27.225 04:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:27.225 04:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:27.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:29:27.225 04:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:27.225 04:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:27.226 [2024-07-23 04:24:35.897449] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:29:27.226 [2024-07-23 04:24:35.897552] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2784890 ] 00:29:27.226 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:27.226 Zero copy mechanism will not be used. 
00:29:27.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.226 EAL: Requested device 0000:3d:01.0 cannot be used 00:29:27.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.226 EAL: Requested device 0000:3d:01.1 cannot be used 00:29:27.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.226 EAL: Requested device 0000:3d:01.2 cannot be used 00:29:27.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.226 EAL: Requested device 0000:3d:01.3 cannot be used 00:29:27.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.226 EAL: Requested device 0000:3d:01.4 cannot be used 00:29:27.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.226 EAL: Requested device 0000:3d:01.5 cannot be used 00:29:27.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.226 EAL: Requested device 0000:3d:01.6 cannot be used 00:29:27.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.226 EAL: Requested device 0000:3d:01.7 cannot be used 00:29:27.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.226 EAL: Requested device 0000:3d:02.0 cannot be used 00:29:27.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.226 EAL: Requested device 0000:3d:02.1 cannot be used 00:29:27.226 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.226 EAL: Requested device 0000:3d:02.2 cannot be used 00:29:27.485 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.485 EAL: Requested device 0000:3d:02.3 cannot be used 00:29:27.485 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.485 EAL: Requested device 0000:3d:02.4 cannot be used 00:29:27.485 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.485 EAL: Requested device 0000:3d:02.5 cannot be used 00:29:27.485 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.485 EAL: Requested device 0000:3d:02.6 cannot be used 00:29:27.485 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.485 EAL: Requested device 0000:3d:02.7 cannot be used 00:29:27.485 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.485 EAL: Requested device 0000:3f:01.0 cannot be used 00:29:27.485 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.485 EAL: Requested device 0000:3f:01.1 cannot be used 00:29:27.485 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.485 EAL: Requested device 0000:3f:01.2 cannot be used 00:29:27.485 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.485 EAL: Requested device 0000:3f:01.3 cannot be used 00:29:27.485 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.485 EAL: Requested device 0000:3f:01.4 cannot be used 00:29:27.485 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.486 EAL: Requested device 0000:3f:01.5 cannot be used 00:29:27.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.486 EAL: Requested device 0000:3f:01.6 cannot be used 00:29:27.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.486 EAL: Requested device 0000:3f:01.7 cannot be used 00:29:27.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.486 EAL: Requested device 0000:3f:02.0 cannot be used 00:29:27.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.486 EAL: Requested device 0000:3f:02.1 cannot be used 00:29:27.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.486 EAL: Requested device 0000:3f:02.2 cannot be used 00:29:27.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.486 EAL: Requested device 0000:3f:02.3 cannot be used 00:29:27.486 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.486 EAL: Requested device 0000:3f:02.4 cannot be used 00:29:27.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.486 EAL: Requested device 0000:3f:02.5 cannot be used 00:29:27.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.486 EAL: Requested device 0000:3f:02.6 cannot be used 00:29:27.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.486 EAL: Requested device 0000:3f:02.7 cannot be used 00:29:27.486 [2024-07-23 04:24:36.095771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:27.745 [2024-07-23 04:24:36.361705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:28.004 [2024-07-23 04:24:36.690920] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:28.004 [2024-07-23 04:24:36.690968] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:28.572 04:24:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:28.572 04:24:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # return 0 00:29:28.572 04:24:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:28.572 04:24:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:29.138 BaseBdev1_malloc 00:29:29.138 04:24:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:29.396 [2024-07-23 04:24:38.112573] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:29.396 [2024-07-23 04:24:38.112639] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:29:29.396 [2024-07-23 04:24:38.112669] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:29:29.396 [2024-07-23 04:24:38.112688] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:29.396 [2024-07-23 04:24:38.115451] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:29.396 [2024-07-23 04:24:38.115488] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:29.396 BaseBdev1 00:29:29.396 04:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:29.396 04:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:29.654 BaseBdev2_malloc 00:29:29.654 04:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:29:30.221 [2024-07-23 04:24:38.901016] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:29:30.221 [2024-07-23 04:24:38.901081] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:30.221 [2024-07-23 04:24:38.901107] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:29:30.221 [2024-07-23 04:24:38.901128] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:30.221 [2024-07-23 04:24:38.903882] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:30.221 [2024-07-23 04:24:38.903918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:30.221 BaseBdev2 00:29:30.221 04:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in 
"${base_bdevs[@]}" 00:29:30.221 04:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:29:30.479 BaseBdev3_malloc 00:29:30.479 04:24:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:29:30.737 [2024-07-23 04:24:39.404982] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:29:30.737 [2024-07-23 04:24:39.405052] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:30.737 [2024-07-23 04:24:39.405082] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040e80 00:29:30.737 [2024-07-23 04:24:39.405100] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:30.737 [2024-07-23 04:24:39.407783] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:30.737 [2024-07-23 04:24:39.407820] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:29:30.737 BaseBdev3 00:29:30.737 04:24:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:30.737 04:24:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:29:31.303 BaseBdev4_malloc 00:29:31.303 04:24:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:29:31.560 [2024-07-23 04:24:40.189451] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:29:31.560 [2024-07-23 
04:24:40.189525] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:31.560 [2024-07-23 04:24:40.189554] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041a80 00:29:31.561 [2024-07-23 04:24:40.189571] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:31.561 [2024-07-23 04:24:40.192349] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:31.561 [2024-07-23 04:24:40.192386] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:29:31.561 BaseBdev4 00:29:31.561 04:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:29:32.125 spare_malloc 00:29:32.125 04:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:29:32.383 spare_delay 00:29:32.383 04:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:32.949 [2024-07-23 04:24:41.471342] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:32.949 [2024-07-23 04:24:41.471405] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:32.949 [2024-07-23 04:24:41.471432] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042c80 00:29:32.949 [2024-07-23 04:24:41.471450] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:32.949 [2024-07-23 04:24:41.474182] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:32.949 [2024-07-23 04:24:41.474219] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:32.949 spare 00:29:32.949 04:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:29:32.949 [2024-07-23 04:24:41.708032] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:32.949 [2024-07-23 04:24:41.710356] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:32.949 [2024-07-23 04:24:41.710430] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:32.949 [2024-07-23 04:24:41.710498] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:32.949 [2024-07-23 04:24:41.710746] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000043280 00:29:32.949 [2024-07-23 04:24:41.710773] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:32.949 [2024-07-23 04:24:41.711128] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:29:32.949 [2024-07-23 04:24:41.711405] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000043280 00:29:32.949 [2024-07-23 04:24:41.711421] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000043280 00:29:32.949 [2024-07-23 04:24:41.711630] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:32.949 04:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:29:32.949 04:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:32.949 04:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 
00:29:32.949 04:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:32.949 04:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:32.949 04:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:32.949 04:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:32.949 04:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:32.949 04:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:32.949 04:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:32.949 04:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:32.949 04:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:33.207 04:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:33.207 "name": "raid_bdev1", 00:29:33.207 "uuid": "713805e3-eef1-4e68-9206-6b536ec88a9e", 00:29:33.207 "strip_size_kb": 0, 00:29:33.207 "state": "online", 00:29:33.207 "raid_level": "raid1", 00:29:33.207 "superblock": true, 00:29:33.207 "num_base_bdevs": 4, 00:29:33.207 "num_base_bdevs_discovered": 4, 00:29:33.207 "num_base_bdevs_operational": 4, 00:29:33.207 "base_bdevs_list": [ 00:29:33.207 { 00:29:33.207 "name": "BaseBdev1", 00:29:33.207 "uuid": "2472e966-0752-52ba-ae56-5416c98a0230", 00:29:33.207 "is_configured": true, 00:29:33.207 "data_offset": 2048, 00:29:33.207 "data_size": 63488 00:29:33.207 }, 00:29:33.207 { 00:29:33.207 "name": "BaseBdev2", 00:29:33.207 "uuid": "4bd64e88-f4ba-5ccb-81a2-698520ab1097", 00:29:33.207 "is_configured": true, 00:29:33.207 "data_offset": 2048, 00:29:33.207 "data_size": 63488 
00:29:33.207 }, 00:29:33.207 { 00:29:33.207 "name": "BaseBdev3", 00:29:33.207 "uuid": "045c68c3-79e9-5882-bde6-8de1cfad874a", 00:29:33.207 "is_configured": true, 00:29:33.207 "data_offset": 2048, 00:29:33.207 "data_size": 63488 00:29:33.207 }, 00:29:33.207 { 00:29:33.207 "name": "BaseBdev4", 00:29:33.207 "uuid": "c402448e-8cde-5527-83b8-d8b502df57fa", 00:29:33.207 "is_configured": true, 00:29:33.207 "data_offset": 2048, 00:29:33.207 "data_size": 63488 00:29:33.207 } 00:29:33.207 ] 00:29:33.207 }' 00:29:33.207 04:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:33.207 04:24:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:33.773 04:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:33.773 04:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:29:34.032 [2024-07-23 04:24:42.739218] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:34.032 04:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:29:34.032 04:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:34.032 04:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:29:34.290 04:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:29:34.290 04:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:29:34.290 04:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:29:34.290 04:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:29:34.290 04:24:42 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:29:34.290 04:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:34.290 04:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:29:34.290 04:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:34.290 04:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:29:34.290 04:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:34.290 04:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:29:34.290 04:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:34.290 04:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:34.290 04:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:29:34.857 [2024-07-23 04:24:43.468849] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010980 00:29:34.857 /dev/nbd0 00:29:34.857 04:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:34.857 04:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:34.857 04:24:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:29:34.857 04:24:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:29:34.857 04:24:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:34.857 04:24:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:34.857 04:24:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 
-- # grep -q -w nbd0 /proc/partitions 00:29:34.857 04:24:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:29:34.857 04:24:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:34.857 04:24:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:34.857 04:24:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:34.857 1+0 records in 00:29:34.857 1+0 records out 00:29:34.857 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264263 s, 15.5 MB/s 00:29:34.857 04:24:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:34.857 04:24:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:29:34.857 04:24:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:34.857 04:24:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:34.857 04:24:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:29:34.857 04:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:34.857 04:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:34.857 04:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:29:34.857 04:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:29:34.857 04:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:29:42.964 63488+0 records in 00:29:42.964 63488+0 records out 00:29:42.964 32505856 bytes (33 MB, 31 MiB) copied, 7.11115 s, 4.6 
MB/s 00:29:42.964 04:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:29:42.964 04:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:42.964 04:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:42.964 04:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:42.964 04:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:29:42.964 04:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:42.964 04:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:42.964 [2024-07-23 04:24:50.903262] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:42.964 04:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:42.964 04:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:42.964 04:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:42.964 04:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:42.964 04:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:42.964 04:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:42.964 04:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:29:42.964 04:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:29:42.964 04:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:29:42.964 
[2024-07-23 04:24:51.127977] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:42.964 04:24:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:42.964 04:24:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:42.964 04:24:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:42.964 04:24:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:42.964 04:24:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:42.964 04:24:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:42.964 04:24:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:42.964 04:24:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:42.964 04:24:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:42.964 04:24:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:42.964 04:24:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:42.964 04:24:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:42.964 04:24:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:42.964 "name": "raid_bdev1", 00:29:42.964 "uuid": "713805e3-eef1-4e68-9206-6b536ec88a9e", 00:29:42.964 "strip_size_kb": 0, 00:29:42.964 "state": "online", 00:29:42.964 "raid_level": "raid1", 00:29:42.964 "superblock": true, 00:29:42.964 "num_base_bdevs": 4, 00:29:42.964 "num_base_bdevs_discovered": 3, 00:29:42.964 "num_base_bdevs_operational": 3, 00:29:42.964 
"base_bdevs_list": [ 00:29:42.964 { 00:29:42.964 "name": null, 00:29:42.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:42.964 "is_configured": false, 00:29:42.964 "data_offset": 2048, 00:29:42.964 "data_size": 63488 00:29:42.964 }, 00:29:42.964 { 00:29:42.964 "name": "BaseBdev2", 00:29:42.964 "uuid": "4bd64e88-f4ba-5ccb-81a2-698520ab1097", 00:29:42.964 "is_configured": true, 00:29:42.964 "data_offset": 2048, 00:29:42.964 "data_size": 63488 00:29:42.964 }, 00:29:42.964 { 00:29:42.964 "name": "BaseBdev3", 00:29:42.964 "uuid": "045c68c3-79e9-5882-bde6-8de1cfad874a", 00:29:42.964 "is_configured": true, 00:29:42.964 "data_offset": 2048, 00:29:42.964 "data_size": 63488 00:29:42.964 }, 00:29:42.964 { 00:29:42.964 "name": "BaseBdev4", 00:29:42.964 "uuid": "c402448e-8cde-5527-83b8-d8b502df57fa", 00:29:42.964 "is_configured": true, 00:29:42.964 "data_offset": 2048, 00:29:42.964 "data_size": 63488 00:29:42.964 } 00:29:42.964 ] 00:29:42.964 }' 00:29:42.964 04:24:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:42.964 04:24:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:43.222 04:24:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:43.481 [2024-07-23 04:24:52.090640] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:43.481 [2024-07-23 04:24:52.116888] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caad40 00:29:43.481 [2024-07-23 04:24:52.119261] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:43.481 04:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:29:44.415 04:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:44.415 04:24:53 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:44.415 04:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:44.415 04:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:44.415 04:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:44.415 04:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:44.415 04:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:44.674 04:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:44.674 "name": "raid_bdev1", 00:29:44.674 "uuid": "713805e3-eef1-4e68-9206-6b536ec88a9e", 00:29:44.674 "strip_size_kb": 0, 00:29:44.674 "state": "online", 00:29:44.674 "raid_level": "raid1", 00:29:44.674 "superblock": true, 00:29:44.674 "num_base_bdevs": 4, 00:29:44.674 "num_base_bdevs_discovered": 4, 00:29:44.674 "num_base_bdevs_operational": 4, 00:29:44.674 "process": { 00:29:44.674 "type": "rebuild", 00:29:44.674 "target": "spare", 00:29:44.674 "progress": { 00:29:44.674 "blocks": 24576, 00:29:44.674 "percent": 38 00:29:44.674 } 00:29:44.674 }, 00:29:44.674 "base_bdevs_list": [ 00:29:44.674 { 00:29:44.674 "name": "spare", 00:29:44.674 "uuid": "8db9f8ab-6fe4-511a-94a6-f6654a122b1a", 00:29:44.674 "is_configured": true, 00:29:44.674 "data_offset": 2048, 00:29:44.674 "data_size": 63488 00:29:44.674 }, 00:29:44.674 { 00:29:44.674 "name": "BaseBdev2", 00:29:44.674 "uuid": "4bd64e88-f4ba-5ccb-81a2-698520ab1097", 00:29:44.674 "is_configured": true, 00:29:44.674 "data_offset": 2048, 00:29:44.674 "data_size": 63488 00:29:44.674 }, 00:29:44.674 { 00:29:44.674 "name": "BaseBdev3", 00:29:44.674 "uuid": "045c68c3-79e9-5882-bde6-8de1cfad874a", 
00:29:44.674 "is_configured": true, 00:29:44.674 "data_offset": 2048, 00:29:44.674 "data_size": 63488 00:29:44.674 }, 00:29:44.674 { 00:29:44.674 "name": "BaseBdev4", 00:29:44.674 "uuid": "c402448e-8cde-5527-83b8-d8b502df57fa", 00:29:44.674 "is_configured": true, 00:29:44.674 "data_offset": 2048, 00:29:44.674 "data_size": 63488 00:29:44.674 } 00:29:44.674 ] 00:29:44.674 }' 00:29:44.674 04:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:44.674 04:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:44.674 04:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:44.674 04:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:44.674 04:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:44.932 [2024-07-23 04:24:53.664132] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:45.191 [2024-07-23 04:24:53.732217] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:45.191 [2024-07-23 04:24:53.732295] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:45.191 [2024-07-23 04:24:53.732319] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:45.191 [2024-07-23 04:24:53.732334] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:45.191 04:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:45.191 04:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:45.191 04:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # 
local expected_state=online 00:29:45.191 04:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:45.191 04:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:45.191 04:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:45.191 04:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:45.191 04:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:45.191 04:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:45.191 04:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:45.191 04:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:45.191 04:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:45.449 04:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:45.449 "name": "raid_bdev1", 00:29:45.449 "uuid": "713805e3-eef1-4e68-9206-6b536ec88a9e", 00:29:45.449 "strip_size_kb": 0, 00:29:45.449 "state": "online", 00:29:45.449 "raid_level": "raid1", 00:29:45.449 "superblock": true, 00:29:45.449 "num_base_bdevs": 4, 00:29:45.449 "num_base_bdevs_discovered": 3, 00:29:45.449 "num_base_bdevs_operational": 3, 00:29:45.449 "base_bdevs_list": [ 00:29:45.449 { 00:29:45.449 "name": null, 00:29:45.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:45.449 "is_configured": false, 00:29:45.449 "data_offset": 2048, 00:29:45.449 "data_size": 63488 00:29:45.449 }, 00:29:45.449 { 00:29:45.449 "name": "BaseBdev2", 00:29:45.449 "uuid": "4bd64e88-f4ba-5ccb-81a2-698520ab1097", 00:29:45.449 "is_configured": true, 00:29:45.449 "data_offset": 2048, 00:29:45.449 
"data_size": 63488 00:29:45.449 }, 00:29:45.449 { 00:29:45.449 "name": "BaseBdev3", 00:29:45.449 "uuid": "045c68c3-79e9-5882-bde6-8de1cfad874a", 00:29:45.449 "is_configured": true, 00:29:45.449 "data_offset": 2048, 00:29:45.449 "data_size": 63488 00:29:45.449 }, 00:29:45.449 { 00:29:45.449 "name": "BaseBdev4", 00:29:45.449 "uuid": "c402448e-8cde-5527-83b8-d8b502df57fa", 00:29:45.449 "is_configured": true, 00:29:45.449 "data_offset": 2048, 00:29:45.449 "data_size": 63488 00:29:45.449 } 00:29:45.449 ] 00:29:45.449 }' 00:29:45.449 04:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:45.449 04:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:46.015 04:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:46.015 04:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:46.015 04:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:46.015 04:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:46.015 04:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:46.015 04:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:46.015 04:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:46.273 04:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:46.273 "name": "raid_bdev1", 00:29:46.273 "uuid": "713805e3-eef1-4e68-9206-6b536ec88a9e", 00:29:46.273 "strip_size_kb": 0, 00:29:46.273 "state": "online", 00:29:46.273 "raid_level": "raid1", 00:29:46.273 "superblock": true, 00:29:46.273 "num_base_bdevs": 4, 00:29:46.273 
"num_base_bdevs_discovered": 3, 00:29:46.273 "num_base_bdevs_operational": 3, 00:29:46.273 "base_bdevs_list": [ 00:29:46.273 { 00:29:46.273 "name": null, 00:29:46.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:46.273 "is_configured": false, 00:29:46.273 "data_offset": 2048, 00:29:46.273 "data_size": 63488 00:29:46.273 }, 00:29:46.273 { 00:29:46.273 "name": "BaseBdev2", 00:29:46.273 "uuid": "4bd64e88-f4ba-5ccb-81a2-698520ab1097", 00:29:46.273 "is_configured": true, 00:29:46.273 "data_offset": 2048, 00:29:46.273 "data_size": 63488 00:29:46.273 }, 00:29:46.273 { 00:29:46.273 "name": "BaseBdev3", 00:29:46.273 "uuid": "045c68c3-79e9-5882-bde6-8de1cfad874a", 00:29:46.273 "is_configured": true, 00:29:46.273 "data_offset": 2048, 00:29:46.273 "data_size": 63488 00:29:46.273 }, 00:29:46.273 { 00:29:46.273 "name": "BaseBdev4", 00:29:46.273 "uuid": "c402448e-8cde-5527-83b8-d8b502df57fa", 00:29:46.273 "is_configured": true, 00:29:46.273 "data_offset": 2048, 00:29:46.273 "data_size": 63488 00:29:46.273 } 00:29:46.273 ] 00:29:46.273 }' 00:29:46.273 04:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:46.273 04:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:46.273 04:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:46.273 04:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:46.273 04:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:46.840 [2024-07-23 04:24:55.383869] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:46.840 [2024-07-23 04:24:55.407964] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caae10 00:29:46.840 [2024-07-23 04:24:55.410334] 
bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:46.840 04:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:29:47.774 04:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:47.774 04:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:47.774 04:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:47.774 04:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:47.774 04:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:47.774 04:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:47.774 04:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:48.033 04:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:48.033 "name": "raid_bdev1", 00:29:48.033 "uuid": "713805e3-eef1-4e68-9206-6b536ec88a9e", 00:29:48.033 "strip_size_kb": 0, 00:29:48.033 "state": "online", 00:29:48.033 "raid_level": "raid1", 00:29:48.033 "superblock": true, 00:29:48.033 "num_base_bdevs": 4, 00:29:48.033 "num_base_bdevs_discovered": 4, 00:29:48.033 "num_base_bdevs_operational": 4, 00:29:48.033 "process": { 00:29:48.033 "type": "rebuild", 00:29:48.033 "target": "spare", 00:29:48.033 "progress": { 00:29:48.033 "blocks": 24576, 00:29:48.033 "percent": 38 00:29:48.033 } 00:29:48.033 }, 00:29:48.033 "base_bdevs_list": [ 00:29:48.033 { 00:29:48.033 "name": "spare", 00:29:48.033 "uuid": "8db9f8ab-6fe4-511a-94a6-f6654a122b1a", 00:29:48.033 "is_configured": true, 00:29:48.033 "data_offset": 2048, 00:29:48.033 "data_size": 63488 00:29:48.033 }, 
00:29:48.033 { 00:29:48.033 "name": "BaseBdev2", 00:29:48.033 "uuid": "4bd64e88-f4ba-5ccb-81a2-698520ab1097", 00:29:48.033 "is_configured": true, 00:29:48.033 "data_offset": 2048, 00:29:48.033 "data_size": 63488 00:29:48.033 }, 00:29:48.033 { 00:29:48.033 "name": "BaseBdev3", 00:29:48.033 "uuid": "045c68c3-79e9-5882-bde6-8de1cfad874a", 00:29:48.033 "is_configured": true, 00:29:48.033 "data_offset": 2048, 00:29:48.033 "data_size": 63488 00:29:48.033 }, 00:29:48.033 { 00:29:48.033 "name": "BaseBdev4", 00:29:48.033 "uuid": "c402448e-8cde-5527-83b8-d8b502df57fa", 00:29:48.033 "is_configured": true, 00:29:48.033 "data_offset": 2048, 00:29:48.033 "data_size": 63488 00:29:48.033 } 00:29:48.033 ] 00:29:48.033 }' 00:29:48.033 04:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:48.033 04:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:48.033 04:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:48.033 04:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:48.033 04:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:29:48.033 04:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:29:48.033 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:29:48.033 04:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:29:48.033 04:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:29:48.033 04:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:29:48.033 04:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@694 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_remove_base_bdev BaseBdev2 00:29:48.303 [2024-07-23 04:24:56.959745] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:29:48.596 [2024-07-23 04:24:57.123614] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000caae10 00:29:48.596 04:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:29:48.597 04:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:29:48.597 04:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:48.597 04:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:48.597 04:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:48.597 04:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:48.597 04:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:48.597 04:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:48.597 04:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:48.597 04:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:48.597 "name": "raid_bdev1", 00:29:48.597 "uuid": "713805e3-eef1-4e68-9206-6b536ec88a9e", 00:29:48.597 "strip_size_kb": 0, 00:29:48.597 "state": "online", 00:29:48.597 "raid_level": "raid1", 00:29:48.597 "superblock": true, 00:29:48.597 "num_base_bdevs": 4, 00:29:48.597 "num_base_bdevs_discovered": 3, 00:29:48.597 "num_base_bdevs_operational": 3, 00:29:48.597 "process": { 00:29:48.597 "type": "rebuild", 00:29:48.597 "target": "spare", 00:29:48.597 "progress": { 00:29:48.597 "blocks": 36864, 00:29:48.597 
"percent": 58 00:29:48.597 } 00:29:48.597 }, 00:29:48.597 "base_bdevs_list": [ 00:29:48.597 { 00:29:48.597 "name": "spare", 00:29:48.597 "uuid": "8db9f8ab-6fe4-511a-94a6-f6654a122b1a", 00:29:48.597 "is_configured": true, 00:29:48.597 "data_offset": 2048, 00:29:48.597 "data_size": 63488 00:29:48.597 }, 00:29:48.597 { 00:29:48.597 "name": null, 00:29:48.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:48.597 "is_configured": false, 00:29:48.597 "data_offset": 2048, 00:29:48.597 "data_size": 63488 00:29:48.597 }, 00:29:48.597 { 00:29:48.597 "name": "BaseBdev3", 00:29:48.597 "uuid": "045c68c3-79e9-5882-bde6-8de1cfad874a", 00:29:48.597 "is_configured": true, 00:29:48.597 "data_offset": 2048, 00:29:48.597 "data_size": 63488 00:29:48.597 }, 00:29:48.597 { 00:29:48.597 "name": "BaseBdev4", 00:29:48.597 "uuid": "c402448e-8cde-5527-83b8-d8b502df57fa", 00:29:48.597 "is_configured": true, 00:29:48.597 "data_offset": 2048, 00:29:48.597 "data_size": 63488 00:29:48.597 } 00:29:48.597 ] 00:29:48.597 }' 00:29:48.869 04:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:48.869 04:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:48.869 04:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:48.869 04:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:48.869 04:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=983 00:29:48.869 04:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:48.869 04:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:48.869 04:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:48.869 04:24:57 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:48.869 04:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:48.869 04:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:48.869 04:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:48.869 04:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:48.869 04:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:48.869 "name": "raid_bdev1", 00:29:48.869 "uuid": "713805e3-eef1-4e68-9206-6b536ec88a9e", 00:29:48.869 "strip_size_kb": 0, 00:29:48.869 "state": "online", 00:29:48.869 "raid_level": "raid1", 00:29:48.869 "superblock": true, 00:29:48.869 "num_base_bdevs": 4, 00:29:48.869 "num_base_bdevs_discovered": 3, 00:29:48.869 "num_base_bdevs_operational": 3, 00:29:48.869 "process": { 00:29:48.869 "type": "rebuild", 00:29:48.869 "target": "spare", 00:29:48.869 "progress": { 00:29:48.869 "blocks": 40960, 00:29:48.869 "percent": 64 00:29:48.869 } 00:29:48.869 }, 00:29:48.869 "base_bdevs_list": [ 00:29:48.869 { 00:29:48.869 "name": "spare", 00:29:48.869 "uuid": "8db9f8ab-6fe4-511a-94a6-f6654a122b1a", 00:29:48.869 "is_configured": true, 00:29:48.869 "data_offset": 2048, 00:29:48.869 "data_size": 63488 00:29:48.869 }, 00:29:48.869 { 00:29:48.869 "name": null, 00:29:48.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:48.869 "is_configured": false, 00:29:48.869 "data_offset": 2048, 00:29:48.869 "data_size": 63488 00:29:48.869 }, 00:29:48.869 { 00:29:48.869 "name": "BaseBdev3", 00:29:48.869 "uuid": "045c68c3-79e9-5882-bde6-8de1cfad874a", 00:29:48.869 "is_configured": true, 00:29:48.869 "data_offset": 2048, 00:29:48.869 "data_size": 63488 00:29:48.869 }, 00:29:48.869 { 00:29:48.869 "name": 
"BaseBdev4", 00:29:48.869 "uuid": "c402448e-8cde-5527-83b8-d8b502df57fa", 00:29:48.869 "is_configured": true, 00:29:48.869 "data_offset": 2048, 00:29:48.869 "data_size": 63488 00:29:48.869 } 00:29:48.869 ] 00:29:48.869 }' 00:29:48.869 04:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:49.127 04:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:49.127 04:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:49.127 04:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:49.127 04:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:29:50.060 [2024-07-23 04:24:58.635798] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:29:50.060 [2024-07-23 04:24:58.635874] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:29:50.060 [2024-07-23 04:24:58.635985] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:50.060 04:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:50.060 04:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:50.060 04:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:50.060 04:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:50.060 04:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:50.060 04:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:50.060 04:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:29:50.060 04:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:50.318 04:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:50.318 "name": "raid_bdev1", 00:29:50.318 "uuid": "713805e3-eef1-4e68-9206-6b536ec88a9e", 00:29:50.318 "strip_size_kb": 0, 00:29:50.318 "state": "online", 00:29:50.318 "raid_level": "raid1", 00:29:50.318 "superblock": true, 00:29:50.318 "num_base_bdevs": 4, 00:29:50.319 "num_base_bdevs_discovered": 3, 00:29:50.319 "num_base_bdevs_operational": 3, 00:29:50.319 "base_bdevs_list": [ 00:29:50.319 { 00:29:50.319 "name": "spare", 00:29:50.319 "uuid": "8db9f8ab-6fe4-511a-94a6-f6654a122b1a", 00:29:50.319 "is_configured": true, 00:29:50.319 "data_offset": 2048, 00:29:50.319 "data_size": 63488 00:29:50.319 }, 00:29:50.319 { 00:29:50.319 "name": null, 00:29:50.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:50.319 "is_configured": false, 00:29:50.319 "data_offset": 2048, 00:29:50.319 "data_size": 63488 00:29:50.319 }, 00:29:50.319 { 00:29:50.319 "name": "BaseBdev3", 00:29:50.319 "uuid": "045c68c3-79e9-5882-bde6-8de1cfad874a", 00:29:50.319 "is_configured": true, 00:29:50.319 "data_offset": 2048, 00:29:50.319 "data_size": 63488 00:29:50.319 }, 00:29:50.319 { 00:29:50.319 "name": "BaseBdev4", 00:29:50.319 "uuid": "c402448e-8cde-5527-83b8-d8b502df57fa", 00:29:50.319 "is_configured": true, 00:29:50.319 "data_offset": 2048, 00:29:50.319 "data_size": 63488 00:29:50.319 } 00:29:50.319 ] 00:29:50.319 }' 00:29:50.319 04:24:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:50.319 04:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:29:50.319 04:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:50.319 04:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # 
[[ none == \s\p\a\r\e ]] 00:29:50.319 04:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:29:50.319 04:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:50.319 04:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:50.319 04:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:50.319 04:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:50.319 04:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:50.319 04:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:50.319 04:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:50.577 04:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:50.577 "name": "raid_bdev1", 00:29:50.577 "uuid": "713805e3-eef1-4e68-9206-6b536ec88a9e", 00:29:50.577 "strip_size_kb": 0, 00:29:50.577 "state": "online", 00:29:50.577 "raid_level": "raid1", 00:29:50.577 "superblock": true, 00:29:50.577 "num_base_bdevs": 4, 00:29:50.577 "num_base_bdevs_discovered": 3, 00:29:50.577 "num_base_bdevs_operational": 3, 00:29:50.577 "base_bdevs_list": [ 00:29:50.577 { 00:29:50.577 "name": "spare", 00:29:50.577 "uuid": "8db9f8ab-6fe4-511a-94a6-f6654a122b1a", 00:29:50.577 "is_configured": true, 00:29:50.577 "data_offset": 2048, 00:29:50.577 "data_size": 63488 00:29:50.577 }, 00:29:50.577 { 00:29:50.577 "name": null, 00:29:50.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:50.577 "is_configured": false, 00:29:50.577 "data_offset": 2048, 00:29:50.577 "data_size": 63488 00:29:50.577 }, 00:29:50.577 { 00:29:50.577 "name": "BaseBdev3", 00:29:50.577 "uuid": 
"045c68c3-79e9-5882-bde6-8de1cfad874a", 00:29:50.577 "is_configured": true, 00:29:50.577 "data_offset": 2048, 00:29:50.577 "data_size": 63488 00:29:50.577 }, 00:29:50.577 { 00:29:50.577 "name": "BaseBdev4", 00:29:50.577 "uuid": "c402448e-8cde-5527-83b8-d8b502df57fa", 00:29:50.577 "is_configured": true, 00:29:50.577 "data_offset": 2048, 00:29:50.577 "data_size": 63488 00:29:50.577 } 00:29:50.577 ] 00:29:50.577 }' 00:29:50.577 04:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:50.577 04:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:50.577 04:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:50.835 04:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:50.835 04:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:50.835 04:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:50.835 04:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:50.835 04:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:50.835 04:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:50.835 04:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:50.835 04:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:50.835 04:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:50.835 04:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:50.835 04:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:50.835 04:24:59 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:50.835 04:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:50.835 04:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:50.835 "name": "raid_bdev1", 00:29:50.835 "uuid": "713805e3-eef1-4e68-9206-6b536ec88a9e", 00:29:50.835 "strip_size_kb": 0, 00:29:50.835 "state": "online", 00:29:50.835 "raid_level": "raid1", 00:29:50.835 "superblock": true, 00:29:50.835 "num_base_bdevs": 4, 00:29:50.835 "num_base_bdevs_discovered": 3, 00:29:50.835 "num_base_bdevs_operational": 3, 00:29:50.835 "base_bdevs_list": [ 00:29:50.835 { 00:29:50.835 "name": "spare", 00:29:50.835 "uuid": "8db9f8ab-6fe4-511a-94a6-f6654a122b1a", 00:29:50.835 "is_configured": true, 00:29:50.835 "data_offset": 2048, 00:29:50.835 "data_size": 63488 00:29:50.835 }, 00:29:50.836 { 00:29:50.836 "name": null, 00:29:50.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:50.836 "is_configured": false, 00:29:50.836 "data_offset": 2048, 00:29:50.836 "data_size": 63488 00:29:50.836 }, 00:29:50.836 { 00:29:50.836 "name": "BaseBdev3", 00:29:50.836 "uuid": "045c68c3-79e9-5882-bde6-8de1cfad874a", 00:29:50.836 "is_configured": true, 00:29:50.836 "data_offset": 2048, 00:29:50.836 "data_size": 63488 00:29:50.836 }, 00:29:50.836 { 00:29:50.836 "name": "BaseBdev4", 00:29:50.836 "uuid": "c402448e-8cde-5527-83b8-d8b502df57fa", 00:29:50.836 "is_configured": true, 00:29:50.836 "data_offset": 2048, 00:29:50.836 "data_size": 63488 00:29:50.836 } 00:29:50.836 ] 00:29:50.836 }' 00:29:50.836 04:24:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:50.836 04:24:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:51.402 04:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:51.660 [2024-07-23 04:25:00.392611] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:51.660 [2024-07-23 04:25:00.392648] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:51.660 [2024-07-23 04:25:00.392736] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:51.660 [2024-07-23 04:25:00.392830] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:51.660 [2024-07-23 04:25:00.392847] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000043280 name raid_bdev1, state offline 00:29:51.660 04:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:51.660 04:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:29:51.918 04:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:29:51.918 04:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:29:51.918 04:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:29:51.918 04:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:29:51.918 04:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:51.918 04:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:29:51.918 04:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:51.918 04:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:29:51.918 04:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:51.918 04:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:29:51.918 04:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:51.918 04:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:51.918 04:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:29:52.177 /dev/nbd0 00:29:52.177 04:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:52.177 04:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:52.177 04:25:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:29:52.177 04:25:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:29:52.177 04:25:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:52.177 04:25:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:52.177 04:25:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:29:52.177 04:25:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:29:52.177 04:25:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:52.177 04:25:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:52.177 04:25:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:52.177 1+0 records in 00:29:52.177 1+0 records out 00:29:52.177 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248832 s, 
16.5 MB/s 00:29:52.177 04:25:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:52.177 04:25:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:29:52.177 04:25:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:52.177 04:25:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:52.177 04:25:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:29:52.177 04:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:52.177 04:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:52.177 04:25:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:29:52.435 /dev/nbd1 00:29:52.435 04:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:52.435 04:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:52.435 04:25:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:29:52.435 04:25:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:29:52.435 04:25:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:52.435 04:25:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:52.435 04:25:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:29:52.435 04:25:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:29:52.435 04:25:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # 
(( i = 1 )) 00:29:52.435 04:25:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:52.435 04:25:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:52.435 1+0 records in 00:29:52.435 1+0 records out 00:29:52.435 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224169 s, 18.3 MB/s 00:29:52.435 04:25:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:52.435 04:25:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:29:52.435 04:25:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:52.435 04:25:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:52.435 04:25:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:29:52.435 04:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:52.435 04:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:52.435 04:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:29:52.693 04:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:29:52.693 04:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:52.693 04:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:52.693 04:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:52.693 04:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:29:52.693 04:25:01 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:52.693 04:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:52.951 04:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:52.951 04:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:52.951 04:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:52.951 04:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:52.951 04:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:52.951 04:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:52.951 04:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:29:52.951 04:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:29:52.951 04:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:52.951 04:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:29:53.210 04:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:53.210 04:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:53.210 04:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:53.210 04:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:53.210 04:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:53.210 04:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 
/proc/partitions 00:29:53.210 04:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:29:53.210 04:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:29:53.210 04:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:29:53.210 04:25:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:53.468 04:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:53.726 [2024-07-23 04:25:02.285006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:53.726 [2024-07-23 04:25:02.285061] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:53.726 [2024-07-23 04:25:02.285090] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000044a80 00:29:53.726 [2024-07-23 04:25:02.285106] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:53.726 [2024-07-23 04:25:02.287881] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:53.726 [2024-07-23 04:25:02.287914] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:53.726 [2024-07-23 04:25:02.288013] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:29:53.726 [2024-07-23 04:25:02.288075] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:53.726 [2024-07-23 04:25:02.288290] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:53.726 [2024-07-23 04:25:02.288401] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:53.726 spare 00:29:53.726 04:25:02 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:53.726 04:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:53.726 04:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:53.726 04:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:53.726 04:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:53.726 04:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:53.726 04:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:53.726 04:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:53.726 04:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:53.726 04:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:53.726 04:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:53.727 04:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:53.727 [2024-07-23 04:25:02.388740] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000045080 00:29:53.727 [2024-07-23 04:25:02.388764] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:53.727 [2024-07-23 04:25:02.389105] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc94c0 00:29:53.727 [2024-07-23 04:25:02.389356] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000045080 00:29:53.727 [2024-07-23 04:25:02.389375] bdev_raid.c:1751:raid_bdev_configure_cont: 
*DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000045080 00:29:53.727 [2024-07-23 04:25:02.389552] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:53.985 04:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:53.985 "name": "raid_bdev1", 00:29:53.985 "uuid": "713805e3-eef1-4e68-9206-6b536ec88a9e", 00:29:53.985 "strip_size_kb": 0, 00:29:53.985 "state": "online", 00:29:53.985 "raid_level": "raid1", 00:29:53.985 "superblock": true, 00:29:53.985 "num_base_bdevs": 4, 00:29:53.985 "num_base_bdevs_discovered": 3, 00:29:53.985 "num_base_bdevs_operational": 3, 00:29:53.985 "base_bdevs_list": [ 00:29:53.985 { 00:29:53.985 "name": "spare", 00:29:53.985 "uuid": "8db9f8ab-6fe4-511a-94a6-f6654a122b1a", 00:29:53.985 "is_configured": true, 00:29:53.985 "data_offset": 2048, 00:29:53.985 "data_size": 63488 00:29:53.985 }, 00:29:53.985 { 00:29:53.985 "name": null, 00:29:53.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:53.985 "is_configured": false, 00:29:53.985 "data_offset": 2048, 00:29:53.985 "data_size": 63488 00:29:53.985 }, 00:29:53.985 { 00:29:53.985 "name": "BaseBdev3", 00:29:53.985 "uuid": "045c68c3-79e9-5882-bde6-8de1cfad874a", 00:29:53.985 "is_configured": true, 00:29:53.985 "data_offset": 2048, 00:29:53.985 "data_size": 63488 00:29:53.985 }, 00:29:53.985 { 00:29:53.985 "name": "BaseBdev4", 00:29:53.985 "uuid": "c402448e-8cde-5527-83b8-d8b502df57fa", 00:29:53.985 "is_configured": true, 00:29:53.985 "data_offset": 2048, 00:29:53.985 "data_size": 63488 00:29:53.985 } 00:29:53.985 ] 00:29:53.985 }' 00:29:53.985 04:25:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:53.985 04:25:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:54.551 04:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:54.551 04:25:03 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:54.551 04:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:54.551 04:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:54.551 04:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:54.551 04:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:54.551 04:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:54.551 04:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:54.551 "name": "raid_bdev1", 00:29:54.551 "uuid": "713805e3-eef1-4e68-9206-6b536ec88a9e", 00:29:54.551 "strip_size_kb": 0, 00:29:54.551 "state": "online", 00:29:54.551 "raid_level": "raid1", 00:29:54.551 "superblock": true, 00:29:54.551 "num_base_bdevs": 4, 00:29:54.551 "num_base_bdevs_discovered": 3, 00:29:54.551 "num_base_bdevs_operational": 3, 00:29:54.551 "base_bdevs_list": [ 00:29:54.551 { 00:29:54.551 "name": "spare", 00:29:54.551 "uuid": "8db9f8ab-6fe4-511a-94a6-f6654a122b1a", 00:29:54.551 "is_configured": true, 00:29:54.551 "data_offset": 2048, 00:29:54.551 "data_size": 63488 00:29:54.551 }, 00:29:54.551 { 00:29:54.551 "name": null, 00:29:54.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:54.551 "is_configured": false, 00:29:54.551 "data_offset": 2048, 00:29:54.551 "data_size": 63488 00:29:54.551 }, 00:29:54.551 { 00:29:54.551 "name": "BaseBdev3", 00:29:54.551 "uuid": "045c68c3-79e9-5882-bde6-8de1cfad874a", 00:29:54.551 "is_configured": true, 00:29:54.551 "data_offset": 2048, 00:29:54.551 "data_size": 63488 00:29:54.551 }, 00:29:54.551 { 00:29:54.551 "name": "BaseBdev4", 00:29:54.551 "uuid": "c402448e-8cde-5527-83b8-d8b502df57fa", 00:29:54.551 "is_configured": 
true, 00:29:54.551 "data_offset": 2048, 00:29:54.552 "data_size": 63488 00:29:54.552 } 00:29:54.552 ] 00:29:54.552 }' 00:29:54.552 04:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:54.810 04:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:54.810 04:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:54.810 04:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:54.810 04:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:54.810 04:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:29:55.068 04:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:29:55.068 04:25:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:55.634 [2024-07-23 04:25:04.118327] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:55.634 04:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:55.634 04:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:55.634 04:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:55.634 04:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:55.634 04:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:55.634 04:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:55.634 
04:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:55.634 04:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:55.634 04:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:55.634 04:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:55.634 04:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:55.634 04:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:55.634 04:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:55.634 "name": "raid_bdev1", 00:29:55.634 "uuid": "713805e3-eef1-4e68-9206-6b536ec88a9e", 00:29:55.634 "strip_size_kb": 0, 00:29:55.634 "state": "online", 00:29:55.634 "raid_level": "raid1", 00:29:55.634 "superblock": true, 00:29:55.634 "num_base_bdevs": 4, 00:29:55.634 "num_base_bdevs_discovered": 2, 00:29:55.634 "num_base_bdevs_operational": 2, 00:29:55.634 "base_bdevs_list": [ 00:29:55.634 { 00:29:55.634 "name": null, 00:29:55.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:55.634 "is_configured": false, 00:29:55.634 "data_offset": 2048, 00:29:55.634 "data_size": 63488 00:29:55.634 }, 00:29:55.634 { 00:29:55.634 "name": null, 00:29:55.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:55.634 "is_configured": false, 00:29:55.634 "data_offset": 2048, 00:29:55.634 "data_size": 63488 00:29:55.634 }, 00:29:55.634 { 00:29:55.634 "name": "BaseBdev3", 00:29:55.634 "uuid": "045c68c3-79e9-5882-bde6-8de1cfad874a", 00:29:55.634 "is_configured": true, 00:29:55.634 "data_offset": 2048, 00:29:55.634 "data_size": 63488 00:29:55.634 }, 00:29:55.634 { 00:29:55.634 "name": "BaseBdev4", 00:29:55.634 "uuid": "c402448e-8cde-5527-83b8-d8b502df57fa", 
00:29:55.634 "is_configured": true, 00:29:55.634 "data_offset": 2048, 00:29:55.634 "data_size": 63488 00:29:55.634 } 00:29:55.634 ] 00:29:55.634 }' 00:29:55.634 04:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:55.634 04:25:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:56.201 04:25:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:56.459 [2024-07-23 04:25:05.165159] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:56.459 [2024-07-23 04:25:05.165369] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:29:56.459 [2024-07-23 04:25:05.165391] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:29:56.459 [2024-07-23 04:25:05.165431] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:56.459 [2024-07-23 04:25:05.186839] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc9590 00:29:56.459 [2024-07-23 04:25:05.189177] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:56.459 04:25:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:29:57.834 04:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:57.834 04:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:57.834 04:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:57.834 04:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:57.834 04:25:06 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:57.834 04:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:57.834 04:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:57.834 04:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:57.834 "name": "raid_bdev1", 00:29:57.834 "uuid": "713805e3-eef1-4e68-9206-6b536ec88a9e", 00:29:57.834 "strip_size_kb": 0, 00:29:57.834 "state": "online", 00:29:57.834 "raid_level": "raid1", 00:29:57.834 "superblock": true, 00:29:57.834 "num_base_bdevs": 4, 00:29:57.834 "num_base_bdevs_discovered": 3, 00:29:57.834 "num_base_bdevs_operational": 3, 00:29:57.834 "process": { 00:29:57.834 "type": "rebuild", 00:29:57.834 "target": "spare", 00:29:57.834 "progress": { 00:29:57.834 "blocks": 24576, 00:29:57.834 "percent": 38 00:29:57.834 } 00:29:57.834 }, 00:29:57.834 "base_bdevs_list": [ 00:29:57.834 { 00:29:57.834 "name": "spare", 00:29:57.834 "uuid": "8db9f8ab-6fe4-511a-94a6-f6654a122b1a", 00:29:57.834 "is_configured": true, 00:29:57.834 "data_offset": 2048, 00:29:57.834 "data_size": 63488 00:29:57.834 }, 00:29:57.834 { 00:29:57.834 "name": null, 00:29:57.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:57.834 "is_configured": false, 00:29:57.834 "data_offset": 2048, 00:29:57.834 "data_size": 63488 00:29:57.834 }, 00:29:57.834 { 00:29:57.834 "name": "BaseBdev3", 00:29:57.834 "uuid": "045c68c3-79e9-5882-bde6-8de1cfad874a", 00:29:57.834 "is_configured": true, 00:29:57.834 "data_offset": 2048, 00:29:57.834 "data_size": 63488 00:29:57.834 }, 00:29:57.834 { 00:29:57.834 "name": "BaseBdev4", 00:29:57.834 "uuid": "c402448e-8cde-5527-83b8-d8b502df57fa", 00:29:57.834 "is_configured": true, 00:29:57.834 "data_offset": 2048, 00:29:57.834 "data_size": 63488 00:29:57.834 } 00:29:57.834 ] 00:29:57.834 }' 
00:29:57.834 04:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:57.834 04:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:57.834 04:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:57.834 04:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:57.834 04:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:58.093 [2024-07-23 04:25:06.743127] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:58.093 [2024-07-23 04:25:06.802236] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:58.093 [2024-07-23 04:25:06.802295] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:58.093 [2024-07-23 04:25:06.802320] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:58.093 [2024-07-23 04:25:06.802332] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:58.093 04:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:58.093 04:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:58.093 04:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:58.093 04:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:58.093 04:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:58.093 04:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:58.093 04:25:06 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:58.093 04:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:58.093 04:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:58.093 04:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:58.093 04:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:58.093 04:25:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:58.352 04:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:58.352 "name": "raid_bdev1", 00:29:58.352 "uuid": "713805e3-eef1-4e68-9206-6b536ec88a9e", 00:29:58.352 "strip_size_kb": 0, 00:29:58.352 "state": "online", 00:29:58.352 "raid_level": "raid1", 00:29:58.352 "superblock": true, 00:29:58.352 "num_base_bdevs": 4, 00:29:58.352 "num_base_bdevs_discovered": 2, 00:29:58.352 "num_base_bdevs_operational": 2, 00:29:58.352 "base_bdevs_list": [ 00:29:58.352 { 00:29:58.352 "name": null, 00:29:58.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:58.352 "is_configured": false, 00:29:58.352 "data_offset": 2048, 00:29:58.352 "data_size": 63488 00:29:58.352 }, 00:29:58.352 { 00:29:58.352 "name": null, 00:29:58.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:58.352 "is_configured": false, 00:29:58.352 "data_offset": 2048, 00:29:58.352 "data_size": 63488 00:29:58.352 }, 00:29:58.352 { 00:29:58.352 "name": "BaseBdev3", 00:29:58.352 "uuid": "045c68c3-79e9-5882-bde6-8de1cfad874a", 00:29:58.352 "is_configured": true, 00:29:58.352 "data_offset": 2048, 00:29:58.352 "data_size": 63488 00:29:58.352 }, 00:29:58.352 { 00:29:58.352 "name": "BaseBdev4", 00:29:58.352 "uuid": "c402448e-8cde-5527-83b8-d8b502df57fa", 
00:29:58.352 "is_configured": true, 00:29:58.352 "data_offset": 2048, 00:29:58.352 "data_size": 63488 00:29:58.352 } 00:29:58.352 ] 00:29:58.352 }' 00:29:58.352 04:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:58.352 04:25:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:58.920 04:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:59.179 [2024-07-23 04:25:07.853149] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:59.179 [2024-07-23 04:25:07.853208] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:59.179 [2024-07-23 04:25:07.853237] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000045680 00:29:59.179 [2024-07-23 04:25:07.853252] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:59.179 [2024-07-23 04:25:07.853855] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:59.179 [2024-07-23 04:25:07.853880] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:59.179 [2024-07-23 04:25:07.853985] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:29:59.179 [2024-07-23 04:25:07.854009] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:29:59.179 [2024-07-23 04:25:07.854027] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:29:59.179 [2024-07-23 04:25:07.854058] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:59.179 [2024-07-23 04:25:07.875306] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc9660 00:29:59.179 spare 00:29:59.179 [2024-07-23 04:25:07.877661] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:59.179 04:25:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:30:00.116 04:25:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:00.116 04:25:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:00.116 04:25:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:00.116 04:25:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:00.116 04:25:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:00.116 04:25:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:00.375 04:25:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:00.375 04:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:00.375 "name": "raid_bdev1", 00:30:00.375 "uuid": "713805e3-eef1-4e68-9206-6b536ec88a9e", 00:30:00.375 "strip_size_kb": 0, 00:30:00.375 "state": "online", 00:30:00.375 "raid_level": "raid1", 00:30:00.375 "superblock": true, 00:30:00.375 "num_base_bdevs": 4, 00:30:00.375 "num_base_bdevs_discovered": 3, 00:30:00.375 "num_base_bdevs_operational": 3, 00:30:00.376 "process": { 00:30:00.376 "type": "rebuild", 00:30:00.376 "target": "spare", 00:30:00.376 "progress": { 00:30:00.376 "blocks": 22528, 00:30:00.376 
"percent": 35 00:30:00.376 } 00:30:00.376 }, 00:30:00.376 "base_bdevs_list": [ 00:30:00.376 { 00:30:00.376 "name": "spare", 00:30:00.376 "uuid": "8db9f8ab-6fe4-511a-94a6-f6654a122b1a", 00:30:00.376 "is_configured": true, 00:30:00.376 "data_offset": 2048, 00:30:00.376 "data_size": 63488 00:30:00.376 }, 00:30:00.376 { 00:30:00.376 "name": null, 00:30:00.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:00.376 "is_configured": false, 00:30:00.376 "data_offset": 2048, 00:30:00.376 "data_size": 63488 00:30:00.376 }, 00:30:00.376 { 00:30:00.376 "name": "BaseBdev3", 00:30:00.376 "uuid": "045c68c3-79e9-5882-bde6-8de1cfad874a", 00:30:00.376 "is_configured": true, 00:30:00.376 "data_offset": 2048, 00:30:00.376 "data_size": 63488 00:30:00.376 }, 00:30:00.376 { 00:30:00.376 "name": "BaseBdev4", 00:30:00.376 "uuid": "c402448e-8cde-5527-83b8-d8b502df57fa", 00:30:00.376 "is_configured": true, 00:30:00.376 "data_offset": 2048, 00:30:00.376 "data_size": 63488 00:30:00.376 } 00:30:00.376 ] 00:30:00.376 }' 00:30:00.376 04:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:00.376 04:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:00.376 04:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:00.376 04:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:00.376 04:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:30:00.635 [2024-07-23 04:25:09.366985] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:00.635 [2024-07-23 04:25:09.389853] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:00.635 [2024-07-23 04:25:09.389914] bdev_raid.c: 
343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:00.635 [2024-07-23 04:25:09.389936] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:00.635 [2024-07-23 04:25:09.389950] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:00.894 04:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:00.894 04:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:00.894 04:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:00.894 04:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:00.894 04:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:00.894 04:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:00.894 04:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:00.894 04:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:00.894 04:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:00.894 04:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:00.894 04:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:00.894 04:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:00.894 04:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:00.894 "name": "raid_bdev1", 00:30:00.894 "uuid": "713805e3-eef1-4e68-9206-6b536ec88a9e", 00:30:00.894 "strip_size_kb": 0, 00:30:00.894 "state": 
"online", 00:30:00.894 "raid_level": "raid1", 00:30:00.894 "superblock": true, 00:30:00.894 "num_base_bdevs": 4, 00:30:00.894 "num_base_bdevs_discovered": 2, 00:30:00.894 "num_base_bdevs_operational": 2, 00:30:00.894 "base_bdevs_list": [ 00:30:00.894 { 00:30:00.894 "name": null, 00:30:00.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:00.894 "is_configured": false, 00:30:00.894 "data_offset": 2048, 00:30:00.894 "data_size": 63488 00:30:00.894 }, 00:30:00.894 { 00:30:00.894 "name": null, 00:30:00.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:00.894 "is_configured": false, 00:30:00.894 "data_offset": 2048, 00:30:00.894 "data_size": 63488 00:30:00.894 }, 00:30:00.894 { 00:30:00.894 "name": "BaseBdev3", 00:30:00.894 "uuid": "045c68c3-79e9-5882-bde6-8de1cfad874a", 00:30:00.894 "is_configured": true, 00:30:00.894 "data_offset": 2048, 00:30:00.894 "data_size": 63488 00:30:00.894 }, 00:30:00.894 { 00:30:00.894 "name": "BaseBdev4", 00:30:00.894 "uuid": "c402448e-8cde-5527-83b8-d8b502df57fa", 00:30:00.894 "is_configured": true, 00:30:00.894 "data_offset": 2048, 00:30:00.894 "data_size": 63488 00:30:00.894 } 00:30:00.894 ] 00:30:00.894 }' 00:30:00.894 04:25:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:00.894 04:25:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:01.502 04:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:01.502 04:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:01.502 04:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:01.502 04:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:01.502 04:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:01.502 04:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:01.502 04:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:01.761 04:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:01.761 "name": "raid_bdev1", 00:30:01.761 "uuid": "713805e3-eef1-4e68-9206-6b536ec88a9e", 00:30:01.761 "strip_size_kb": 0, 00:30:01.761 "state": "online", 00:30:01.761 "raid_level": "raid1", 00:30:01.761 "superblock": true, 00:30:01.761 "num_base_bdevs": 4, 00:30:01.761 "num_base_bdevs_discovered": 2, 00:30:01.761 "num_base_bdevs_operational": 2, 00:30:01.761 "base_bdevs_list": [ 00:30:01.761 { 00:30:01.761 "name": null, 00:30:01.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:01.761 "is_configured": false, 00:30:01.761 "data_offset": 2048, 00:30:01.761 "data_size": 63488 00:30:01.761 }, 00:30:01.761 { 00:30:01.761 "name": null, 00:30:01.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:01.761 "is_configured": false, 00:30:01.761 "data_offset": 2048, 00:30:01.761 "data_size": 63488 00:30:01.761 }, 00:30:01.761 { 00:30:01.761 "name": "BaseBdev3", 00:30:01.761 "uuid": "045c68c3-79e9-5882-bde6-8de1cfad874a", 00:30:01.761 "is_configured": true, 00:30:01.761 "data_offset": 2048, 00:30:01.761 "data_size": 63488 00:30:01.761 }, 00:30:01.761 { 00:30:01.761 "name": "BaseBdev4", 00:30:01.761 "uuid": "c402448e-8cde-5527-83b8-d8b502df57fa", 00:30:01.761 "is_configured": true, 00:30:01.761 "data_offset": 2048, 00:30:01.761 "data_size": 63488 00:30:01.761 } 00:30:01.761 ] 00:30:01.761 }' 00:30:01.761 04:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:01.761 04:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:01.761 04:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // 
"none"' 00:30:01.761 04:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:01.762 04:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:30:02.021 04:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:02.279 [2024-07-23 04:25:10.905474] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:02.279 [2024-07-23 04:25:10.905545] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:02.279 [2024-07-23 04:25:10.905573] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000045c80 00:30:02.279 [2024-07-23 04:25:10.905595] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:02.279 [2024-07-23 04:25:10.906168] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:02.279 [2024-07-23 04:25:10.906199] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:02.279 [2024-07-23 04:25:10.906294] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:30:02.279 [2024-07-23 04:25:10.906316] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:30:02.279 [2024-07-23 04:25:10.906330] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:30:02.279 BaseBdev1 00:30:02.279 04:25:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:30:03.216 04:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 
00:30:03.216 04:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:03.216 04:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:03.216 04:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:03.216 04:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:03.216 04:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:03.216 04:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:03.216 04:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:03.216 04:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:03.216 04:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:03.216 04:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:03.216 04:25:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:03.476 04:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:03.476 "name": "raid_bdev1", 00:30:03.476 "uuid": "713805e3-eef1-4e68-9206-6b536ec88a9e", 00:30:03.476 "strip_size_kb": 0, 00:30:03.476 "state": "online", 00:30:03.476 "raid_level": "raid1", 00:30:03.476 "superblock": true, 00:30:03.476 "num_base_bdevs": 4, 00:30:03.476 "num_base_bdevs_discovered": 2, 00:30:03.476 "num_base_bdevs_operational": 2, 00:30:03.476 "base_bdevs_list": [ 00:30:03.476 { 00:30:03.476 "name": null, 00:30:03.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:03.476 "is_configured": false, 00:30:03.476 "data_offset": 2048, 00:30:03.476 "data_size": 63488 
00:30:03.476 }, 00:30:03.476 { 00:30:03.476 "name": null, 00:30:03.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:03.476 "is_configured": false, 00:30:03.476 "data_offset": 2048, 00:30:03.476 "data_size": 63488 00:30:03.476 }, 00:30:03.476 { 00:30:03.476 "name": "BaseBdev3", 00:30:03.476 "uuid": "045c68c3-79e9-5882-bde6-8de1cfad874a", 00:30:03.476 "is_configured": true, 00:30:03.476 "data_offset": 2048, 00:30:03.476 "data_size": 63488 00:30:03.476 }, 00:30:03.476 { 00:30:03.476 "name": "BaseBdev4", 00:30:03.476 "uuid": "c402448e-8cde-5527-83b8-d8b502df57fa", 00:30:03.476 "is_configured": true, 00:30:03.476 "data_offset": 2048, 00:30:03.476 "data_size": 63488 00:30:03.476 } 00:30:03.476 ] 00:30:03.476 }' 00:30:03.476 04:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:03.476 04:25:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:04.044 04:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:04.044 04:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:04.044 04:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:04.044 04:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:04.044 04:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:04.044 04:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:04.044 04:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:04.303 04:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:04.303 "name": "raid_bdev1", 00:30:04.303 "uuid": "713805e3-eef1-4e68-9206-6b536ec88a9e", 
00:30:04.303 "strip_size_kb": 0, 00:30:04.303 "state": "online", 00:30:04.303 "raid_level": "raid1", 00:30:04.303 "superblock": true, 00:30:04.303 "num_base_bdevs": 4, 00:30:04.303 "num_base_bdevs_discovered": 2, 00:30:04.303 "num_base_bdevs_operational": 2, 00:30:04.303 "base_bdevs_list": [ 00:30:04.303 { 00:30:04.303 "name": null, 00:30:04.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:04.303 "is_configured": false, 00:30:04.303 "data_offset": 2048, 00:30:04.303 "data_size": 63488 00:30:04.303 }, 00:30:04.303 { 00:30:04.303 "name": null, 00:30:04.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:04.303 "is_configured": false, 00:30:04.303 "data_offset": 2048, 00:30:04.303 "data_size": 63488 00:30:04.303 }, 00:30:04.303 { 00:30:04.303 "name": "BaseBdev3", 00:30:04.303 "uuid": "045c68c3-79e9-5882-bde6-8de1cfad874a", 00:30:04.303 "is_configured": true, 00:30:04.303 "data_offset": 2048, 00:30:04.303 "data_size": 63488 00:30:04.303 }, 00:30:04.303 { 00:30:04.303 "name": "BaseBdev4", 00:30:04.303 "uuid": "c402448e-8cde-5527-83b8-d8b502df57fa", 00:30:04.303 "is_configured": true, 00:30:04.303 "data_offset": 2048, 00:30:04.303 "data_size": 63488 00:30:04.303 } 00:30:04.303 ] 00:30:04.303 }' 00:30:04.303 04:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:04.303 04:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:04.303 04:25:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:04.303 04:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:04.303 04:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:04.303 04:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@648 -- # local es=0 
00:30:04.303 04:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:04.303 04:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:30:04.303 04:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:04.303 04:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:30:04.303 04:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:04.303 04:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:30:04.303 04:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:04.303 04:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:30:04.303 04:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:30:04.303 04:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:04.563 [2024-07-23 04:25:13.239875] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:04.563 [2024-07-23 04:25:13.240041] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:30:04.563 [2024-07-23 04:25:13.240072] 
bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:30:04.563 request: 00:30:04.563 { 00:30:04.563 "base_bdev": "BaseBdev1", 00:30:04.563 "raid_bdev": "raid_bdev1", 00:30:04.563 "method": "bdev_raid_add_base_bdev", 00:30:04.563 "req_id": 1 00:30:04.563 } 00:30:04.563 Got JSON-RPC error response 00:30:04.563 response: 00:30:04.563 { 00:30:04.563 "code": -22, 00:30:04.563 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:30:04.563 } 00:30:04.563 04:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # es=1 00:30:04.563 04:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:04.563 04:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:04.563 04:25:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:04.563 04:25:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:30:05.498 04:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:05.498 04:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:05.498 04:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:05.498 04:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:05.498 04:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:05.498 04:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:05.498 04:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:05.498 04:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:05.498 04:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:30:05.498 04:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:05.498 04:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:05.498 04:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:05.757 04:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:05.757 "name": "raid_bdev1", 00:30:05.757 "uuid": "713805e3-eef1-4e68-9206-6b536ec88a9e", 00:30:05.757 "strip_size_kb": 0, 00:30:05.757 "state": "online", 00:30:05.757 "raid_level": "raid1", 00:30:05.757 "superblock": true, 00:30:05.757 "num_base_bdevs": 4, 00:30:05.757 "num_base_bdevs_discovered": 2, 00:30:05.757 "num_base_bdevs_operational": 2, 00:30:05.757 "base_bdevs_list": [ 00:30:05.757 { 00:30:05.757 "name": null, 00:30:05.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:05.757 "is_configured": false, 00:30:05.757 "data_offset": 2048, 00:30:05.757 "data_size": 63488 00:30:05.757 }, 00:30:05.757 { 00:30:05.757 "name": null, 00:30:05.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:05.757 "is_configured": false, 00:30:05.757 "data_offset": 2048, 00:30:05.757 "data_size": 63488 00:30:05.757 }, 00:30:05.757 { 00:30:05.757 "name": "BaseBdev3", 00:30:05.757 "uuid": "045c68c3-79e9-5882-bde6-8de1cfad874a", 00:30:05.757 "is_configured": true, 00:30:05.757 "data_offset": 2048, 00:30:05.757 "data_size": 63488 00:30:05.758 }, 00:30:05.758 { 00:30:05.758 "name": "BaseBdev4", 00:30:05.758 "uuid": "c402448e-8cde-5527-83b8-d8b502df57fa", 00:30:05.758 "is_configured": true, 00:30:05.758 "data_offset": 2048, 00:30:05.758 "data_size": 63488 00:30:05.758 } 00:30:05.758 ] 00:30:05.758 }' 00:30:05.758 04:25:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:05.758 04:25:14 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:06.326 04:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:06.326 04:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:06.326 04:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:06.326 04:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:06.326 04:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:06.326 04:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:06.326 04:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:06.585 04:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:06.585 "name": "raid_bdev1", 00:30:06.585 "uuid": "713805e3-eef1-4e68-9206-6b536ec88a9e", 00:30:06.585 "strip_size_kb": 0, 00:30:06.585 "state": "online", 00:30:06.585 "raid_level": "raid1", 00:30:06.585 "superblock": true, 00:30:06.585 "num_base_bdevs": 4, 00:30:06.585 "num_base_bdevs_discovered": 2, 00:30:06.585 "num_base_bdevs_operational": 2, 00:30:06.585 "base_bdevs_list": [ 00:30:06.585 { 00:30:06.585 "name": null, 00:30:06.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:06.585 "is_configured": false, 00:30:06.585 "data_offset": 2048, 00:30:06.585 "data_size": 63488 00:30:06.585 }, 00:30:06.585 { 00:30:06.585 "name": null, 00:30:06.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:06.585 "is_configured": false, 00:30:06.585 "data_offset": 2048, 00:30:06.585 "data_size": 63488 00:30:06.585 }, 00:30:06.585 { 00:30:06.585 "name": "BaseBdev3", 00:30:06.585 "uuid": "045c68c3-79e9-5882-bde6-8de1cfad874a", 
00:30:06.585 "is_configured": true, 00:30:06.585 "data_offset": 2048, 00:30:06.585 "data_size": 63488 00:30:06.585 }, 00:30:06.585 { 00:30:06.585 "name": "BaseBdev4", 00:30:06.585 "uuid": "c402448e-8cde-5527-83b8-d8b502df57fa", 00:30:06.585 "is_configured": true, 00:30:06.585 "data_offset": 2048, 00:30:06.585 "data_size": 63488 00:30:06.585 } 00:30:06.585 ] 00:30:06.585 }' 00:30:06.585 04:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:06.585 04:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:06.585 04:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:06.844 04:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:06.844 04:25:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 2784890 00:30:06.844 04:25:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@948 -- # '[' -z 2784890 ']' 00:30:06.844 04:25:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # kill -0 2784890 00:30:06.844 04:25:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # uname 00:30:06.844 04:25:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:06.844 04:25:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2784890 00:30:06.844 04:25:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:06.844 04:25:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:06.844 04:25:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2784890' 00:30:06.844 killing process with pid 2784890 00:30:06.844 04:25:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@967 -- # kill 2784890 00:30:06.844 
Received shutdown signal, test time was about 60.000000 seconds 00:30:06.844 00:30:06.844 Latency(us) 00:30:06.844 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:06.844 =================================================================================================================== 00:30:06.844 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:30:06.844 [2024-07-23 04:25:15.443531] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:06.844 [2024-07-23 04:25:15.443661] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:06.844 04:25:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # wait 2784890 00:30:06.844 [2024-07-23 04:25:15.443735] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:06.844 [2024-07-23 04:25:15.443755] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000045080 name raid_bdev1, state offline 00:30:07.413 [2024-07-23 04:25:16.060567] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:09.320 04:25:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:30:09.320 00:30:09.320 real 0m42.010s 00:30:09.320 user 1m0.253s 00:30:09.320 sys 0m6.986s 00:30:09.320 04:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:09.320 04:25:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:09.320 ************************************ 00:30:09.320 END TEST raid_rebuild_test_sb 00:30:09.320 ************************************ 00:30:09.320 04:25:17 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:30:09.320 04:25:17 bdev_raid -- bdev/bdev_raid.sh@879 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:30:09.320 04:25:17 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:30:09.320 04:25:17 bdev_raid -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:30:09.320 04:25:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:09.320 ************************************ 00:30:09.320 START TEST raid_rebuild_test_io 00:30:09.320 ************************************ 00:30:09.320 04:25:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 false true true 00:30:09.320 04:25:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:30:09.320 04:25:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:30:09.320 04:25:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:30:09.320 04:25:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:30:09.320 04:25:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:30:09.320 04:25:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:30:09.320 04:25:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:09.320 04:25:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:30:09.320 04:25:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:30:09.320 04:25:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:09.320 04:25:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:30:09.320 04:25:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:30:09.320 04:25:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:09.320 04:25:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev3 00:30:09.320 04:25:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:30:09.320 04:25:17 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:09.320 04:25:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev4 00:30:09.320 04:25:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:30:09.320 04:25:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:09.320 04:25:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:30:09.320 04:25:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:30:09.320 04:25:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:30:09.320 04:25:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:30:09.320 04:25:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:30:09.320 04:25:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:30:09.320 04:25:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:30:09.320 04:25:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:30:09.320 04:25:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:30:09.320 04:25:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:30:09.320 04:25:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:30:09.320 04:25:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # raid_pid=2792403 00:30:09.320 04:25:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 2792403 /var/tmp/spdk-raid.sock 00:30:09.320 04:25:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@829 -- # '[' -z 
2792403 ']' 00:30:09.320 04:25:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:30:09.320 04:25:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:09.320 04:25:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:30:09.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:30:09.320 04:25:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:09.320 04:25:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:09.320 [2024-07-23 04:25:18.015265] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:30:09.320 [2024-07-23 04:25:18.015381] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2792403 ] 00:30:09.320 I/O size of 3145728 is greater than zero copy threshold (65536). 00:30:09.320 Zero copy mechanism will not be used. 
00:30:09.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:09.580 EAL: Requested device 0000:3d:01.0 cannot be used 00:30:09.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:09.580 EAL: Requested device 0000:3d:01.1 cannot be used 00:30:09.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:09.580 EAL: Requested device 0000:3d:01.2 cannot be used 00:30:09.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:09.580 EAL: Requested device 0000:3d:01.3 cannot be used 00:30:09.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:09.580 EAL: Requested device 0000:3d:01.4 cannot be used 00:30:09.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:09.580 EAL: Requested device 0000:3d:01.5 cannot be used 00:30:09.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:09.580 EAL: Requested device 0000:3d:01.6 cannot be used 00:30:09.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:09.580 EAL: Requested device 0000:3d:01.7 cannot be used 00:30:09.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:09.580 EAL: Requested device 0000:3d:02.0 cannot be used 00:30:09.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:09.580 EAL: Requested device 0000:3d:02.1 cannot be used 00:30:09.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:09.580 EAL: Requested device 0000:3d:02.2 cannot be used 00:30:09.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:09.580 EAL: Requested device 0000:3d:02.3 cannot be used 00:30:09.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:09.580 EAL: Requested device 0000:3d:02.4 cannot be used 00:30:09.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:09.580 EAL: Requested device 0000:3d:02.5 cannot be used 00:30:09.580 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:09.580 EAL: Requested device 0000:3d:02.6 cannot be used 00:30:09.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:09.580 EAL: Requested device 0000:3d:02.7 cannot be used 00:30:09.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:09.580 EAL: Requested device 0000:3f:01.0 cannot be used 00:30:09.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:09.580 EAL: Requested device 0000:3f:01.1 cannot be used 00:30:09.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:09.580 EAL: Requested device 0000:3f:01.2 cannot be used 00:30:09.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:09.580 EAL: Requested device 0000:3f:01.3 cannot be used 00:30:09.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:09.580 EAL: Requested device 0000:3f:01.4 cannot be used 00:30:09.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:09.580 EAL: Requested device 0000:3f:01.5 cannot be used 00:30:09.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:09.580 EAL: Requested device 0000:3f:01.6 cannot be used 00:30:09.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:09.580 EAL: Requested device 0000:3f:01.7 cannot be used 00:30:09.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:09.580 EAL: Requested device 0000:3f:02.0 cannot be used 00:30:09.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:09.580 EAL: Requested device 0000:3f:02.1 cannot be used 00:30:09.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:09.580 EAL: Requested device 0000:3f:02.2 cannot be used 00:30:09.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:09.580 EAL: Requested device 0000:3f:02.3 cannot be used 00:30:09.580 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:09.580 EAL: Requested device 0000:3f:02.4 cannot be used 00:30:09.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:09.580 EAL: Requested device 0000:3f:02.5 cannot be used 00:30:09.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:09.580 EAL: Requested device 0000:3f:02.6 cannot be used 00:30:09.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:09.580 EAL: Requested device 0000:3f:02.7 cannot be used 00:30:09.580 [2024-07-23 04:25:18.240518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:09.839 [2024-07-23 04:25:18.522753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:10.098 [2024-07-23 04:25:18.834951] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:10.098 [2024-07-23 04:25:18.834987] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:10.357 04:25:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:10.357 04:25:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # return 0 00:30:10.357 04:25:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:30:10.357 04:25:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:30:10.616 BaseBdev1_malloc 00:30:10.616 04:25:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:10.876 [2024-07-23 04:25:19.404038] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:10.876 [2024-07-23 04:25:19.404104] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:30:10.876 [2024-07-23 04:25:19.404134] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:30:10.876 [2024-07-23 04:25:19.404162] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:10.876 [2024-07-23 04:25:19.406897] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:10.876 [2024-07-23 04:25:19.406936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:10.876 BaseBdev1 00:30:10.876 04:25:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:30:10.876 04:25:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:30:11.135 BaseBdev2_malloc 00:30:11.135 04:25:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:30:11.135 [2024-07-23 04:25:19.824226] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:30:11.135 [2024-07-23 04:25:19.824287] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:11.135 [2024-07-23 04:25:19.824314] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:30:11.135 [2024-07-23 04:25:19.824336] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:11.135 [2024-07-23 04:25:19.827074] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:11.135 [2024-07-23 04:25:19.827112] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:30:11.135 BaseBdev2 00:30:11.135 04:25:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in 
"${base_bdevs[@]}" 00:30:11.135 04:25:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:30:11.394 BaseBdev3_malloc 00:30:11.394 04:25:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:30:11.653 [2024-07-23 04:25:20.312951] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:30:11.653 [2024-07-23 04:25:20.313025] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:11.653 [2024-07-23 04:25:20.313056] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040e80 00:30:11.653 [2024-07-23 04:25:20.313075] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:11.653 [2024-07-23 04:25:20.315840] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:11.653 [2024-07-23 04:25:20.315880] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:30:11.653 BaseBdev3 00:30:11.654 04:25:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:30:11.654 04:25:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:30:11.913 BaseBdev4_malloc 00:30:11.913 04:25:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:30:12.172 [2024-07-23 04:25:20.765071] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:30:12.172 [2024-07-23 
04:25:20.765150] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:12.172 [2024-07-23 04:25:20.765178] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041a80 00:30:12.172 [2024-07-23 04:25:20.765197] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:12.172 [2024-07-23 04:25:20.767985] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:12.172 [2024-07-23 04:25:20.768022] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:30:12.172 BaseBdev4 00:30:12.172 04:25:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:30:12.431 spare_malloc 00:30:12.431 04:25:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:30:12.688 spare_delay 00:30:12.688 04:25:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:12.688 [2024-07-23 04:25:21.396636] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:12.688 [2024-07-23 04:25:21.396695] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:12.688 [2024-07-23 04:25:21.396722] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042c80 00:30:12.688 [2024-07-23 04:25:21.396740] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:12.688 [2024-07-23 04:25:21.399545] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:12.688 [2024-07-23 04:25:21.399581] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:12.688 spare 00:30:12.688 04:25:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:30:12.946 [2024-07-23 04:25:21.633303] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:12.946 [2024-07-23 04:25:21.635629] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:12.946 [2024-07-23 04:25:21.635700] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:12.946 [2024-07-23 04:25:21.635767] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:12.946 [2024-07-23 04:25:21.635872] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000043280 00:30:12.946 [2024-07-23 04:25:21.635889] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:30:12.946 [2024-07-23 04:25:21.636262] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:30:12.946 [2024-07-23 04:25:21.636522] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000043280 00:30:12.946 [2024-07-23 04:25:21.636541] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000043280 00:30:12.946 [2024-07-23 04:25:21.636750] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:12.946 04:25:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:30:12.946 04:25:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:12.946 04:25:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 
00:30:12.946 04:25:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:12.946 04:25:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:12.946 04:25:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:30:12.946 04:25:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:12.946 04:25:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:12.946 04:25:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:12.946 04:25:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:12.946 04:25:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:12.946 04:25:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:13.204 04:25:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:13.204 "name": "raid_bdev1", 00:30:13.204 "uuid": "5109cadf-7d68-46f1-b6f3-22748f641575", 00:30:13.204 "strip_size_kb": 0, 00:30:13.204 "state": "online", 00:30:13.204 "raid_level": "raid1", 00:30:13.204 "superblock": false, 00:30:13.204 "num_base_bdevs": 4, 00:30:13.204 "num_base_bdevs_discovered": 4, 00:30:13.204 "num_base_bdevs_operational": 4, 00:30:13.204 "base_bdevs_list": [ 00:30:13.204 { 00:30:13.204 "name": "BaseBdev1", 00:30:13.204 "uuid": "ef8c6fe9-688a-5344-ac19-1160da1ad50a", 00:30:13.204 "is_configured": true, 00:30:13.204 "data_offset": 0, 00:30:13.204 "data_size": 65536 00:30:13.204 }, 00:30:13.204 { 00:30:13.204 "name": "BaseBdev2", 00:30:13.204 "uuid": "21b31039-ebcc-5854-8472-6d2f8ff3f2d3", 00:30:13.204 "is_configured": true, 00:30:13.204 "data_offset": 0, 00:30:13.204 "data_size": 65536 
00:30:13.204 }, 00:30:13.204 { 00:30:13.204 "name": "BaseBdev3", 00:30:13.204 "uuid": "a78b5826-f08c-5b4b-a136-d122d394130e", 00:30:13.204 "is_configured": true, 00:30:13.204 "data_offset": 0, 00:30:13.204 "data_size": 65536 00:30:13.204 }, 00:30:13.204 { 00:30:13.204 "name": "BaseBdev4", 00:30:13.204 "uuid": "e87e63f5-343e-5f84-8f03-17724672be67", 00:30:13.204 "is_configured": true, 00:30:13.204 "data_offset": 0, 00:30:13.204 "data_size": 65536 00:30:13.204 } 00:30:13.204 ] 00:30:13.204 }' 00:30:13.205 04:25:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:13.205 04:25:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:14.138 04:25:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:30:14.138 04:25:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:30:14.395 [2024-07-23 04:25:22.933431] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:14.395 04:25:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:30:14.395 04:25:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:14.395 04:25:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:30:14.961 04:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:30:14.961 04:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:30:14.961 04:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:30:14.961 04:25:23 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:30:14.961 [2024-07-23 04:25:23.677896] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:14.961 [2024-07-23 04:25:23.677985] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010a50 00:30:14.961 [2024-07-23 04:25:23.680373] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000010a50 00:30:14.961 I/O size of 3145728 is greater than zero copy threshold (65536). 00:30:14.961 Zero copy mechanism will not be used. 00:30:14.961 Running I/O for 60 seconds... 00:30:14.961 04:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:14.961 04:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:14.961 04:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:14.961 04:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:14.961 04:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:14.961 04:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:14.961 04:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:14.961 04:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:14.961 04:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:14.961 04:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:14.961 04:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:30:14.961 04:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:15.219 04:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:15.219 "name": "raid_bdev1", 00:30:15.219 "uuid": "5109cadf-7d68-46f1-b6f3-22748f641575", 00:30:15.219 "strip_size_kb": 0, 00:30:15.219 "state": "online", 00:30:15.219 "raid_level": "raid1", 00:30:15.219 "superblock": false, 00:30:15.219 "num_base_bdevs": 4, 00:30:15.219 "num_base_bdevs_discovered": 3, 00:30:15.219 "num_base_bdevs_operational": 3, 00:30:15.219 "base_bdevs_list": [ 00:30:15.219 { 00:30:15.219 "name": null, 00:30:15.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:15.219 "is_configured": false, 00:30:15.219 "data_offset": 0, 00:30:15.219 "data_size": 65536 00:30:15.219 }, 00:30:15.219 { 00:30:15.219 "name": "BaseBdev2", 00:30:15.219 "uuid": "21b31039-ebcc-5854-8472-6d2f8ff3f2d3", 00:30:15.219 "is_configured": true, 00:30:15.219 "data_offset": 0, 00:30:15.219 "data_size": 65536 00:30:15.219 }, 00:30:15.219 { 00:30:15.219 "name": "BaseBdev3", 00:30:15.219 "uuid": "a78b5826-f08c-5b4b-a136-d122d394130e", 00:30:15.219 "is_configured": true, 00:30:15.219 "data_offset": 0, 00:30:15.219 "data_size": 65536 00:30:15.219 }, 00:30:15.219 { 00:30:15.219 "name": "BaseBdev4", 00:30:15.219 "uuid": "e87e63f5-343e-5f84-8f03-17724672be67", 00:30:15.219 "is_configured": true, 00:30:15.219 "data_offset": 0, 00:30:15.219 "data_size": 65536 00:30:15.219 } 00:30:15.219 ] 00:30:15.219 }' 00:30:15.219 04:25:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:15.219 04:25:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:15.785 04:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:16.043 [2024-07-23 
04:25:24.684272] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:16.043 04:25:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:30:16.043 [2024-07-23 04:25:24.775402] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010b20 00:30:16.043 [2024-07-23 04:25:24.777841] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:16.301 [2024-07-23 04:25:24.886695] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:16.301 [2024-07-23 04:25:24.887038] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:16.559 [2024-07-23 04:25:25.124928] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:16.818 [2024-07-23 04:25:25.519451] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:30:16.818 [2024-07-23 04:25:25.520128] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:30:17.077 04:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:17.077 04:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:17.077 04:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:17.077 04:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:17.077 04:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:17.077 04:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:30:17.077 04:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:17.077 [2024-07-23 04:25:25.846298] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:30:17.373 [2024-07-23 04:25:25.955382] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:30:17.373 [2024-07-23 04:25:25.956012] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:30:17.373 04:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:17.373 "name": "raid_bdev1", 00:30:17.373 "uuid": "5109cadf-7d68-46f1-b6f3-22748f641575", 00:30:17.373 "strip_size_kb": 0, 00:30:17.373 "state": "online", 00:30:17.373 "raid_level": "raid1", 00:30:17.373 "superblock": false, 00:30:17.373 "num_base_bdevs": 4, 00:30:17.373 "num_base_bdevs_discovered": 4, 00:30:17.373 "num_base_bdevs_operational": 4, 00:30:17.373 "process": { 00:30:17.373 "type": "rebuild", 00:30:17.373 "target": "spare", 00:30:17.373 "progress": { 00:30:17.373 "blocks": 16384, 00:30:17.373 "percent": 25 00:30:17.373 } 00:30:17.373 }, 00:30:17.373 "base_bdevs_list": [ 00:30:17.373 { 00:30:17.373 "name": "spare", 00:30:17.373 "uuid": "d697b29a-b52f-5d19-bc49-b3bb3ba1494d", 00:30:17.373 "is_configured": true, 00:30:17.373 "data_offset": 0, 00:30:17.373 "data_size": 65536 00:30:17.373 }, 00:30:17.373 { 00:30:17.373 "name": "BaseBdev2", 00:30:17.373 "uuid": "21b31039-ebcc-5854-8472-6d2f8ff3f2d3", 00:30:17.373 "is_configured": true, 00:30:17.373 "data_offset": 0, 00:30:17.373 "data_size": 65536 00:30:17.373 }, 00:30:17.373 { 00:30:17.373 "name": "BaseBdev3", 00:30:17.373 "uuid": "a78b5826-f08c-5b4b-a136-d122d394130e", 00:30:17.373 "is_configured": true, 00:30:17.373 "data_offset": 0, 00:30:17.373 "data_size": 65536 
00:30:17.373 }, 00:30:17.373 { 00:30:17.373 "name": "BaseBdev4", 00:30:17.373 "uuid": "e87e63f5-343e-5f84-8f03-17724672be67", 00:30:17.373 "is_configured": true, 00:30:17.373 "data_offset": 0, 00:30:17.373 "data_size": 65536 00:30:17.373 } 00:30:17.373 ] 00:30:17.373 }' 00:30:17.373 04:25:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:17.373 04:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:17.373 04:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:17.373 04:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:17.373 04:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:30:17.632 [2024-07-23 04:25:26.289270] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:17.632 [2024-07-23 04:25:26.409818] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:17.890 [2024-07-23 04:25:26.429102] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:17.890 [2024-07-23 04:25:26.429156] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:17.890 [2024-07-23 04:25:26.429172] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:17.890 [2024-07-23 04:25:26.457428] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000010a50 00:30:17.890 04:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:17.890 04:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:17.890 04:25:26 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:17.890 04:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:17.890 04:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:17.890 04:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:17.890 04:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:17.890 04:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:17.890 04:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:17.890 04:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:17.890 04:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:17.890 04:25:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:18.457 04:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:18.457 "name": "raid_bdev1", 00:30:18.457 "uuid": "5109cadf-7d68-46f1-b6f3-22748f641575", 00:30:18.457 "strip_size_kb": 0, 00:30:18.457 "state": "online", 00:30:18.457 "raid_level": "raid1", 00:30:18.457 "superblock": false, 00:30:18.457 "num_base_bdevs": 4, 00:30:18.457 "num_base_bdevs_discovered": 3, 00:30:18.457 "num_base_bdevs_operational": 3, 00:30:18.457 "base_bdevs_list": [ 00:30:18.457 { 00:30:18.457 "name": null, 00:30:18.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:18.457 "is_configured": false, 00:30:18.457 "data_offset": 0, 00:30:18.457 "data_size": 65536 00:30:18.457 }, 00:30:18.457 { 00:30:18.457 "name": "BaseBdev2", 00:30:18.457 "uuid": "21b31039-ebcc-5854-8472-6d2f8ff3f2d3", 00:30:18.457 "is_configured": true, 00:30:18.457 
"data_offset": 0, 00:30:18.457 "data_size": 65536 00:30:18.457 }, 00:30:18.457 { 00:30:18.457 "name": "BaseBdev3", 00:30:18.457 "uuid": "a78b5826-f08c-5b4b-a136-d122d394130e", 00:30:18.457 "is_configured": true, 00:30:18.457 "data_offset": 0, 00:30:18.457 "data_size": 65536 00:30:18.457 }, 00:30:18.457 { 00:30:18.457 "name": "BaseBdev4", 00:30:18.457 "uuid": "e87e63f5-343e-5f84-8f03-17724672be67", 00:30:18.457 "is_configured": true, 00:30:18.457 "data_offset": 0, 00:30:18.457 "data_size": 65536 00:30:18.457 } 00:30:18.457 ] 00:30:18.457 }' 00:30:18.457 04:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:18.457 04:25:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:19.024 04:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:19.024 04:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:19.024 04:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:19.024 04:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:19.024 04:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:19.024 04:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:19.024 04:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:19.282 04:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:19.282 "name": "raid_bdev1", 00:30:19.282 "uuid": "5109cadf-7d68-46f1-b6f3-22748f641575", 00:30:19.282 "strip_size_kb": 0, 00:30:19.282 "state": "online", 00:30:19.282 "raid_level": "raid1", 00:30:19.282 "superblock": false, 00:30:19.282 "num_base_bdevs": 4, 00:30:19.282 
"num_base_bdevs_discovered": 3, 00:30:19.282 "num_base_bdevs_operational": 3, 00:30:19.282 "base_bdevs_list": [ 00:30:19.282 { 00:30:19.282 "name": null, 00:30:19.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:19.282 "is_configured": false, 00:30:19.282 "data_offset": 0, 00:30:19.282 "data_size": 65536 00:30:19.282 }, 00:30:19.282 { 00:30:19.282 "name": "BaseBdev2", 00:30:19.282 "uuid": "21b31039-ebcc-5854-8472-6d2f8ff3f2d3", 00:30:19.282 "is_configured": true, 00:30:19.282 "data_offset": 0, 00:30:19.282 "data_size": 65536 00:30:19.282 }, 00:30:19.282 { 00:30:19.282 "name": "BaseBdev3", 00:30:19.282 "uuid": "a78b5826-f08c-5b4b-a136-d122d394130e", 00:30:19.282 "is_configured": true, 00:30:19.282 "data_offset": 0, 00:30:19.282 "data_size": 65536 00:30:19.282 }, 00:30:19.282 { 00:30:19.282 "name": "BaseBdev4", 00:30:19.282 "uuid": "e87e63f5-343e-5f84-8f03-17724672be67", 00:30:19.282 "is_configured": true, 00:30:19.282 "data_offset": 0, 00:30:19.282 "data_size": 65536 00:30:19.282 } 00:30:19.282 ] 00:30:19.282 }' 00:30:19.282 04:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:19.282 04:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:19.282 04:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:19.282 04:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:19.282 04:25:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:19.540 [2024-07-23 04:25:28.157939] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:19.540 04:25:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:30:19.540 [2024-07-23 04:25:28.220445] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000010bf0 00:30:19.540 [2024-07-23 04:25:28.222831] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:19.798 [2024-07-23 04:25:28.339236] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:19.798 [2024-07-23 04:25:28.340480] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:20.056 [2024-07-23 04:25:28.600577] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:20.056 [2024-07-23 04:25:28.600796] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:20.315 [2024-07-23 04:25:28.843252] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:30:20.315 [2024-07-23 04:25:28.843637] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:30:20.573 [2024-07-23 04:25:29.112943] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:30:20.573 04:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:20.573 04:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:20.573 04:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:20.573 04:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:20.573 04:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:20.573 04:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:20.573 04:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:20.831 [2024-07-23 04:25:29.496184] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:30:21.090 04:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:21.090 "name": "raid_bdev1", 00:30:21.090 "uuid": "5109cadf-7d68-46f1-b6f3-22748f641575", 00:30:21.090 "strip_size_kb": 0, 00:30:21.090 "state": "online", 00:30:21.090 "raid_level": "raid1", 00:30:21.090 "superblock": false, 00:30:21.090 "num_base_bdevs": 4, 00:30:21.090 "num_base_bdevs_discovered": 4, 00:30:21.090 "num_base_bdevs_operational": 4, 00:30:21.090 "process": { 00:30:21.090 "type": "rebuild", 00:30:21.090 "target": "spare", 00:30:21.090 "progress": { 00:30:21.090 "blocks": 18432, 00:30:21.090 "percent": 28 00:30:21.090 } 00:30:21.090 }, 00:30:21.090 "base_bdevs_list": [ 00:30:21.090 { 00:30:21.090 "name": "spare", 00:30:21.090 "uuid": "d697b29a-b52f-5d19-bc49-b3bb3ba1494d", 00:30:21.090 "is_configured": true, 00:30:21.090 "data_offset": 0, 00:30:21.090 "data_size": 65536 00:30:21.090 }, 00:30:21.090 { 00:30:21.090 "name": "BaseBdev2", 00:30:21.090 "uuid": "21b31039-ebcc-5854-8472-6d2f8ff3f2d3", 00:30:21.090 "is_configured": true, 00:30:21.090 "data_offset": 0, 00:30:21.090 "data_size": 65536 00:30:21.090 }, 00:30:21.090 { 00:30:21.090 "name": "BaseBdev3", 00:30:21.090 "uuid": "a78b5826-f08c-5b4b-a136-d122d394130e", 00:30:21.090 "is_configured": true, 00:30:21.090 "data_offset": 0, 00:30:21.090 "data_size": 65536 00:30:21.090 }, 00:30:21.090 { 00:30:21.090 "name": "BaseBdev4", 00:30:21.090 "uuid": "e87e63f5-343e-5f84-8f03-17724672be67", 00:30:21.090 "is_configured": true, 00:30:21.090 "data_offset": 0, 00:30:21.090 "data_size": 65536 00:30:21.090 } 
00:30:21.090 ] 00:30:21.090 }' 00:30:21.090 04:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:21.090 04:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:21.090 04:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:21.090 [2024-07-23 04:25:29.784855] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:30:21.090 04:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:21.090 04:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:30:21.090 04:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:30:21.090 04:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:30:21.090 04:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:30:21.090 04:25:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@694 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:30:21.348 [2024-07-23 04:25:30.018227] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:30:21.348 [2024-07-23 04:25:30.029342] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:21.606 [2024-07-23 04:25:30.259967] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000010a50 00:30:21.607 [2024-07-23 04:25:30.260012] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000010bf0 00:30:21.607 04:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:30:21.607 04:25:30 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:30:21.607 04:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:21.607 04:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:21.607 04:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:21.607 04:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:21.607 04:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:21.607 04:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:21.607 04:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:22.174 [2024-07-23 04:25:30.730427] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:30:22.174 04:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:22.174 "name": "raid_bdev1", 00:30:22.174 "uuid": "5109cadf-7d68-46f1-b6f3-22748f641575", 00:30:22.174 "strip_size_kb": 0, 00:30:22.174 "state": "online", 00:30:22.174 "raid_level": "raid1", 00:30:22.174 "superblock": false, 00:30:22.174 "num_base_bdevs": 4, 00:30:22.174 "num_base_bdevs_discovered": 3, 00:30:22.174 "num_base_bdevs_operational": 3, 00:30:22.174 "process": { 00:30:22.174 "type": "rebuild", 00:30:22.174 "target": "spare", 00:30:22.174 "progress": { 00:30:22.174 "blocks": 32768, 00:30:22.174 "percent": 50 00:30:22.174 } 00:30:22.174 }, 00:30:22.174 "base_bdevs_list": [ 00:30:22.174 { 00:30:22.174 "name": "spare", 00:30:22.174 "uuid": "d697b29a-b52f-5d19-bc49-b3bb3ba1494d", 00:30:22.174 "is_configured": true, 00:30:22.174 "data_offset": 0, 00:30:22.174 
"data_size": 65536 00:30:22.174 }, 00:30:22.174 { 00:30:22.174 "name": null, 00:30:22.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:22.174 "is_configured": false, 00:30:22.174 "data_offset": 0, 00:30:22.174 "data_size": 65536 00:30:22.174 }, 00:30:22.174 { 00:30:22.174 "name": "BaseBdev3", 00:30:22.174 "uuid": "a78b5826-f08c-5b4b-a136-d122d394130e", 00:30:22.174 "is_configured": true, 00:30:22.174 "data_offset": 0, 00:30:22.174 "data_size": 65536 00:30:22.174 }, 00:30:22.174 { 00:30:22.174 "name": "BaseBdev4", 00:30:22.174 "uuid": "e87e63f5-343e-5f84-8f03-17724672be67", 00:30:22.174 "is_configured": true, 00:30:22.174 "data_offset": 0, 00:30:22.174 "data_size": 65536 00:30:22.174 } 00:30:22.174 ] 00:30:22.174 }' 00:30:22.174 04:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:22.174 04:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:22.174 04:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:22.174 04:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:22.174 04:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@705 -- # local timeout=1016 00:30:22.174 04:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:30:22.174 04:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:22.174 04:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:22.174 04:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:22.174 04:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:22.174 04:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:22.174 04:25:30 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:22.174 04:25:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:22.433 [2024-07-23 04:25:30.960385] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:30:22.433 04:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:22.433 "name": "raid_bdev1", 00:30:22.433 "uuid": "5109cadf-7d68-46f1-b6f3-22748f641575", 00:30:22.433 "strip_size_kb": 0, 00:30:22.433 "state": "online", 00:30:22.433 "raid_level": "raid1", 00:30:22.433 "superblock": false, 00:30:22.433 "num_base_bdevs": 4, 00:30:22.433 "num_base_bdevs_discovered": 3, 00:30:22.433 "num_base_bdevs_operational": 3, 00:30:22.433 "process": { 00:30:22.433 "type": "rebuild", 00:30:22.433 "target": "spare", 00:30:22.433 "progress": { 00:30:22.433 "blocks": 36864, 00:30:22.433 "percent": 56 00:30:22.433 } 00:30:22.433 }, 00:30:22.433 "base_bdevs_list": [ 00:30:22.433 { 00:30:22.433 "name": "spare", 00:30:22.433 "uuid": "d697b29a-b52f-5d19-bc49-b3bb3ba1494d", 00:30:22.433 "is_configured": true, 00:30:22.433 "data_offset": 0, 00:30:22.433 "data_size": 65536 00:30:22.433 }, 00:30:22.433 { 00:30:22.433 "name": null, 00:30:22.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:22.433 "is_configured": false, 00:30:22.433 "data_offset": 0, 00:30:22.433 "data_size": 65536 00:30:22.433 }, 00:30:22.433 { 00:30:22.433 "name": "BaseBdev3", 00:30:22.433 "uuid": "a78b5826-f08c-5b4b-a136-d122d394130e", 00:30:22.433 "is_configured": true, 00:30:22.433 "data_offset": 0, 00:30:22.433 "data_size": 65536 00:30:22.434 }, 00:30:22.434 { 00:30:22.434 "name": "BaseBdev4", 00:30:22.434 "uuid": "e87e63f5-343e-5f84-8f03-17724672be67", 00:30:22.434 "is_configured": true, 00:30:22.434 
"data_offset": 0, 00:30:22.434 "data_size": 65536 00:30:22.434 } 00:30:22.434 ] 00:30:22.434 }' 00:30:22.434 04:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:22.434 [2024-07-23 04:25:31.198687] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:30:22.692 04:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:22.692 04:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:22.692 04:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:22.692 04:25:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:30:22.692 [2024-07-23 04:25:31.402379] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:30:22.692 [2024-07-23 04:25:31.402562] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:30:22.951 [2024-07-23 04:25:31.659331] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:30:23.520 04:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:30:23.520 04:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:23.520 04:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:23.520 04:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:23.520 04:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:23.520 04:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:23.520 04:25:32 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:23.520 04:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:23.778 04:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:23.778 "name": "raid_bdev1", 00:30:23.778 "uuid": "5109cadf-7d68-46f1-b6f3-22748f641575", 00:30:23.778 "strip_size_kb": 0, 00:30:23.778 "state": "online", 00:30:23.778 "raid_level": "raid1", 00:30:23.778 "superblock": false, 00:30:23.778 "num_base_bdevs": 4, 00:30:23.778 "num_base_bdevs_discovered": 3, 00:30:23.778 "num_base_bdevs_operational": 3, 00:30:23.778 "process": { 00:30:23.778 "type": "rebuild", 00:30:23.778 "target": "spare", 00:30:23.778 "progress": { 00:30:23.778 "blocks": 57344, 00:30:23.778 "percent": 87 00:30:23.778 } 00:30:23.778 }, 00:30:23.778 "base_bdevs_list": [ 00:30:23.778 { 00:30:23.778 "name": "spare", 00:30:23.778 "uuid": "d697b29a-b52f-5d19-bc49-b3bb3ba1494d", 00:30:23.778 "is_configured": true, 00:30:23.778 "data_offset": 0, 00:30:23.778 "data_size": 65536 00:30:23.778 }, 00:30:23.778 { 00:30:23.778 "name": null, 00:30:23.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:23.778 "is_configured": false, 00:30:23.778 "data_offset": 0, 00:30:23.778 "data_size": 65536 00:30:23.778 }, 00:30:23.778 { 00:30:23.778 "name": "BaseBdev3", 00:30:23.778 "uuid": "a78b5826-f08c-5b4b-a136-d122d394130e", 00:30:23.778 "is_configured": true, 00:30:23.778 "data_offset": 0, 00:30:23.778 "data_size": 65536 00:30:23.778 }, 00:30:23.778 { 00:30:23.778 "name": "BaseBdev4", 00:30:23.778 "uuid": "e87e63f5-343e-5f84-8f03-17724672be67", 00:30:23.778 "is_configured": true, 00:30:23.779 "data_offset": 0, 00:30:23.779 "data_size": 65536 00:30:23.779 } 00:30:23.779 ] 00:30:23.779 }' 00:30:23.779 04:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- 
# jq -r '.process.type // "none"' 00:30:23.779 04:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:23.779 04:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:23.779 [2024-07-23 04:25:32.547111] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:30:24.038 04:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:24.038 04:25:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:30:24.297 [2024-07-23 04:25:32.998225] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:30:24.556 [2024-07-23 04:25:33.106130] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:30:24.556 [2024-07-23 04:25:33.107946] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:24.815 04:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:30:24.815 04:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:24.815 04:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:24.815 04:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:24.815 04:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:24.815 04:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:24.815 04:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:24.815 04:25:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:30:25.383 04:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:25.383 "name": "raid_bdev1", 00:30:25.383 "uuid": "5109cadf-7d68-46f1-b6f3-22748f641575", 00:30:25.383 "strip_size_kb": 0, 00:30:25.383 "state": "online", 00:30:25.383 "raid_level": "raid1", 00:30:25.383 "superblock": false, 00:30:25.383 "num_base_bdevs": 4, 00:30:25.383 "num_base_bdevs_discovered": 3, 00:30:25.383 "num_base_bdevs_operational": 3, 00:30:25.383 "base_bdevs_list": [ 00:30:25.383 { 00:30:25.383 "name": "spare", 00:30:25.383 "uuid": "d697b29a-b52f-5d19-bc49-b3bb3ba1494d", 00:30:25.383 "is_configured": true, 00:30:25.383 "data_offset": 0, 00:30:25.383 "data_size": 65536 00:30:25.383 }, 00:30:25.383 { 00:30:25.383 "name": null, 00:30:25.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:25.383 "is_configured": false, 00:30:25.383 "data_offset": 0, 00:30:25.383 "data_size": 65536 00:30:25.383 }, 00:30:25.383 { 00:30:25.383 "name": "BaseBdev3", 00:30:25.383 "uuid": "a78b5826-f08c-5b4b-a136-d122d394130e", 00:30:25.383 "is_configured": true, 00:30:25.383 "data_offset": 0, 00:30:25.383 "data_size": 65536 00:30:25.383 }, 00:30:25.383 { 00:30:25.383 "name": "BaseBdev4", 00:30:25.383 "uuid": "e87e63f5-343e-5f84-8f03-17724672be67", 00:30:25.383 "is_configured": true, 00:30:25.383 "data_offset": 0, 00:30:25.383 "data_size": 65536 00:30:25.383 } 00:30:25.383 ] 00:30:25.383 }' 00:30:25.383 04:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:25.383 04:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:30:25.383 04:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:25.642 04:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:30:25.642 04:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # break 00:30:25.642 04:25:34 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:25.642 04:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:25.642 04:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:25.642 04:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:25.642 04:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:25.642 04:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:25.642 04:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:25.901 04:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:25.901 "name": "raid_bdev1", 00:30:25.901 "uuid": "5109cadf-7d68-46f1-b6f3-22748f641575", 00:30:25.901 "strip_size_kb": 0, 00:30:25.901 "state": "online", 00:30:25.901 "raid_level": "raid1", 00:30:25.901 "superblock": false, 00:30:25.901 "num_base_bdevs": 4, 00:30:25.901 "num_base_bdevs_discovered": 3, 00:30:25.901 "num_base_bdevs_operational": 3, 00:30:25.901 "base_bdevs_list": [ 00:30:25.901 { 00:30:25.901 "name": "spare", 00:30:25.901 "uuid": "d697b29a-b52f-5d19-bc49-b3bb3ba1494d", 00:30:25.901 "is_configured": true, 00:30:25.901 "data_offset": 0, 00:30:25.901 "data_size": 65536 00:30:25.901 }, 00:30:25.901 { 00:30:25.901 "name": null, 00:30:25.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:25.901 "is_configured": false, 00:30:25.901 "data_offset": 0, 00:30:25.901 "data_size": 65536 00:30:25.901 }, 00:30:25.901 { 00:30:25.901 "name": "BaseBdev3", 00:30:25.901 "uuid": "a78b5826-f08c-5b4b-a136-d122d394130e", 00:30:25.901 "is_configured": true, 00:30:25.901 "data_offset": 0, 00:30:25.901 "data_size": 65536 
00:30:25.901 }, 00:30:25.901 { 00:30:25.901 "name": "BaseBdev4", 00:30:25.901 "uuid": "e87e63f5-343e-5f84-8f03-17724672be67", 00:30:25.901 "is_configured": true, 00:30:25.901 "data_offset": 0, 00:30:25.901 "data_size": 65536 00:30:25.901 } 00:30:25.901 ] 00:30:25.901 }' 00:30:25.901 04:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:25.901 04:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:25.901 04:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:25.901 04:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:25.901 04:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:25.901 04:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:25.901 04:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:25.901 04:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:25.901 04:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:25.901 04:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:25.901 04:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:25.901 04:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:25.901 04:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:25.901 04:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:25.901 04:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:30:25.901 04:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:26.160 04:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:26.160 "name": "raid_bdev1", 00:30:26.160 "uuid": "5109cadf-7d68-46f1-b6f3-22748f641575", 00:30:26.160 "strip_size_kb": 0, 00:30:26.160 "state": "online", 00:30:26.160 "raid_level": "raid1", 00:30:26.160 "superblock": false, 00:30:26.160 "num_base_bdevs": 4, 00:30:26.160 "num_base_bdevs_discovered": 3, 00:30:26.160 "num_base_bdevs_operational": 3, 00:30:26.160 "base_bdevs_list": [ 00:30:26.160 { 00:30:26.160 "name": "spare", 00:30:26.160 "uuid": "d697b29a-b52f-5d19-bc49-b3bb3ba1494d", 00:30:26.160 "is_configured": true, 00:30:26.160 "data_offset": 0, 00:30:26.160 "data_size": 65536 00:30:26.160 }, 00:30:26.160 { 00:30:26.160 "name": null, 00:30:26.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:26.160 "is_configured": false, 00:30:26.160 "data_offset": 0, 00:30:26.160 "data_size": 65536 00:30:26.160 }, 00:30:26.160 { 00:30:26.160 "name": "BaseBdev3", 00:30:26.160 "uuid": "a78b5826-f08c-5b4b-a136-d122d394130e", 00:30:26.160 "is_configured": true, 00:30:26.160 "data_offset": 0, 00:30:26.160 "data_size": 65536 00:30:26.161 }, 00:30:26.161 { 00:30:26.161 "name": "BaseBdev4", 00:30:26.161 "uuid": "e87e63f5-343e-5f84-8f03-17724672be67", 00:30:26.161 "is_configured": true, 00:30:26.161 "data_offset": 0, 00:30:26.161 "data_size": 65536 00:30:26.161 } 00:30:26.161 ] 00:30:26.161 }' 00:30:26.161 04:25:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:26.161 04:25:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:26.729 04:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:30:26.988 [2024-07-23 04:25:35.554023] 
bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:26.988 [2024-07-23 04:25:35.554068] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:26.988 00:30:26.988 Latency(us) 00:30:26.988 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:26.988 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:30:26.988 raid_bdev1 : 11.94 94.25 282.74 0.00 0.00 14296.22 337.51 121634.82 00:30:26.988 =================================================================================================================== 00:30:26.988 Total : 94.25 282.74 0.00 0.00 14296.22 337.51 121634.82 00:30:26.988 [2024-07-23 04:25:35.675341] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:26.988 [2024-07-23 04:25:35.675391] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:26.988 [2024-07-23 04:25:35.675504] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:26.988 [2024-07-23 04:25:35.675524] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000043280 name raid_bdev1, state offline 00:30:26.988 0 00:30:26.988 04:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:26.988 04:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # jq length 00:30:27.246 04:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:30:27.246 04:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:30:27.246 04:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:30:27.246 04:25:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 
00:30:27.246 04:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:27.246 04:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:30:27.246 04:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:27.246 04:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:30:27.246 04:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:27.246 04:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:30:27.246 04:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:27.246 04:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:27.246 04:25:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:30:27.505 /dev/nbd0 00:30:27.505 04:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:27.505 04:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:27.505 04:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:30:27.505 04:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:30:27.505 04:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:30:27.505 04:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:30:27.505 04:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:30:27.505 04:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:30:27.505 04:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:30:27.505 
04:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:30:27.505 04:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:27.505 1+0 records in 00:30:27.505 1+0 records out 00:30:27.505 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286522 s, 14.3 MB/s 00:30:27.505 04:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:27.505 04:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:30:27.505 04:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:27.505 04:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:30:27.505 04:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:30:27.505 04:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:27.505 04:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:27.505 04:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:30:27.505 04:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z '' ']' 00:30:27.505 04:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # continue 00:30:27.505 04:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:30:27.505 04:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev3 ']' 00:30:27.505 04:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:30:27.505 04:25:36 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:27.505 04:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:30:27.505 04:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:27.505 04:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:30:27.505 04:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:27.505 04:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:30:27.505 04:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:27.505 04:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:27.505 04:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:30:27.764 /dev/nbd1 00:30:27.764 04:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:27.764 04:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:27.764 04:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:30:27.764 04:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:30:27.764 04:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:30:27.764 04:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:30:27.764 04:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:30:27.764 04:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:30:27.764 04:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:30:27.764 04:25:36 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@882 -- # (( i <= 20 )) 00:30:27.764 04:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:27.764 1+0 records in 00:30:27.764 1+0 records out 00:30:27.764 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296293 s, 13.8 MB/s 00:30:27.764 04:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:27.764 04:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:30:27.764 04:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:27.764 04:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:30:27.764 04:25:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:30:27.764 04:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:27.764 04:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:27.764 04:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:30:28.022 04:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:30:28.022 04:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:28.022 04:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:30:28.022 04:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:28.022 04:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:30:28.022 04:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:30:28.022 04:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:30:28.280 04:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:28.280 04:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:28.280 04:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:28.280 04:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:28.280 04:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:28.280 04:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:28.280 04:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:30:28.280 04:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:30:28.280 04:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:30:28.280 04:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev4 ']' 00:30:28.280 04:25:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:30:28.280 04:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:28.280 04:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:30:28.280 04:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:28.280 04:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:30:28.280 04:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:28.280 04:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:30:28.280 
04:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:28.280 04:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:28.280 04:25:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:30:28.539 /dev/nbd1 00:30:28.539 04:25:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:28.539 04:25:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:28.539 04:25:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:30:28.539 04:25:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:30:28.539 04:25:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:30:28.539 04:25:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:30:28.539 04:25:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:30:28.539 04:25:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:30:28.539 04:25:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:30:28.539 04:25:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:30:28.539 04:25:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:28.539 1+0 records in 00:30:28.539 1+0 records out 00:30:28.539 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198137 s, 20.7 MB/s 00:30:28.539 04:25:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:28.539 04:25:37 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:30:28.539 04:25:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:28.539 04:25:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:30:28.539 04:25:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:30:28.539 04:25:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:28.539 04:25:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:28.539 04:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:30:28.539 04:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:30:28.539 04:25:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:28.539 04:25:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:30:28.539 04:25:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:28.539 04:25:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:30:28.539 04:25:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:28.539 04:25:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:30:28.797 04:25:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:28.797 04:25:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:28.797 04:25:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:28.797 04:25:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- 
# (( i = 1 )) 00:30:28.797 04:25:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:28.797 04:25:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:28.797 04:25:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:30:28.797 04:25:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:30:28.797 04:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:30:28.797 04:25:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:28.797 04:25:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:30:28.797 04:25:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:28.797 04:25:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:30:28.797 04:25:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:28.797 04:25:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:30:29.055 04:25:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:29.055 04:25:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:29.055 04:25:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:29.055 04:25:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:29.055 04:25:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:29.055 04:25:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:29.055 04:25:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:30:29.055 04:25:37 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:30:29.055 04:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:30:29.055 04:25:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@782 -- # killprocess 2792403 00:30:29.055 04:25:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@948 -- # '[' -z 2792403 ']' 00:30:29.055 04:25:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # kill -0 2792403 00:30:29.055 04:25:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # uname 00:30:29.055 04:25:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:29.055 04:25:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2792403 00:30:29.314 04:25:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:29.314 04:25:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:29.314 04:25:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2792403' 00:30:29.314 killing process with pid 2792403 00:30:29.314 04:25:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@967 -- # kill 2792403 00:30:29.314 Received shutdown signal, test time was about 14.135768 seconds 00:30:29.314 00:30:29.314 Latency(us) 00:30:29.314 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:29.314 =================================================================================================================== 00:30:29.314 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:29.314 [2024-07-23 04:25:37.850761] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:29.314 04:25:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # wait 2792403 00:30:29.881 [2024-07-23 04:25:38.368681] 
bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:31.809 04:25:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # return 0 00:30:31.809 00:30:31.809 real 0m22.316s 00:30:31.809 user 0m33.421s 00:30:31.809 sys 0m3.467s 00:30:31.809 04:25:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:31.809 04:25:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:31.809 ************************************ 00:30:31.809 END TEST raid_rebuild_test_io 00:30:31.809 ************************************ 00:30:31.809 04:25:40 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:30:31.809 04:25:40 bdev_raid -- bdev/bdev_raid.sh@880 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:30:31.809 04:25:40 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:30:31.809 04:25:40 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:31.809 04:25:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:31.809 ************************************ 00:30:31.809 START TEST raid_rebuild_test_sb_io 00:30:31.809 ************************************ 00:30:31.809 04:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 true true true 00:30:31.809 04:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:30:31.809 04:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:30:31.809 04:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:30:31.809 04:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:30:31.809 04:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:30:31.809 04:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:30:31.809 
04:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:31.809 04:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:30:31.809 04:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:30:31.809 04:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:31.809 04:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:30:31.809 04:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:30:31.809 04:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:31.809 04:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev3 00:30:31.809 04:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:30:31.809 04:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:31.809 04:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev4 00:30:31.809 04:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:30:31.809 04:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:31.809 04:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:30:31.809 04:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:30:31.809 04:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:30:31.809 04:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:30:31.809 04:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:30:31.809 04:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # 
local raid_bdev_size 00:30:31.809 04:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:30:31.809 04:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:30:31.809 04:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:30:31.809 04:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:30:31.809 04:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:30:31.809 04:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # raid_pid=2796331 00:30:31.809 04:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 2796331 /var/tmp/spdk-raid.sock 00:30:31.809 04:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:30:31.809 04:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@829 -- # '[' -z 2796331 ']' 00:30:31.809 04:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:30:31.810 04:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:31.810 04:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:30:31.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:30:31.810 04:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:31.810 04:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:31.810 [2024-07-23 04:25:40.435813] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:30:31.810 [2024-07-23 04:25:40.435941] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2796331 ] 00:30:31.810 I/O size of 3145728 is greater than zero copy threshold (65536). 00:30:31.810 Zero copy mechanism will not be used. 00:30:31.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:31.810 EAL: Requested device 0000:3d:01.0 cannot be used 00:30:31.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:31.810 EAL: Requested device 0000:3d:01.1 cannot be used 00:30:31.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:31.810 EAL: Requested device 0000:3d:01.2 cannot be used 00:30:31.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:31.810 EAL: Requested device 0000:3d:01.3 cannot be used 00:30:31.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:31.810 EAL: Requested device 0000:3d:01.4 cannot be used 00:30:31.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:31.810 EAL: Requested device 0000:3d:01.5 cannot be used 00:30:31.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:31.810 EAL: Requested device 0000:3d:01.6 cannot be used 00:30:31.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:31.810 EAL: Requested device 0000:3d:01.7 cannot be used 00:30:31.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:31.810 EAL: Requested device 
0000:3d:02.0 cannot be used 00:30:31.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:31.810 EAL: Requested device 0000:3d:02.1 cannot be used 00:30:31.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:31.810 EAL: Requested device 0000:3d:02.2 cannot be used 00:30:31.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:31.810 EAL: Requested device 0000:3d:02.3 cannot be used 00:30:31.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:31.810 EAL: Requested device 0000:3d:02.4 cannot be used 00:30:31.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:31.810 EAL: Requested device 0000:3d:02.5 cannot be used 00:30:31.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:31.810 EAL: Requested device 0000:3d:02.6 cannot be used 00:30:31.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:31.810 EAL: Requested device 0000:3d:02.7 cannot be used 00:30:31.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:31.810 EAL: Requested device 0000:3f:01.0 cannot be used 00:30:31.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:31.810 EAL: Requested device 0000:3f:01.1 cannot be used 00:30:31.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:31.810 EAL: Requested device 0000:3f:01.2 cannot be used 00:30:31.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:31.810 EAL: Requested device 0000:3f:01.3 cannot be used 00:30:31.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:31.810 EAL: Requested device 0000:3f:01.4 cannot be used 00:30:31.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:31.810 EAL: Requested device 0000:3f:01.5 cannot be used 00:30:31.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:31.810 EAL: Requested device 0000:3f:01.6 cannot be 
used 00:30:31.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:31.810 EAL: Requested device 0000:3f:01.7 cannot be used 00:30:31.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:31.810 EAL: Requested device 0000:3f:02.0 cannot be used 00:30:31.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:31.810 EAL: Requested device 0000:3f:02.1 cannot be used 00:30:31.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:31.810 EAL: Requested device 0000:3f:02.2 cannot be used 00:30:31.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:31.810 EAL: Requested device 0000:3f:02.3 cannot be used 00:30:31.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:31.810 EAL: Requested device 0000:3f:02.4 cannot be used 00:30:31.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:31.810 EAL: Requested device 0000:3f:02.5 cannot be used 00:30:31.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:31.810 EAL: Requested device 0000:3f:02.6 cannot be used 00:30:31.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:31.810 EAL: Requested device 0000:3f:02.7 cannot be used 00:30:32.068 [2024-07-23 04:25:40.663285] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:32.325 [2024-07-23 04:25:40.951447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:32.582 [2024-07-23 04:25:41.303875] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:32.582 [2024-07-23 04:25:41.303917] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:32.840 04:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:32.840 04:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # return 0 00:30:32.840 04:25:41 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:30:32.840 04:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:30:33.099 BaseBdev1_malloc 00:30:33.099 04:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:33.357 [2024-07-23 04:25:41.977098] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:33.357 [2024-07-23 04:25:41.977180] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:33.357 [2024-07-23 04:25:41.977212] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:30:33.357 [2024-07-23 04:25:41.977232] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:33.357 [2024-07-23 04:25:41.980027] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:33.357 [2024-07-23 04:25:41.980069] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:33.357 BaseBdev1 00:30:33.357 04:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:30:33.357 04:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:30:33.616 BaseBdev2_malloc 00:30:33.616 04:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:30:33.874 [2024-07-23 04:25:42.484209] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on BaseBdev2_malloc 00:30:33.874 [2024-07-23 04:25:42.484271] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:33.874 [2024-07-23 04:25:42.484298] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:30:33.874 [2024-07-23 04:25:42.484319] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:33.874 [2024-07-23 04:25:42.487045] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:33.874 [2024-07-23 04:25:42.487081] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:30:33.874 BaseBdev2 00:30:33.874 04:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:30:33.874 04:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:30:34.132 BaseBdev3_malloc 00:30:34.132 04:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:30:34.391 [2024-07-23 04:25:42.975946] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:30:34.391 [2024-07-23 04:25:42.976010] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:34.391 [2024-07-23 04:25:42.976039] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040e80 00:30:34.391 [2024-07-23 04:25:42.976058] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:34.391 [2024-07-23 04:25:42.978749] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:34.391 [2024-07-23 04:25:42.978785] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:30:34.391 BaseBdev3 00:30:34.391 04:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:30:34.391 04:25:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:30:34.649 BaseBdev4_malloc 00:30:34.649 04:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:30:34.907 [2024-07-23 04:25:43.479802] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:30:34.907 [2024-07-23 04:25:43.479864] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:34.907 [2024-07-23 04:25:43.479889] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041a80 00:30:34.907 [2024-07-23 04:25:43.479908] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:34.907 [2024-07-23 04:25:43.482632] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:34.907 [2024-07-23 04:25:43.482669] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:30:34.907 BaseBdev4 00:30:34.907 04:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:30:35.165 spare_malloc 00:30:35.166 04:25:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:30:35.424 spare_delay 00:30:35.424 04:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 
-- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:35.683 [2024-07-23 04:25:44.216063] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:35.683 [2024-07-23 04:25:44.216119] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:35.683 [2024-07-23 04:25:44.216152] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042c80 00:30:35.683 [2024-07-23 04:25:44.216171] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:35.683 [2024-07-23 04:25:44.218901] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:35.683 [2024-07-23 04:25:44.218937] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:35.683 spare 00:30:35.683 04:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:30:35.683 [2024-07-23 04:25:44.436724] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:35.683 [2024-07-23 04:25:44.438999] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:35.683 [2024-07-23 04:25:44.439069] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:35.683 [2024-07-23 04:25:44.439135] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:35.683 [2024-07-23 04:25:44.439392] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000043280 00:30:35.683 [2024-07-23 04:25:44.439415] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:30:35.683 [2024-07-23 04:25:44.439743] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000107e0 00:30:35.683 [2024-07-23 04:25:44.439997] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000043280 00:30:35.683 [2024-07-23 04:25:44.440013] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000043280 00:30:35.683 [2024-07-23 04:25:44.440219] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:35.683 04:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:30:35.683 04:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:35.683 04:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:35.683 04:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:35.683 04:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:35.683 04:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:30:35.683 04:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:35.683 04:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:35.683 04:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:35.683 04:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:35.683 04:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:35.683 04:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:35.941 04:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:30:35.941 "name": "raid_bdev1", 00:30:35.941 "uuid": "4ec49faf-e107-49a5-80d7-b60261ea46f7", 00:30:35.941 "strip_size_kb": 0, 00:30:35.941 "state": "online", 00:30:35.941 "raid_level": "raid1", 00:30:35.941 "superblock": true, 00:30:35.941 "num_base_bdevs": 4, 00:30:35.941 "num_base_bdevs_discovered": 4, 00:30:35.941 "num_base_bdevs_operational": 4, 00:30:35.941 "base_bdevs_list": [ 00:30:35.941 { 00:30:35.941 "name": "BaseBdev1", 00:30:35.941 "uuid": "ea4bd646-6e95-5a3a-90b7-98be7c7bb04d", 00:30:35.941 "is_configured": true, 00:30:35.941 "data_offset": 2048, 00:30:35.941 "data_size": 63488 00:30:35.941 }, 00:30:35.941 { 00:30:35.941 "name": "BaseBdev2", 00:30:35.941 "uuid": "c94030c3-5f32-5911-9af7-1f32f00631ac", 00:30:35.941 "is_configured": true, 00:30:35.941 "data_offset": 2048, 00:30:35.941 "data_size": 63488 00:30:35.941 }, 00:30:35.941 { 00:30:35.941 "name": "BaseBdev3", 00:30:35.941 "uuid": "5cc43b20-b2ad-56be-ae03-2581bd4d197d", 00:30:35.941 "is_configured": true, 00:30:35.941 "data_offset": 2048, 00:30:35.941 "data_size": 63488 00:30:35.941 }, 00:30:35.941 { 00:30:35.941 "name": "BaseBdev4", 00:30:35.941 "uuid": "56038954-e7a4-58ee-89e5-7af3c8cd51be", 00:30:35.941 "is_configured": true, 00:30:35.941 "data_offset": 2048, 00:30:35.941 "data_size": 63488 00:30:35.941 } 00:30:35.941 ] 00:30:35.941 }' 00:30:35.941 04:25:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:35.941 04:25:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:36.508 04:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:30:36.508 04:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:30:36.766 [2024-07-23 04:25:45.395710] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:36.766 
04:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:30:36.766 04:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:36.766 04:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:30:37.025 04:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:30:37.025 04:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:30:37.025 04:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:30:37.025 04:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:30:37.025 [2024-07-23 04:25:45.744703] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010a50 00:30:37.025 I/O size of 3145728 is greater than zero copy threshold (65536). 00:30:37.025 Zero copy mechanism will not be used. 00:30:37.025 Running I/O for 60 seconds... 
00:30:37.284 [2024-07-23 04:25:45.843077] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:37.284 [2024-07-23 04:25:45.851043] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000010a50 00:30:37.284 04:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:37.284 04:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:37.284 04:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:37.284 04:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:37.284 04:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:37.284 04:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:37.284 04:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:37.284 04:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:37.284 04:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:37.284 04:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:37.284 04:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:37.284 04:25:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:37.284 04:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:37.284 "name": "raid_bdev1", 00:30:37.284 "uuid": "4ec49faf-e107-49a5-80d7-b60261ea46f7", 00:30:37.284 "strip_size_kb": 0, 00:30:37.284 "state": "online", 00:30:37.284 "raid_level": 
"raid1", 00:30:37.284 "superblock": true, 00:30:37.284 "num_base_bdevs": 4, 00:30:37.284 "num_base_bdevs_discovered": 3, 00:30:37.284 "num_base_bdevs_operational": 3, 00:30:37.284 "base_bdevs_list": [ 00:30:37.284 { 00:30:37.284 "name": null, 00:30:37.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:37.284 "is_configured": false, 00:30:37.284 "data_offset": 2048, 00:30:37.284 "data_size": 63488 00:30:37.284 }, 00:30:37.284 { 00:30:37.284 "name": "BaseBdev2", 00:30:37.284 "uuid": "c94030c3-5f32-5911-9af7-1f32f00631ac", 00:30:37.284 "is_configured": true, 00:30:37.284 "data_offset": 2048, 00:30:37.284 "data_size": 63488 00:30:37.284 }, 00:30:37.284 { 00:30:37.284 "name": "BaseBdev3", 00:30:37.284 "uuid": "5cc43b20-b2ad-56be-ae03-2581bd4d197d", 00:30:37.284 "is_configured": true, 00:30:37.284 "data_offset": 2048, 00:30:37.284 "data_size": 63488 00:30:37.284 }, 00:30:37.284 { 00:30:37.284 "name": "BaseBdev4", 00:30:37.284 "uuid": "56038954-e7a4-58ee-89e5-7af3c8cd51be", 00:30:37.284 "is_configured": true, 00:30:37.284 "data_offset": 2048, 00:30:37.284 "data_size": 63488 00:30:37.284 } 00:30:37.284 ] 00:30:37.284 }' 00:30:37.285 04:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:37.285 04:25:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:37.851 04:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:38.111 [2024-07-23 04:25:46.812997] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:38.370 04:25:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:30:38.370 [2024-07-23 04:25:46.905241] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010b20 00:30:38.370 [2024-07-23 04:25:46.907714] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: 
Started rebuild on raid bdev raid_bdev1 00:30:38.370 [2024-07-23 04:25:47.026303] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:38.370 [2024-07-23 04:25:47.027549] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:38.629 [2024-07-23 04:25:47.251387] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:38.629 [2024-07-23 04:25:47.251584] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:39.198 [2024-07-23 04:25:47.731415] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:30:39.198 04:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:39.198 04:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:39.198 04:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:39.198 04:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:39.198 04:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:39.198 04:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:39.198 04:25:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:39.457 [2024-07-23 04:25:48.088726] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:30:39.457 04:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:30:39.457 "name": "raid_bdev1", 00:30:39.457 "uuid": "4ec49faf-e107-49a5-80d7-b60261ea46f7", 00:30:39.457 "strip_size_kb": 0, 00:30:39.457 "state": "online", 00:30:39.457 "raid_level": "raid1", 00:30:39.457 "superblock": true, 00:30:39.457 "num_base_bdevs": 4, 00:30:39.457 "num_base_bdevs_discovered": 4, 00:30:39.457 "num_base_bdevs_operational": 4, 00:30:39.457 "process": { 00:30:39.457 "type": "rebuild", 00:30:39.457 "target": "spare", 00:30:39.457 "progress": { 00:30:39.457 "blocks": 14336, 00:30:39.457 "percent": 22 00:30:39.457 } 00:30:39.457 }, 00:30:39.457 "base_bdevs_list": [ 00:30:39.457 { 00:30:39.457 "name": "spare", 00:30:39.457 "uuid": "adc88d25-4fff-5e8c-a86a-0e4ddaa0676a", 00:30:39.457 "is_configured": true, 00:30:39.457 "data_offset": 2048, 00:30:39.457 "data_size": 63488 00:30:39.457 }, 00:30:39.457 { 00:30:39.457 "name": "BaseBdev2", 00:30:39.457 "uuid": "c94030c3-5f32-5911-9af7-1f32f00631ac", 00:30:39.457 "is_configured": true, 00:30:39.457 "data_offset": 2048, 00:30:39.457 "data_size": 63488 00:30:39.457 }, 00:30:39.457 { 00:30:39.457 "name": "BaseBdev3", 00:30:39.457 "uuid": "5cc43b20-b2ad-56be-ae03-2581bd4d197d", 00:30:39.457 "is_configured": true, 00:30:39.457 "data_offset": 2048, 00:30:39.457 "data_size": 63488 00:30:39.457 }, 00:30:39.457 { 00:30:39.457 "name": "BaseBdev4", 00:30:39.457 "uuid": "56038954-e7a4-58ee-89e5-7af3c8cd51be", 00:30:39.457 "is_configured": true, 00:30:39.457 "data_offset": 2048, 00:30:39.457 "data_size": 63488 00:30:39.457 } 00:30:39.457 ] 00:30:39.457 }' 00:30:39.457 04:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:39.457 04:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:39.457 04:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:39.457 04:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ 
spare == \s\p\a\r\e ]] 00:30:39.457 04:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:30:39.716 [2024-07-23 04:25:48.327925] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:30:39.716 [2024-07-23 04:25:48.414003] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:39.716 [2024-07-23 04:25:48.440908] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:30:39.716 [2024-07-23 04:25:48.468436] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:39.716 [2024-07-23 04:25:48.480769] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:39.716 [2024-07-23 04:25:48.480816] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:39.716 [2024-07-23 04:25:48.480834] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:39.975 [2024-07-23 04:25:48.516709] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000010a50 00:30:39.975 04:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:39.975 04:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:39.975 04:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:39.975 04:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:39.976 04:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:39.976 04:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:30:39.976 04:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:39.976 04:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:39.976 04:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:39.976 04:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:39.976 04:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:39.976 04:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:40.235 04:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:40.235 "name": "raid_bdev1", 00:30:40.235 "uuid": "4ec49faf-e107-49a5-80d7-b60261ea46f7", 00:30:40.235 "strip_size_kb": 0, 00:30:40.235 "state": "online", 00:30:40.235 "raid_level": "raid1", 00:30:40.235 "superblock": true, 00:30:40.235 "num_base_bdevs": 4, 00:30:40.235 "num_base_bdevs_discovered": 3, 00:30:40.235 "num_base_bdevs_operational": 3, 00:30:40.235 "base_bdevs_list": [ 00:30:40.235 { 00:30:40.235 "name": null, 00:30:40.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:40.235 "is_configured": false, 00:30:40.235 "data_offset": 2048, 00:30:40.235 "data_size": 63488 00:30:40.235 }, 00:30:40.235 { 00:30:40.235 "name": "BaseBdev2", 00:30:40.235 "uuid": "c94030c3-5f32-5911-9af7-1f32f00631ac", 00:30:40.235 "is_configured": true, 00:30:40.235 "data_offset": 2048, 00:30:40.235 "data_size": 63488 00:30:40.235 }, 00:30:40.235 { 00:30:40.235 "name": "BaseBdev3", 00:30:40.235 "uuid": "5cc43b20-b2ad-56be-ae03-2581bd4d197d", 00:30:40.235 "is_configured": true, 00:30:40.235 "data_offset": 2048, 00:30:40.235 "data_size": 63488 00:30:40.235 }, 00:30:40.235 { 00:30:40.235 "name": 
"BaseBdev4", 00:30:40.235 "uuid": "56038954-e7a4-58ee-89e5-7af3c8cd51be", 00:30:40.235 "is_configured": true, 00:30:40.235 "data_offset": 2048, 00:30:40.235 "data_size": 63488 00:30:40.235 } 00:30:40.235 ] 00:30:40.235 }' 00:30:40.235 04:25:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:40.235 04:25:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:40.804 04:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:40.804 04:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:40.804 04:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:40.804 04:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:40.804 04:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:40.804 04:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:40.804 04:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:41.372 04:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:41.372 "name": "raid_bdev1", 00:30:41.372 "uuid": "4ec49faf-e107-49a5-80d7-b60261ea46f7", 00:30:41.372 "strip_size_kb": 0, 00:30:41.372 "state": "online", 00:30:41.372 "raid_level": "raid1", 00:30:41.372 "superblock": true, 00:30:41.372 "num_base_bdevs": 4, 00:30:41.372 "num_base_bdevs_discovered": 3, 00:30:41.372 "num_base_bdevs_operational": 3, 00:30:41.372 "base_bdevs_list": [ 00:30:41.372 { 00:30:41.372 "name": null, 00:30:41.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:41.372 "is_configured": false, 00:30:41.372 "data_offset": 2048, 
00:30:41.372 "data_size": 63488 00:30:41.372 }, 00:30:41.372 { 00:30:41.372 "name": "BaseBdev2", 00:30:41.372 "uuid": "c94030c3-5f32-5911-9af7-1f32f00631ac", 00:30:41.372 "is_configured": true, 00:30:41.372 "data_offset": 2048, 00:30:41.372 "data_size": 63488 00:30:41.372 }, 00:30:41.372 { 00:30:41.372 "name": "BaseBdev3", 00:30:41.372 "uuid": "5cc43b20-b2ad-56be-ae03-2581bd4d197d", 00:30:41.372 "is_configured": true, 00:30:41.372 "data_offset": 2048, 00:30:41.372 "data_size": 63488 00:30:41.372 }, 00:30:41.372 { 00:30:41.372 "name": "BaseBdev4", 00:30:41.372 "uuid": "56038954-e7a4-58ee-89e5-7af3c8cd51be", 00:30:41.372 "is_configured": true, 00:30:41.372 "data_offset": 2048, 00:30:41.372 "data_size": 63488 00:30:41.372 } 00:30:41.372 ] 00:30:41.372 }' 00:30:41.372 04:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:41.372 04:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:41.372 04:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:41.372 04:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:41.372 04:25:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:41.630 [2024-07-23 04:25:50.195342] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:41.630 [2024-07-23 04:25:50.259089] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010bf0 00:30:41.630 04:25:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:30:41.630 [2024-07-23 04:25:50.261545] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:41.630 [2024-07-23 04:25:50.382515] bdev_raid.c: 
851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:41.631 [2024-07-23 04:25:50.382865] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:41.889 [2024-07-23 04:25:50.513660] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:41.889 [2024-07-23 04:25:50.513941] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:42.147 [2024-07-23 04:25:50.886859] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:30:42.405 [2024-07-23 04:25:51.107819] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:30:42.405 [2024-07-23 04:25:51.108101] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:30:42.663 04:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:42.663 04:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:42.663 04:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:42.663 04:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:42.663 04:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:42.663 04:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:42.663 04:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:42.921 [2024-07-23 
04:25:51.450217] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:30:42.921 [2024-07-23 04:25:51.451539] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:30:42.921 [2024-07-23 04:25:51.673157] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:30:43.179 04:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:43.179 "name": "raid_bdev1", 00:30:43.179 "uuid": "4ec49faf-e107-49a5-80d7-b60261ea46f7", 00:30:43.179 "strip_size_kb": 0, 00:30:43.179 "state": "online", 00:30:43.179 "raid_level": "raid1", 00:30:43.179 "superblock": true, 00:30:43.179 "num_base_bdevs": 4, 00:30:43.179 "num_base_bdevs_discovered": 4, 00:30:43.179 "num_base_bdevs_operational": 4, 00:30:43.179 "process": { 00:30:43.179 "type": "rebuild", 00:30:43.179 "target": "spare", 00:30:43.179 "progress": { 00:30:43.179 "blocks": 16384, 00:30:43.179 "percent": 25 00:30:43.179 } 00:30:43.179 }, 00:30:43.179 "base_bdevs_list": [ 00:30:43.179 { 00:30:43.179 "name": "spare", 00:30:43.179 "uuid": "adc88d25-4fff-5e8c-a86a-0e4ddaa0676a", 00:30:43.179 "is_configured": true, 00:30:43.179 "data_offset": 2048, 00:30:43.179 "data_size": 63488 00:30:43.179 }, 00:30:43.179 { 00:30:43.179 "name": "BaseBdev2", 00:30:43.179 "uuid": "c94030c3-5f32-5911-9af7-1f32f00631ac", 00:30:43.179 "is_configured": true, 00:30:43.179 "data_offset": 2048, 00:30:43.179 "data_size": 63488 00:30:43.179 }, 00:30:43.179 { 00:30:43.179 "name": "BaseBdev3", 00:30:43.179 "uuid": "5cc43b20-b2ad-56be-ae03-2581bd4d197d", 00:30:43.179 "is_configured": true, 00:30:43.179 "data_offset": 2048, 00:30:43.179 "data_size": 63488 00:30:43.179 }, 00:30:43.179 { 00:30:43.179 "name": "BaseBdev4", 00:30:43.179 "uuid": "56038954-e7a4-58ee-89e5-7af3c8cd51be", 00:30:43.179 "is_configured": true, 
00:30:43.179 "data_offset": 2048, 00:30:43.179 "data_size": 63488 00:30:43.179 } 00:30:43.179 ] 00:30:43.179 }' 00:30:43.179 04:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:43.179 04:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:43.179 04:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:43.179 04:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:43.179 04:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:30:43.179 04:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:30:43.179 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:30:43.179 04:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:30:43.179 04:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:30:43.179 04:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:30:43.179 04:25:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@694 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:30:43.438 [2024-07-23 04:25:52.070923] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:43.438 [2024-07-23 04:25:52.171311] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:30:43.696 [2024-07-23 04:25:52.381890] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000010a50 00:30:43.696 [2024-07-23 04:25:52.381927] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000010bf0 
00:30:43.696 [2024-07-23 04:25:52.383648] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:30:43.696 [2024-07-23 04:25:52.393106] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:30:43.696 04:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:30:43.696 04:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:30:43.696 04:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:43.696 04:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:43.696 04:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:43.696 04:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:43.696 04:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:43.696 04:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:43.696 04:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:43.955 [2024-07-23 04:25:52.614604] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:30:43.955 [2024-07-23 04:25:52.725225] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:30:44.213 04:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:44.213 "name": "raid_bdev1", 00:30:44.213 "uuid": "4ec49faf-e107-49a5-80d7-b60261ea46f7", 00:30:44.213 
"strip_size_kb": 0, 00:30:44.213 "state": "online", 00:30:44.213 "raid_level": "raid1", 00:30:44.213 "superblock": true, 00:30:44.213 "num_base_bdevs": 4, 00:30:44.213 "num_base_bdevs_discovered": 3, 00:30:44.213 "num_base_bdevs_operational": 3, 00:30:44.213 "process": { 00:30:44.213 "type": "rebuild", 00:30:44.213 "target": "spare", 00:30:44.213 "progress": { 00:30:44.213 "blocks": 30720, 00:30:44.213 "percent": 48 00:30:44.213 } 00:30:44.213 }, 00:30:44.213 "base_bdevs_list": [ 00:30:44.213 { 00:30:44.213 "name": "spare", 00:30:44.213 "uuid": "adc88d25-4fff-5e8c-a86a-0e4ddaa0676a", 00:30:44.213 "is_configured": true, 00:30:44.213 "data_offset": 2048, 00:30:44.213 "data_size": 63488 00:30:44.213 }, 00:30:44.213 { 00:30:44.213 "name": null, 00:30:44.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:44.213 "is_configured": false, 00:30:44.213 "data_offset": 2048, 00:30:44.213 "data_size": 63488 00:30:44.213 }, 00:30:44.213 { 00:30:44.213 "name": "BaseBdev3", 00:30:44.213 "uuid": "5cc43b20-b2ad-56be-ae03-2581bd4d197d", 00:30:44.213 "is_configured": true, 00:30:44.213 "data_offset": 2048, 00:30:44.213 "data_size": 63488 00:30:44.213 }, 00:30:44.213 { 00:30:44.213 "name": "BaseBdev4", 00:30:44.213 "uuid": "56038954-e7a4-58ee-89e5-7af3c8cd51be", 00:30:44.213 "is_configured": true, 00:30:44.213 "data_offset": 2048, 00:30:44.213 "data_size": 63488 00:30:44.213 } 00:30:44.213 ] 00:30:44.213 }' 00:30:44.213 04:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:44.213 04:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:44.471 04:25:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:44.471 04:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:44.471 04:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@705 -- # local timeout=1039 
00:30:44.471 04:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:30:44.471 04:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:44.471 04:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:44.471 04:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:44.471 04:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:44.471 04:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:44.471 04:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:44.471 04:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:44.471 [2024-07-23 04:25:53.083251] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:30:44.471 [2024-07-23 04:25:53.083707] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:30:44.730 04:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:44.730 "name": "raid_bdev1", 00:30:44.730 "uuid": "4ec49faf-e107-49a5-80d7-b60261ea46f7", 00:30:44.730 "strip_size_kb": 0, 00:30:44.730 "state": "online", 00:30:44.730 "raid_level": "raid1", 00:30:44.730 "superblock": true, 00:30:44.730 "num_base_bdevs": 4, 00:30:44.730 "num_base_bdevs_discovered": 3, 00:30:44.730 "num_base_bdevs_operational": 3, 00:30:44.730 "process": { 00:30:44.730 "type": "rebuild", 00:30:44.730 "target": "spare", 00:30:44.730 "progress": { 00:30:44.730 "blocks": 34816, 00:30:44.730 "percent": 54 00:30:44.730 } 
00:30:44.730 }, 00:30:44.730 "base_bdevs_list": [ 00:30:44.730 { 00:30:44.730 "name": "spare", 00:30:44.730 "uuid": "adc88d25-4fff-5e8c-a86a-0e4ddaa0676a", 00:30:44.730 "is_configured": true, 00:30:44.730 "data_offset": 2048, 00:30:44.730 "data_size": 63488 00:30:44.730 }, 00:30:44.730 { 00:30:44.730 "name": null, 00:30:44.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:44.730 "is_configured": false, 00:30:44.730 "data_offset": 2048, 00:30:44.730 "data_size": 63488 00:30:44.730 }, 00:30:44.730 { 00:30:44.730 "name": "BaseBdev3", 00:30:44.730 "uuid": "5cc43b20-b2ad-56be-ae03-2581bd4d197d", 00:30:44.730 "is_configured": true, 00:30:44.730 "data_offset": 2048, 00:30:44.730 "data_size": 63488 00:30:44.730 }, 00:30:44.730 { 00:30:44.730 "name": "BaseBdev4", 00:30:44.730 "uuid": "56038954-e7a4-58ee-89e5-7af3c8cd51be", 00:30:44.730 "is_configured": true, 00:30:44.730 "data_offset": 2048, 00:30:44.730 "data_size": 63488 00:30:44.730 } 00:30:44.730 ] 00:30:44.730 }' 00:30:44.730 04:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:44.730 04:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:44.730 04:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:44.730 04:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:44.730 04:25:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:30:44.731 [2024-07-23 04:25:53.417549] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:30:44.988 [2024-07-23 04:25:53.527642] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:30:45.943 04:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:30:45.943 
04:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:45.943 04:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:45.943 04:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:45.943 04:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:45.943 04:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:45.943 04:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:45.943 04:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:45.943 04:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:45.943 "name": "raid_bdev1", 00:30:45.943 "uuid": "4ec49faf-e107-49a5-80d7-b60261ea46f7", 00:30:45.943 "strip_size_kb": 0, 00:30:45.943 "state": "online", 00:30:45.943 "raid_level": "raid1", 00:30:45.943 "superblock": true, 00:30:45.943 "num_base_bdevs": 4, 00:30:45.943 "num_base_bdevs_discovered": 3, 00:30:45.943 "num_base_bdevs_operational": 3, 00:30:45.943 "process": { 00:30:45.943 "type": "rebuild", 00:30:45.943 "target": "spare", 00:30:45.943 "progress": { 00:30:45.943 "blocks": 59392, 00:30:45.943 "percent": 93 00:30:45.943 } 00:30:45.943 }, 00:30:45.943 "base_bdevs_list": [ 00:30:45.943 { 00:30:45.943 "name": "spare", 00:30:45.943 "uuid": "adc88d25-4fff-5e8c-a86a-0e4ddaa0676a", 00:30:45.943 "is_configured": true, 00:30:45.943 "data_offset": 2048, 00:30:45.943 "data_size": 63488 00:30:45.943 }, 00:30:45.943 { 00:30:45.943 "name": null, 00:30:45.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:45.943 "is_configured": false, 00:30:45.943 "data_offset": 2048, 
00:30:45.943 "data_size": 63488 00:30:45.943 }, 00:30:45.943 { 00:30:45.943 "name": "BaseBdev3", 00:30:45.943 "uuid": "5cc43b20-b2ad-56be-ae03-2581bd4d197d", 00:30:45.943 "is_configured": true, 00:30:45.943 "data_offset": 2048, 00:30:45.943 "data_size": 63488 00:30:45.943 }, 00:30:45.943 { 00:30:45.943 "name": "BaseBdev4", 00:30:45.943 "uuid": "56038954-e7a4-58ee-89e5-7af3c8cd51be", 00:30:45.943 "is_configured": true, 00:30:45.943 "data_offset": 2048, 00:30:45.943 "data_size": 63488 00:30:45.943 } 00:30:45.943 ] 00:30:45.943 }' 00:30:45.943 04:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:45.943 04:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:45.943 04:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:46.215 04:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:46.215 04:25:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:30:46.215 [2024-07-23 04:25:54.790062] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:30:46.215 [2024-07-23 04:25:54.890278] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:30:46.215 [2024-07-23 04:25:54.892615] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:47.149 04:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:30:47.149 04:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:47.149 04:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:47.149 04:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:47.149 04:25:55 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:47.149 04:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:47.149 04:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:47.149 04:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:47.407 04:25:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:47.407 "name": "raid_bdev1", 00:30:47.407 "uuid": "4ec49faf-e107-49a5-80d7-b60261ea46f7", 00:30:47.407 "strip_size_kb": 0, 00:30:47.407 "state": "online", 00:30:47.407 "raid_level": "raid1", 00:30:47.407 "superblock": true, 00:30:47.407 "num_base_bdevs": 4, 00:30:47.407 "num_base_bdevs_discovered": 3, 00:30:47.407 "num_base_bdevs_operational": 3, 00:30:47.407 "base_bdevs_list": [ 00:30:47.407 { 00:30:47.407 "name": "spare", 00:30:47.407 "uuid": "adc88d25-4fff-5e8c-a86a-0e4ddaa0676a", 00:30:47.407 "is_configured": true, 00:30:47.407 "data_offset": 2048, 00:30:47.407 "data_size": 63488 00:30:47.407 }, 00:30:47.407 { 00:30:47.407 "name": null, 00:30:47.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:47.407 "is_configured": false, 00:30:47.407 "data_offset": 2048, 00:30:47.407 "data_size": 63488 00:30:47.407 }, 00:30:47.407 { 00:30:47.407 "name": "BaseBdev3", 00:30:47.408 "uuid": "5cc43b20-b2ad-56be-ae03-2581bd4d197d", 00:30:47.408 "is_configured": true, 00:30:47.408 "data_offset": 2048, 00:30:47.408 "data_size": 63488 00:30:47.408 }, 00:30:47.408 { 00:30:47.408 "name": "BaseBdev4", 00:30:47.408 "uuid": "56038954-e7a4-58ee-89e5-7af3c8cd51be", 00:30:47.408 "is_configured": true, 00:30:47.408 "data_offset": 2048, 00:30:47.408 "data_size": 63488 00:30:47.408 } 00:30:47.408 ] 00:30:47.408 }' 00:30:47.408 04:25:55 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:47.408 04:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:30:47.408 04:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:47.408 04:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:30:47.408 04:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # break 00:30:47.408 04:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:47.408 04:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:47.408 04:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:47.408 04:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:47.408 04:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:47.408 04:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:47.408 04:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:47.666 04:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:47.666 "name": "raid_bdev1", 00:30:47.666 "uuid": "4ec49faf-e107-49a5-80d7-b60261ea46f7", 00:30:47.666 "strip_size_kb": 0, 00:30:47.666 "state": "online", 00:30:47.666 "raid_level": "raid1", 00:30:47.666 "superblock": true, 00:30:47.666 "num_base_bdevs": 4, 00:30:47.666 "num_base_bdevs_discovered": 3, 00:30:47.666 "num_base_bdevs_operational": 3, 00:30:47.666 "base_bdevs_list": [ 00:30:47.666 { 00:30:47.666 "name": "spare", 00:30:47.666 "uuid": 
"adc88d25-4fff-5e8c-a86a-0e4ddaa0676a", 00:30:47.666 "is_configured": true, 00:30:47.666 "data_offset": 2048, 00:30:47.666 "data_size": 63488 00:30:47.666 }, 00:30:47.666 { 00:30:47.666 "name": null, 00:30:47.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:47.666 "is_configured": false, 00:30:47.666 "data_offset": 2048, 00:30:47.666 "data_size": 63488 00:30:47.666 }, 00:30:47.666 { 00:30:47.666 "name": "BaseBdev3", 00:30:47.666 "uuid": "5cc43b20-b2ad-56be-ae03-2581bd4d197d", 00:30:47.666 "is_configured": true, 00:30:47.666 "data_offset": 2048, 00:30:47.666 "data_size": 63488 00:30:47.666 }, 00:30:47.666 { 00:30:47.666 "name": "BaseBdev4", 00:30:47.666 "uuid": "56038954-e7a4-58ee-89e5-7af3c8cd51be", 00:30:47.666 "is_configured": true, 00:30:47.666 "data_offset": 2048, 00:30:47.666 "data_size": 63488 00:30:47.666 } 00:30:47.666 ] 00:30:47.666 }' 00:30:47.666 04:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:47.666 04:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:47.666 04:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:47.666 04:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:47.666 04:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:47.666 04:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:47.666 04:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:47.666 04:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:47.666 04:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:47.666 04:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:30:47.666 04:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:47.666 04:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:47.666 04:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:47.666 04:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:47.666 04:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:47.666 04:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:47.925 04:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:47.925 "name": "raid_bdev1", 00:30:47.925 "uuid": "4ec49faf-e107-49a5-80d7-b60261ea46f7", 00:30:47.925 "strip_size_kb": 0, 00:30:47.925 "state": "online", 00:30:47.925 "raid_level": "raid1", 00:30:47.925 "superblock": true, 00:30:47.925 "num_base_bdevs": 4, 00:30:47.925 "num_base_bdevs_discovered": 3, 00:30:47.925 "num_base_bdevs_operational": 3, 00:30:47.925 "base_bdevs_list": [ 00:30:47.925 { 00:30:47.925 "name": "spare", 00:30:47.925 "uuid": "adc88d25-4fff-5e8c-a86a-0e4ddaa0676a", 00:30:47.925 "is_configured": true, 00:30:47.925 "data_offset": 2048, 00:30:47.925 "data_size": 63488 00:30:47.925 }, 00:30:47.925 { 00:30:47.925 "name": null, 00:30:47.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:47.925 "is_configured": false, 00:30:47.925 "data_offset": 2048, 00:30:47.925 "data_size": 63488 00:30:47.925 }, 00:30:47.925 { 00:30:47.925 "name": "BaseBdev3", 00:30:47.925 "uuid": "5cc43b20-b2ad-56be-ae03-2581bd4d197d", 00:30:47.925 "is_configured": true, 00:30:47.925 "data_offset": 2048, 00:30:47.925 "data_size": 63488 00:30:47.925 }, 00:30:47.925 { 00:30:47.925 "name": 
"BaseBdev4", 00:30:47.925 "uuid": "56038954-e7a4-58ee-89e5-7af3c8cd51be", 00:30:47.925 "is_configured": true, 00:30:47.925 "data_offset": 2048, 00:30:47.925 "data_size": 63488 00:30:47.925 } 00:30:47.925 ] 00:30:47.925 }' 00:30:47.925 04:25:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:47.925 04:25:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:48.491 04:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:30:48.749 [2024-07-23 04:25:57.381060] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:48.749 [2024-07-23 04:25:57.381107] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:48.749 00:30:48.749 Latency(us) 00:30:48.749 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:48.749 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:30:48.749 raid_bdev1 : 11.65 89.16 267.48 0.00 0.00 14971.52 339.15 121634.82 00:30:48.749 =================================================================================================================== 00:30:48.749 Total : 89.16 267.48 0.00 0.00 14971.52 339.15 121634.82 00:30:48.749 [2024-07-23 04:25:57.458174] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:48.749 [2024-07-23 04:25:57.458223] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:48.749 0 00:30:48.749 [2024-07-23 04:25:57.458346] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:48.749 [2024-07-23 04:25:57.458368] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000043280 name raid_bdev1, state offline 00:30:48.749 04:25:57 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:48.749 04:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # jq length 00:30:49.008 04:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:30:49.008 04:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:30:49.008 04:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:30:49.008 04:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:30:49.008 04:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:49.008 04:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:30:49.008 04:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:49.008 04:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:30:49.008 04:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:49.008 04:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:30:49.008 04:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:49.008 04:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:49.008 04:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:30:49.266 /dev/nbd0 00:30:49.266 04:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:49.266 04:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:49.266 04:25:57 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:30:49.266 04:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:30:49.266 04:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:30:49.266 04:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:30:49.266 04:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:30:49.266 04:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:30:49.266 04:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:30:49.266 04:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:30:49.266 04:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:49.266 1+0 records in 00:30:49.266 1+0 records out 00:30:49.266 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269018 s, 15.2 MB/s 00:30:49.266 04:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:49.266 04:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:30:49.266 04:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:49.266 04:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:30:49.266 04:25:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:30:49.266 04:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:49.266 04:25:57 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:49.266 04:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:30:49.266 04:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z '' ']' 00:30:49.266 04:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # continue 00:30:49.266 04:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:30:49.266 04:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev3 ']' 00:30:49.266 04:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:30:49.266 04:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:49.266 04:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:30:49.266 04:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:49.266 04:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:30:49.266 04:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:49.266 04:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:30:49.266 04:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:49.266 04:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:49.266 04:25:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:30:49.524 /dev/nbd1 00:30:49.524 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:49.524 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd1 00:30:49.524 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:30:49.524 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:30:49.524 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:30:49.524 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:30:49.524 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:30:49.524 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:30:49.524 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:30:49.524 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:30:49.524 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:49.524 1+0 records in 00:30:49.524 1+0 records out 00:30:49.524 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287017 s, 14.3 MB/s 00:30:49.524 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:49.524 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:30:49.525 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:49.525 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:30:49.525 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:30:49.525 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:49.525 
04:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:49.525 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:30:49.782 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:30:49.782 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:49.782 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:30:49.782 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:49.782 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:30:49.782 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:49.782 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:30:50.040 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:50.040 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:50.040 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:50.040 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:50.040 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:50.040 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:50.040 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:30:50.040 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:30:50.040 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for 
bdev in "${base_bdevs[@]:1}" 00:30:50.040 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev4 ']' 00:30:50.041 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:30:50.041 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:50.041 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:30:50.041 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:50.041 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:30:50.041 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:50.041 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:30:50.041 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:50.041 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:50.041 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:30:50.299 /dev/nbd1 00:30:50.299 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:50.299 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:50.299 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:30:50.299 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:30:50.299 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:30:50.299 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 
00:30:50.299 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:30:50.299 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:30:50.299 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:30:50.299 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:30:50.299 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:50.299 1+0 records in 00:30:50.299 1+0 records out 00:30:50.299 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283698 s, 14.4 MB/s 00:30:50.299 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:50.299 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:30:50.299 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:50.299 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:30:50.299 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:30:50.299 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:50.299 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:50.299 04:25:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:30:50.299 04:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:30:50.299 04:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk-raid.sock 00:30:50.299 04:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:30:50.299 04:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:50.299 04:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:30:50.299 04:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:50.299 04:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:30:50.557 04:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:50.557 04:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:50.557 04:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:50.557 04:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:50.557 04:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:50.557 04:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:50.557 04:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:30:50.557 04:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:30:50.557 04:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:30:50.557 04:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:50.557 04:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:30:50.557 04:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:50.557 04:25:59 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:30:50.557 04:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:50.557 04:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:30:50.815 04:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:50.815 04:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:50.815 04:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:50.815 04:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:50.815 04:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:50.815 04:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:50.815 04:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:30:50.815 04:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:30:50.815 04:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:30:50.815 04:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:30:51.073 04:25:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:51.331 [2024-07-23 04:25:59.994668] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:51.331 [2024-07-23 04:25:59.994736] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:51.331 [2024-07-23 
04:25:59.994767] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000044d80 00:30:51.331 [2024-07-23 04:25:59.994786] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:51.331 [2024-07-23 04:25:59.997663] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:51.331 [2024-07-23 04:25:59.997703] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:51.331 [2024-07-23 04:25:59.997816] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:30:51.331 [2024-07-23 04:25:59.997883] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:51.331 [2024-07-23 04:25:59.998090] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:51.331 [2024-07-23 04:25:59.998220] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:51.331 spare 00:30:51.331 04:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:51.331 04:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:51.331 04:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:51.331 04:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:51.331 04:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:51.331 04:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:51.331 04:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:51.331 04:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:51.331 04:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:30:51.331 04:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:51.331 04:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:51.331 04:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:51.331 [2024-07-23 04:26:00.098570] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000045380 00:30:51.331 [2024-07-23 04:26:00.098608] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:30:51.331 [2024-07-23 04:26:00.098982] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000041990 00:30:51.331 [2024-07-23 04:26:00.099283] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000045380 00:30:51.331 [2024-07-23 04:26:00.099300] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000045380 00:30:51.331 [2024-07-23 04:26:00.099524] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:51.589 04:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:51.589 "name": "raid_bdev1", 00:30:51.589 "uuid": "4ec49faf-e107-49a5-80d7-b60261ea46f7", 00:30:51.589 "strip_size_kb": 0, 00:30:51.589 "state": "online", 00:30:51.589 "raid_level": "raid1", 00:30:51.589 "superblock": true, 00:30:51.589 "num_base_bdevs": 4, 00:30:51.589 "num_base_bdevs_discovered": 3, 00:30:51.589 "num_base_bdevs_operational": 3, 00:30:51.589 "base_bdevs_list": [ 00:30:51.589 { 00:30:51.589 "name": "spare", 00:30:51.589 "uuid": "adc88d25-4fff-5e8c-a86a-0e4ddaa0676a", 00:30:51.589 "is_configured": true, 00:30:51.589 "data_offset": 2048, 00:30:51.589 "data_size": 63488 00:30:51.589 }, 00:30:51.589 { 00:30:51.589 "name": null, 00:30:51.589 
"uuid": "00000000-0000-0000-0000-000000000000", 00:30:51.589 "is_configured": false, 00:30:51.589 "data_offset": 2048, 00:30:51.589 "data_size": 63488 00:30:51.589 }, 00:30:51.589 { 00:30:51.589 "name": "BaseBdev3", 00:30:51.589 "uuid": "5cc43b20-b2ad-56be-ae03-2581bd4d197d", 00:30:51.589 "is_configured": true, 00:30:51.590 "data_offset": 2048, 00:30:51.590 "data_size": 63488 00:30:51.590 }, 00:30:51.590 { 00:30:51.590 "name": "BaseBdev4", 00:30:51.590 "uuid": "56038954-e7a4-58ee-89e5-7af3c8cd51be", 00:30:51.590 "is_configured": true, 00:30:51.590 "data_offset": 2048, 00:30:51.590 "data_size": 63488 00:30:51.590 } 00:30:51.590 ] 00:30:51.590 }' 00:30:51.590 04:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:51.590 04:26:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:52.197 04:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:52.197 04:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:52.197 04:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:52.197 04:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:52.197 04:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:52.197 04:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:52.197 04:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:52.197 04:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:52.197 "name": "raid_bdev1", 00:30:52.197 "uuid": "4ec49faf-e107-49a5-80d7-b60261ea46f7", 00:30:52.197 "strip_size_kb": 0, 
00:30:52.197 "state": "online", 00:30:52.197 "raid_level": "raid1", 00:30:52.197 "superblock": true, 00:30:52.197 "num_base_bdevs": 4, 00:30:52.197 "num_base_bdevs_discovered": 3, 00:30:52.197 "num_base_bdevs_operational": 3, 00:30:52.197 "base_bdevs_list": [ 00:30:52.197 { 00:30:52.197 "name": "spare", 00:30:52.197 "uuid": "adc88d25-4fff-5e8c-a86a-0e4ddaa0676a", 00:30:52.197 "is_configured": true, 00:30:52.197 "data_offset": 2048, 00:30:52.197 "data_size": 63488 00:30:52.197 }, 00:30:52.197 { 00:30:52.197 "name": null, 00:30:52.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:52.197 "is_configured": false, 00:30:52.197 "data_offset": 2048, 00:30:52.197 "data_size": 63488 00:30:52.197 }, 00:30:52.197 { 00:30:52.197 "name": "BaseBdev3", 00:30:52.197 "uuid": "5cc43b20-b2ad-56be-ae03-2581bd4d197d", 00:30:52.197 "is_configured": true, 00:30:52.197 "data_offset": 2048, 00:30:52.197 "data_size": 63488 00:30:52.197 }, 00:30:52.197 { 00:30:52.197 "name": "BaseBdev4", 00:30:52.197 "uuid": "56038954-e7a4-58ee-89e5-7af3c8cd51be", 00:30:52.197 "is_configured": true, 00:30:52.197 "data_offset": 2048, 00:30:52.197 "data_size": 63488 00:30:52.197 } 00:30:52.197 ] 00:30:52.197 }' 00:30:52.197 04:26:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:52.455 04:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:52.455 04:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:52.455 04:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:52.455 04:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:52.455 04:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:30:52.714 04:26:01 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:30:52.714 04:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:30:53.280 [2024-07-23 04:26:01.776392] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:53.280 04:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:53.280 04:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:53.280 04:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:53.280 04:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:53.280 04:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:53.280 04:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:53.280 04:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:53.280 04:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:53.280 04:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:53.280 04:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:53.280 04:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:53.280 04:26:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:53.280 04:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:53.280 "name": 
"raid_bdev1", 00:30:53.280 "uuid": "4ec49faf-e107-49a5-80d7-b60261ea46f7", 00:30:53.280 "strip_size_kb": 0, 00:30:53.280 "state": "online", 00:30:53.280 "raid_level": "raid1", 00:30:53.280 "superblock": true, 00:30:53.280 "num_base_bdevs": 4, 00:30:53.280 "num_base_bdevs_discovered": 2, 00:30:53.280 "num_base_bdevs_operational": 2, 00:30:53.280 "base_bdevs_list": [ 00:30:53.280 { 00:30:53.280 "name": null, 00:30:53.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:53.280 "is_configured": false, 00:30:53.280 "data_offset": 2048, 00:30:53.280 "data_size": 63488 00:30:53.280 }, 00:30:53.280 { 00:30:53.280 "name": null, 00:30:53.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:53.280 "is_configured": false, 00:30:53.280 "data_offset": 2048, 00:30:53.280 "data_size": 63488 00:30:53.280 }, 00:30:53.280 { 00:30:53.280 "name": "BaseBdev3", 00:30:53.280 "uuid": "5cc43b20-b2ad-56be-ae03-2581bd4d197d", 00:30:53.280 "is_configured": true, 00:30:53.280 "data_offset": 2048, 00:30:53.280 "data_size": 63488 00:30:53.280 }, 00:30:53.280 { 00:30:53.280 "name": "BaseBdev4", 00:30:53.280 "uuid": "56038954-e7a4-58ee-89e5-7af3c8cd51be", 00:30:53.280 "is_configured": true, 00:30:53.280 "data_offset": 2048, 00:30:53.280 "data_size": 63488 00:30:53.280 } 00:30:53.280 ] 00:30:53.280 }' 00:30:53.280 04:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:53.280 04:26:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:53.847 04:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:54.105 [2024-07-23 04:26:02.831437] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:54.105 [2024-07-23 04:26:02.831678] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid 
bdev raid_bdev1 (6) 00:30:54.105 [2024-07-23 04:26:02.831706] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:30:54.105 [2024-07-23 04:26:02.831749] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:54.105 [2024-07-23 04:26:02.855941] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000041a60 00:30:54.105 [2024-07-23 04:26:02.858311] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:54.105 04:26:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # sleep 1 00:30:55.480 04:26:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:55.480 04:26:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:55.480 04:26:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:55.480 04:26:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:55.480 04:26:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:55.480 04:26:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:55.480 04:26:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:55.480 04:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:55.480 "name": "raid_bdev1", 00:30:55.480 "uuid": "4ec49faf-e107-49a5-80d7-b60261ea46f7", 00:30:55.480 "strip_size_kb": 0, 00:30:55.480 "state": "online", 00:30:55.480 "raid_level": "raid1", 00:30:55.480 "superblock": true, 00:30:55.480 "num_base_bdevs": 4, 00:30:55.480 "num_base_bdevs_discovered": 3, 00:30:55.480 
"num_base_bdevs_operational": 3, 00:30:55.480 "process": { 00:30:55.480 "type": "rebuild", 00:30:55.480 "target": "spare", 00:30:55.480 "progress": { 00:30:55.480 "blocks": 24576, 00:30:55.480 "percent": 38 00:30:55.480 } 00:30:55.480 }, 00:30:55.480 "base_bdevs_list": [ 00:30:55.480 { 00:30:55.480 "name": "spare", 00:30:55.480 "uuid": "adc88d25-4fff-5e8c-a86a-0e4ddaa0676a", 00:30:55.480 "is_configured": true, 00:30:55.480 "data_offset": 2048, 00:30:55.480 "data_size": 63488 00:30:55.480 }, 00:30:55.480 { 00:30:55.480 "name": null, 00:30:55.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:55.480 "is_configured": false, 00:30:55.480 "data_offset": 2048, 00:30:55.480 "data_size": 63488 00:30:55.480 }, 00:30:55.480 { 00:30:55.480 "name": "BaseBdev3", 00:30:55.480 "uuid": "5cc43b20-b2ad-56be-ae03-2581bd4d197d", 00:30:55.480 "is_configured": true, 00:30:55.480 "data_offset": 2048, 00:30:55.480 "data_size": 63488 00:30:55.480 }, 00:30:55.480 { 00:30:55.480 "name": "BaseBdev4", 00:30:55.480 "uuid": "56038954-e7a4-58ee-89e5-7af3c8cd51be", 00:30:55.480 "is_configured": true, 00:30:55.480 "data_offset": 2048, 00:30:55.480 "data_size": 63488 00:30:55.480 } 00:30:55.480 ] 00:30:55.480 }' 00:30:55.480 04:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:55.480 04:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:55.480 04:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:55.480 04:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:55.480 04:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:30:55.739 [2024-07-23 04:26:04.407989] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:55.739 
[2024-07-23 04:26:04.471483] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:55.739 [2024-07-23 04:26:04.471558] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:55.739 [2024-07-23 04:26:04.471581] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:55.739 [2024-07-23 04:26:04.471597] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:55.739 04:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:55.739 04:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:55.739 04:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:55.739 04:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:55.739 04:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:55.739 04:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:55.739 04:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:55.739 04:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:55.739 04:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:55.739 04:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:55.739 04:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:55.739 04:26:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:56.340 04:26:05 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:56.340 "name": "raid_bdev1", 00:30:56.340 "uuid": "4ec49faf-e107-49a5-80d7-b60261ea46f7", 00:30:56.340 "strip_size_kb": 0, 00:30:56.340 "state": "online", 00:30:56.340 "raid_level": "raid1", 00:30:56.340 "superblock": true, 00:30:56.340 "num_base_bdevs": 4, 00:30:56.340 "num_base_bdevs_discovered": 2, 00:30:56.340 "num_base_bdevs_operational": 2, 00:30:56.340 "base_bdevs_list": [ 00:30:56.340 { 00:30:56.340 "name": null, 00:30:56.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:56.340 "is_configured": false, 00:30:56.340 "data_offset": 2048, 00:30:56.340 "data_size": 63488 00:30:56.340 }, 00:30:56.340 { 00:30:56.340 "name": null, 00:30:56.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:56.340 "is_configured": false, 00:30:56.340 "data_offset": 2048, 00:30:56.340 "data_size": 63488 00:30:56.340 }, 00:30:56.340 { 00:30:56.340 "name": "BaseBdev3", 00:30:56.340 "uuid": "5cc43b20-b2ad-56be-ae03-2581bd4d197d", 00:30:56.340 "is_configured": true, 00:30:56.340 "data_offset": 2048, 00:30:56.340 "data_size": 63488 00:30:56.340 }, 00:30:56.340 { 00:30:56.340 "name": "BaseBdev4", 00:30:56.340 "uuid": "56038954-e7a4-58ee-89e5-7af3c8cd51be", 00:30:56.340 "is_configured": true, 00:30:56.340 "data_offset": 2048, 00:30:56.340 "data_size": 63488 00:30:56.340 } 00:30:56.340 ] 00:30:56.340 }' 00:30:56.340 04:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:56.340 04:26:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:56.907 04:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:57.166 [2024-07-23 04:26:05.768349] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:57.166 [2024-07-23 04:26:05.768430] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:57.166 [2024-07-23 04:26:05.768459] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000045980 00:30:57.166 [2024-07-23 04:26:05.768478] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:57.166 [2024-07-23 04:26:05.769094] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:57.166 [2024-07-23 04:26:05.769125] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:57.166 [2024-07-23 04:26:05.769264] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:30:57.166 [2024-07-23 04:26:05.769288] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:30:57.166 [2024-07-23 04:26:05.769306] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:30:57.166 [2024-07-23 04:26:05.769349] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:57.166 [2024-07-23 04:26:05.794011] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000041b30 00:30:57.166 spare 00:30:57.166 [2024-07-23 04:26:05.796409] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:57.166 04:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # sleep 1 00:30:58.102 04:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:58.102 04:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:58.103 04:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:58.103 04:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:58.103 04:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:58.103 04:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:58.103 04:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:58.362 04:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:58.362 "name": "raid_bdev1", 00:30:58.362 "uuid": "4ec49faf-e107-49a5-80d7-b60261ea46f7", 00:30:58.362 "strip_size_kb": 0, 00:30:58.362 "state": "online", 00:30:58.362 "raid_level": "raid1", 00:30:58.362 "superblock": true, 00:30:58.362 "num_base_bdevs": 4, 00:30:58.362 "num_base_bdevs_discovered": 3, 00:30:58.362 "num_base_bdevs_operational": 3, 00:30:58.362 "process": { 00:30:58.362 "type": "rebuild", 00:30:58.362 "target": "spare", 00:30:58.362 "progress": { 00:30:58.362 
"blocks": 24576, 00:30:58.362 "percent": 38 00:30:58.362 } 00:30:58.362 }, 00:30:58.362 "base_bdevs_list": [ 00:30:58.362 { 00:30:58.362 "name": "spare", 00:30:58.362 "uuid": "adc88d25-4fff-5e8c-a86a-0e4ddaa0676a", 00:30:58.362 "is_configured": true, 00:30:58.362 "data_offset": 2048, 00:30:58.362 "data_size": 63488 00:30:58.362 }, 00:30:58.362 { 00:30:58.362 "name": null, 00:30:58.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:58.362 "is_configured": false, 00:30:58.362 "data_offset": 2048, 00:30:58.362 "data_size": 63488 00:30:58.362 }, 00:30:58.362 { 00:30:58.362 "name": "BaseBdev3", 00:30:58.362 "uuid": "5cc43b20-b2ad-56be-ae03-2581bd4d197d", 00:30:58.362 "is_configured": true, 00:30:58.362 "data_offset": 2048, 00:30:58.362 "data_size": 63488 00:30:58.362 }, 00:30:58.362 { 00:30:58.362 "name": "BaseBdev4", 00:30:58.362 "uuid": "56038954-e7a4-58ee-89e5-7af3c8cd51be", 00:30:58.362 "is_configured": true, 00:30:58.362 "data_offset": 2048, 00:30:58.362 "data_size": 63488 00:30:58.362 } 00:30:58.362 ] 00:30:58.362 }' 00:30:58.362 04:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:58.362 04:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:58.362 04:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:58.362 04:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:58.362 04:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:30:58.621 [2024-07-23 04:26:07.329993] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:58.880 [2024-07-23 04:26:07.409588] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:58.880 [2024-07-23 
04:26:07.409651] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:58.880 [2024-07-23 04:26:07.409679] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:58.880 [2024-07-23 04:26:07.409692] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:58.880 04:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:58.880 04:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:58.880 04:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:58.880 04:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:58.880 04:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:58.880 04:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:58.880 04:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:58.880 04:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:58.880 04:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:58.880 04:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:58.880 04:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:58.880 04:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:59.139 04:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:59.139 "name": "raid_bdev1", 00:30:59.139 "uuid": 
"4ec49faf-e107-49a5-80d7-b60261ea46f7", 00:30:59.139 "strip_size_kb": 0, 00:30:59.139 "state": "online", 00:30:59.139 "raid_level": "raid1", 00:30:59.139 "superblock": true, 00:30:59.139 "num_base_bdevs": 4, 00:30:59.139 "num_base_bdevs_discovered": 2, 00:30:59.139 "num_base_bdevs_operational": 2, 00:30:59.139 "base_bdevs_list": [ 00:30:59.139 { 00:30:59.139 "name": null, 00:30:59.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:59.139 "is_configured": false, 00:30:59.139 "data_offset": 2048, 00:30:59.139 "data_size": 63488 00:30:59.139 }, 00:30:59.139 { 00:30:59.139 "name": null, 00:30:59.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:59.139 "is_configured": false, 00:30:59.139 "data_offset": 2048, 00:30:59.139 "data_size": 63488 00:30:59.139 }, 00:30:59.139 { 00:30:59.139 "name": "BaseBdev3", 00:30:59.139 "uuid": "5cc43b20-b2ad-56be-ae03-2581bd4d197d", 00:30:59.139 "is_configured": true, 00:30:59.139 "data_offset": 2048, 00:30:59.139 "data_size": 63488 00:30:59.139 }, 00:30:59.139 { 00:30:59.139 "name": "BaseBdev4", 00:30:59.140 "uuid": "56038954-e7a4-58ee-89e5-7af3c8cd51be", 00:30:59.140 "is_configured": true, 00:30:59.140 "data_offset": 2048, 00:30:59.140 "data_size": 63488 00:30:59.140 } 00:30:59.140 ] 00:30:59.140 }' 00:30:59.140 04:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:59.140 04:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:59.708 04:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:59.708 04:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:59.708 04:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:59.708 04:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:59.708 04:26:08 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:59.708 04:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:59.708 04:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:59.708 04:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:59.708 "name": "raid_bdev1", 00:30:59.708 "uuid": "4ec49faf-e107-49a5-80d7-b60261ea46f7", 00:30:59.708 "strip_size_kb": 0, 00:30:59.708 "state": "online", 00:30:59.708 "raid_level": "raid1", 00:30:59.708 "superblock": true, 00:30:59.708 "num_base_bdevs": 4, 00:30:59.708 "num_base_bdevs_discovered": 2, 00:30:59.708 "num_base_bdevs_operational": 2, 00:30:59.708 "base_bdevs_list": [ 00:30:59.708 { 00:30:59.708 "name": null, 00:30:59.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:59.708 "is_configured": false, 00:30:59.708 "data_offset": 2048, 00:30:59.708 "data_size": 63488 00:30:59.708 }, 00:30:59.708 { 00:30:59.708 "name": null, 00:30:59.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:59.708 "is_configured": false, 00:30:59.708 "data_offset": 2048, 00:30:59.708 "data_size": 63488 00:30:59.708 }, 00:30:59.708 { 00:30:59.708 "name": "BaseBdev3", 00:30:59.708 "uuid": "5cc43b20-b2ad-56be-ae03-2581bd4d197d", 00:30:59.708 "is_configured": true, 00:30:59.708 "data_offset": 2048, 00:30:59.708 "data_size": 63488 00:30:59.708 }, 00:30:59.708 { 00:30:59.708 "name": "BaseBdev4", 00:30:59.708 "uuid": "56038954-e7a4-58ee-89e5-7af3c8cd51be", 00:30:59.708 "is_configured": true, 00:30:59.708 "data_offset": 2048, 00:30:59.708 "data_size": 63488 00:30:59.708 } 00:30:59.708 ] 00:30:59.708 }' 00:30:59.708 04:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:59.967 04:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 
-- # [[ none == \n\o\n\e ]] 00:30:59.967 04:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:59.967 04:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:59.967 04:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:31:00.537 04:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:00.537 [2024-07-23 04:26:09.283649] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:31:00.537 [2024-07-23 04:26:09.283723] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:00.538 [2024-07-23 04:26:09.283754] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000045f80 00:31:00.538 [2024-07-23 04:26:09.283771] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:00.538 [2024-07-23 04:26:09.284390] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:00.538 [2024-07-23 04:26:09.284420] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:00.538 [2024-07-23 04:26:09.284532] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:31:00.538 [2024-07-23 04:26:09.284552] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:31:00.538 [2024-07-23 04:26:09.284568] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:31:00.538 BaseBdev1 00:31:00.538 04:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # 
sleep 1 00:31:01.922 04:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:01.922 04:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:01.922 04:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:01.922 04:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:01.922 04:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:01.922 04:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:01.922 04:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:01.922 04:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:01.922 04:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:01.922 04:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:01.922 04:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:01.922 04:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:01.922 04:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:01.922 "name": "raid_bdev1", 00:31:01.922 "uuid": "4ec49faf-e107-49a5-80d7-b60261ea46f7", 00:31:01.922 "strip_size_kb": 0, 00:31:01.922 "state": "online", 00:31:01.922 "raid_level": "raid1", 00:31:01.922 "superblock": true, 00:31:01.922 "num_base_bdevs": 4, 00:31:01.922 "num_base_bdevs_discovered": 2, 00:31:01.922 "num_base_bdevs_operational": 2, 00:31:01.922 "base_bdevs_list": [ 00:31:01.922 { 00:31:01.922 "name": 
null, 00:31:01.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:01.922 "is_configured": false, 00:31:01.922 "data_offset": 2048, 00:31:01.922 "data_size": 63488 00:31:01.922 }, 00:31:01.922 { 00:31:01.922 "name": null, 00:31:01.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:01.922 "is_configured": false, 00:31:01.922 "data_offset": 2048, 00:31:01.922 "data_size": 63488 00:31:01.922 }, 00:31:01.922 { 00:31:01.922 "name": "BaseBdev3", 00:31:01.922 "uuid": "5cc43b20-b2ad-56be-ae03-2581bd4d197d", 00:31:01.922 "is_configured": true, 00:31:01.922 "data_offset": 2048, 00:31:01.922 "data_size": 63488 00:31:01.922 }, 00:31:01.922 { 00:31:01.922 "name": "BaseBdev4", 00:31:01.922 "uuid": "56038954-e7a4-58ee-89e5-7af3c8cd51be", 00:31:01.922 "is_configured": true, 00:31:01.922 "data_offset": 2048, 00:31:01.922 "data_size": 63488 00:31:01.922 } 00:31:01.922 ] 00:31:01.922 }' 00:31:01.922 04:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:01.922 04:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:02.859 04:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:02.859 04:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:02.859 04:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:02.859 04:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:02.859 04:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:02.859 04:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:02.859 04:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:31:02.859 04:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:02.859 "name": "raid_bdev1", 00:31:02.859 "uuid": "4ec49faf-e107-49a5-80d7-b60261ea46f7", 00:31:02.859 "strip_size_kb": 0, 00:31:02.859 "state": "online", 00:31:02.859 "raid_level": "raid1", 00:31:02.859 "superblock": true, 00:31:02.859 "num_base_bdevs": 4, 00:31:02.859 "num_base_bdevs_discovered": 2, 00:31:02.859 "num_base_bdevs_operational": 2, 00:31:02.859 "base_bdevs_list": [ 00:31:02.859 { 00:31:02.859 "name": null, 00:31:02.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:02.859 "is_configured": false, 00:31:02.859 "data_offset": 2048, 00:31:02.859 "data_size": 63488 00:31:02.859 }, 00:31:02.859 { 00:31:02.859 "name": null, 00:31:02.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:02.859 "is_configured": false, 00:31:02.859 "data_offset": 2048, 00:31:02.859 "data_size": 63488 00:31:02.859 }, 00:31:02.859 { 00:31:02.859 "name": "BaseBdev3", 00:31:02.859 "uuid": "5cc43b20-b2ad-56be-ae03-2581bd4d197d", 00:31:02.859 "is_configured": true, 00:31:02.859 "data_offset": 2048, 00:31:02.859 "data_size": 63488 00:31:02.859 }, 00:31:02.859 { 00:31:02.859 "name": "BaseBdev4", 00:31:02.859 "uuid": "56038954-e7a4-58ee-89e5-7af3c8cd51be", 00:31:02.859 "is_configured": true, 00:31:02.859 "data_offset": 2048, 00:31:02.859 "data_size": 63488 00:31:02.859 } 00:31:02.859 ] 00:31:02.859 }' 00:31:02.859 04:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:03.118 04:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:03.118 04:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:03.118 04:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:03.118 04:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # NOT 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:03.118 04:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@648 -- # local es=0 00:31:03.118 04:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:03.118 04:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:31:03.119 04:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:03.119 04:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:31:03.119 04:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:03.119 04:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:31:03.119 04:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:03.119 04:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:31:03.119 04:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:31:03.119 04:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:03.119 [2024-07-23 04:26:11.895154] bdev_raid.c:3288:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:31:03.119 [2024-07-23 04:26:11.895342] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:31:03.119 [2024-07-23 04:26:11.895364] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:31:03.119 request: 00:31:03.119 { 00:31:03.119 "base_bdev": "BaseBdev1", 00:31:03.119 "raid_bdev": "raid_bdev1", 00:31:03.119 "method": "bdev_raid_add_base_bdev", 00:31:03.119 "req_id": 1 00:31:03.119 } 00:31:03.119 Got JSON-RPC error response 00:31:03.119 response: 00:31:03.119 { 00:31:03.119 "code": -22, 00:31:03.119 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:31:03.119 } 00:31:03.378 04:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # es=1 00:31:03.378 04:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:03.378 04:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:03.378 04:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:03.378 04:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # sleep 1 00:31:04.314 04:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:04.314 04:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:04.314 04:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:04.314 04:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:04.314 04:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:04.314 04:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 
00:31:04.314 04:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:04.314 04:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:04.314 04:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:04.314 04:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:04.314 04:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:04.314 04:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:04.573 04:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:04.574 "name": "raid_bdev1", 00:31:04.574 "uuid": "4ec49faf-e107-49a5-80d7-b60261ea46f7", 00:31:04.574 "strip_size_kb": 0, 00:31:04.574 "state": "online", 00:31:04.574 "raid_level": "raid1", 00:31:04.574 "superblock": true, 00:31:04.574 "num_base_bdevs": 4, 00:31:04.574 "num_base_bdevs_discovered": 2, 00:31:04.574 "num_base_bdevs_operational": 2, 00:31:04.574 "base_bdevs_list": [ 00:31:04.574 { 00:31:04.574 "name": null, 00:31:04.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:04.574 "is_configured": false, 00:31:04.574 "data_offset": 2048, 00:31:04.574 "data_size": 63488 00:31:04.574 }, 00:31:04.574 { 00:31:04.574 "name": null, 00:31:04.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:04.574 "is_configured": false, 00:31:04.574 "data_offset": 2048, 00:31:04.574 "data_size": 63488 00:31:04.574 }, 00:31:04.574 { 00:31:04.574 "name": "BaseBdev3", 00:31:04.574 "uuid": "5cc43b20-b2ad-56be-ae03-2581bd4d197d", 00:31:04.574 "is_configured": true, 00:31:04.574 "data_offset": 2048, 00:31:04.574 "data_size": 63488 00:31:04.574 }, 00:31:04.574 { 00:31:04.574 "name": "BaseBdev4", 00:31:04.574 "uuid": 
"56038954-e7a4-58ee-89e5-7af3c8cd51be", 00:31:04.574 "is_configured": true, 00:31:04.574 "data_offset": 2048, 00:31:04.574 "data_size": 63488 00:31:04.574 } 00:31:04.574 ] 00:31:04.574 }' 00:31:04.574 04:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:04.574 04:26:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:05.141 04:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:05.142 04:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:05.142 04:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:05.142 04:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:05.142 04:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:05.142 04:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:05.142 04:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:05.142 04:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:05.142 "name": "raid_bdev1", 00:31:05.142 "uuid": "4ec49faf-e107-49a5-80d7-b60261ea46f7", 00:31:05.142 "strip_size_kb": 0, 00:31:05.142 "state": "online", 00:31:05.142 "raid_level": "raid1", 00:31:05.142 "superblock": true, 00:31:05.142 "num_base_bdevs": 4, 00:31:05.142 "num_base_bdevs_discovered": 2, 00:31:05.142 "num_base_bdevs_operational": 2, 00:31:05.142 "base_bdevs_list": [ 00:31:05.142 { 00:31:05.142 "name": null, 00:31:05.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:05.142 "is_configured": false, 00:31:05.142 "data_offset": 2048, 00:31:05.142 "data_size": 63488 
00:31:05.142 }, 00:31:05.142 { 00:31:05.142 "name": null, 00:31:05.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:05.142 "is_configured": false, 00:31:05.142 "data_offset": 2048, 00:31:05.142 "data_size": 63488 00:31:05.142 }, 00:31:05.142 { 00:31:05.142 "name": "BaseBdev3", 00:31:05.142 "uuid": "5cc43b20-b2ad-56be-ae03-2581bd4d197d", 00:31:05.142 "is_configured": true, 00:31:05.142 "data_offset": 2048, 00:31:05.142 "data_size": 63488 00:31:05.142 }, 00:31:05.142 { 00:31:05.142 "name": "BaseBdev4", 00:31:05.142 "uuid": "56038954-e7a4-58ee-89e5-7af3c8cd51be", 00:31:05.142 "is_configured": true, 00:31:05.142 "data_offset": 2048, 00:31:05.142 "data_size": 63488 00:31:05.142 } 00:31:05.142 ] 00:31:05.142 }' 00:31:05.142 04:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:05.401 04:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:05.401 04:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:05.401 04:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:05.401 04:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # killprocess 2796331 00:31:05.401 04:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@948 -- # '[' -z 2796331 ']' 00:31:05.401 04:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # kill -0 2796331 00:31:05.401 04:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # uname 00:31:05.401 04:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:05.401 04:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2796331 00:31:05.401 04:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:05.401 04:26:14 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:05.401 04:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2796331' 00:31:05.401 killing process with pid 2796331 00:31:05.401 04:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@967 -- # kill 2796331 00:31:05.401 Received shutdown signal, test time was about 28.244964 seconds 00:31:05.401 00:31:05.401 Latency(us) 00:31:05.401 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:05.401 =================================================================================================================== 00:31:05.401 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:05.401 04:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # wait 2796331 00:31:05.401 [2024-07-23 04:26:14.061095] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:05.401 [2024-07-23 04:26:14.061255] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:05.401 [2024-07-23 04:26:14.061351] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:05.401 [2024-07-23 04:26:14.061369] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000045380 name raid_bdev1, state offline 00:31:05.972 [2024-07-23 04:26:14.552873] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:07.880 04:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # return 0 00:31:07.880 00:31:07.880 real 0m36.057s 00:31:07.880 user 0m55.130s 00:31:07.880 sys 0m5.294s 00:31:07.880 04:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:07.880 04:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:07.880 ************************************ 00:31:07.880 END TEST 
raid_rebuild_test_sb_io 00:31:07.880 ************************************ 00:31:07.880 04:26:16 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:31:07.880 04:26:16 bdev_raid -- bdev/bdev_raid.sh@884 -- # '[' n == y ']' 00:31:07.880 04:26:16 bdev_raid -- bdev/bdev_raid.sh@896 -- # base_blocklen=4096 00:31:07.880 04:26:16 bdev_raid -- bdev/bdev_raid.sh@898 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:31:07.880 04:26:16 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:31:07.880 04:26:16 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:07.880 04:26:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:07.880 ************************************ 00:31:07.881 START TEST raid_state_function_test_sb_4k 00:31:07.881 ************************************ 00:31:07.881 04:26:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:31:07.881 04:26:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:31:07.881 04:26:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:31:07.881 04:26:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:31:07.881 04:26:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:31:07.881 04:26:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:31:07.881 04:26:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:31:07.881 04:26:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:31:07.881 04:26:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:31:07.881 04:26:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= 
num_base_bdevs )) 00:31:07.881 04:26:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:31:07.881 04:26:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:31:07.881 04:26:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:31:07.881 04:26:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:31:07.881 04:26:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:31:07.881 04:26:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:31:07.881 04:26:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # local strip_size 00:31:07.881 04:26:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:31:07.881 04:26:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:31:07.881 04:26:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:31:07.881 04:26:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:31:07.881 04:26:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:31:07.881 04:26:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:31:07.881 04:26:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # raid_pid=2802636 00:31:07.881 04:26:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2802636' 00:31:07.881 Process raid pid: 2802636 00:31:07.881 04:26:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@246 -- # waitforlisten 2802636 /var/tmp/spdk-raid.sock 00:31:07.881 04:26:16 bdev_raid.raid_state_function_test_sb_4k 
-- common/autotest_common.sh@829 -- # '[' -z 2802636 ']' 00:31:07.881 04:26:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:31:07.881 04:26:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:07.881 04:26:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:07.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:31:07.881 04:26:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:07.881 04:26:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:07.881 04:26:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:31:07.881 [2024-07-23 04:26:16.558201] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:31:07.881 [2024-07-23 04:26:16.558320] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:08.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.143 EAL: Requested device 0000:3d:01.0 cannot be used 00:31:08.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.143 EAL: Requested device 0000:3d:01.1 cannot be used 00:31:08.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.143 EAL: Requested device 0000:3d:01.2 cannot be used 00:31:08.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.143 EAL: Requested device 0000:3d:01.3 cannot be used 00:31:08.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.143 EAL: Requested device 0000:3d:01.4 cannot be used 00:31:08.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.143 EAL: Requested device 0000:3d:01.5 cannot be used 00:31:08.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.143 EAL: Requested device 0000:3d:01.6 cannot be used 00:31:08.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.143 EAL: Requested device 0000:3d:01.7 cannot be used 00:31:08.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.143 EAL: Requested device 0000:3d:02.0 cannot be used 00:31:08.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.143 EAL: Requested device 0000:3d:02.1 cannot be used 00:31:08.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.143 EAL: Requested device 0000:3d:02.2 cannot be used 00:31:08.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.143 EAL: Requested device 0000:3d:02.3 cannot be used 00:31:08.143 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.143 EAL: Requested device 0000:3d:02.4 cannot be used 00:31:08.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.143 EAL: Requested device 0000:3d:02.5 cannot be used 00:31:08.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.143 EAL: Requested device 0000:3d:02.6 cannot be used 00:31:08.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.143 EAL: Requested device 0000:3d:02.7 cannot be used 00:31:08.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.143 EAL: Requested device 0000:3f:01.0 cannot be used 00:31:08.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.143 EAL: Requested device 0000:3f:01.1 cannot be used 00:31:08.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.143 EAL: Requested device 0000:3f:01.2 cannot be used 00:31:08.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.143 EAL: Requested device 0000:3f:01.3 cannot be used 00:31:08.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.143 EAL: Requested device 0000:3f:01.4 cannot be used 00:31:08.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.143 EAL: Requested device 0000:3f:01.5 cannot be used 00:31:08.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.143 EAL: Requested device 0000:3f:01.6 cannot be used 00:31:08.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.143 EAL: Requested device 0000:3f:01.7 cannot be used 00:31:08.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.143 EAL: Requested device 0000:3f:02.0 cannot be used 00:31:08.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.143 EAL: Requested device 0000:3f:02.1 cannot be used 00:31:08.143 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.143 EAL: Requested device 0000:3f:02.2 cannot be used 00:31:08.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.143 EAL: Requested device 0000:3f:02.3 cannot be used 00:31:08.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.143 EAL: Requested device 0000:3f:02.4 cannot be used 00:31:08.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.143 EAL: Requested device 0000:3f:02.5 cannot be used 00:31:08.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.143 EAL: Requested device 0000:3f:02.6 cannot be used 00:31:08.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:08.143 EAL: Requested device 0000:3f:02.7 cannot be used 00:31:08.143 [2024-07-23 04:26:16.784749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:08.401 [2024-07-23 04:26:17.049831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:08.660 [2024-07-23 04:26:17.359718] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:08.660 [2024-07-23 04:26:17.359755] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:08.920 04:26:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:08.920 04:26:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@862 -- # return 0 00:31:08.920 04:26:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:31:09.179 [2024-07-23 04:26:17.724342] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:09.179 [2024-07-23 04:26:17.724395] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 
doesn't exist now 00:31:09.179 [2024-07-23 04:26:17.724410] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:09.179 [2024-07-23 04:26:17.724426] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:09.179 04:26:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:31:09.179 04:26:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:09.179 04:26:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:09.179 04:26:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:09.179 04:26:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:09.179 04:26:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:09.179 04:26:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:09.179 04:26:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:09.179 04:26:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:09.179 04:26:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:09.179 04:26:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:09.179 04:26:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:09.438 04:26:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:09.438 "name": "Existed_Raid", 
00:31:09.438 "uuid": "98909463-d368-459c-80a3-094420449d9d", 00:31:09.438 "strip_size_kb": 0, 00:31:09.438 "state": "configuring", 00:31:09.438 "raid_level": "raid1", 00:31:09.438 "superblock": true, 00:31:09.438 "num_base_bdevs": 2, 00:31:09.438 "num_base_bdevs_discovered": 0, 00:31:09.438 "num_base_bdevs_operational": 2, 00:31:09.438 "base_bdevs_list": [ 00:31:09.438 { 00:31:09.438 "name": "BaseBdev1", 00:31:09.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:09.438 "is_configured": false, 00:31:09.438 "data_offset": 0, 00:31:09.438 "data_size": 0 00:31:09.438 }, 00:31:09.438 { 00:31:09.438 "name": "BaseBdev2", 00:31:09.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:09.438 "is_configured": false, 00:31:09.438 "data_offset": 0, 00:31:09.438 "data_size": 0 00:31:09.438 } 00:31:09.438 ] 00:31:09.438 }' 00:31:09.438 04:26:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:09.438 04:26:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:10.006 04:26:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:31:10.006 [2024-07-23 04:26:18.746930] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:10.006 [2024-07-23 04:26:18.746971] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:31:10.006 04:26:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:31:10.265 [2024-07-23 04:26:18.975598] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:10.265 [2024-07-23 04:26:18.975644] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:10.265 [2024-07-23 04:26:18.975658] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:10.265 [2024-07-23 04:26:18.975675] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:10.265 04:26:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1 00:31:10.524 [2024-07-23 04:26:19.258654] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:10.524 BaseBdev1 00:31:10.524 04:26:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:31:10.524 04:26:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:31:10.524 04:26:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:31:10.524 04:26:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local i 00:31:10.524 04:26:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:31:10.524 04:26:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:31:10.524 04:26:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:10.783 04:26:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:31:11.042 [ 00:31:11.042 { 00:31:11.042 "name": "BaseBdev1", 00:31:11.042 "aliases": [ 00:31:11.042 "90518fa5-f57c-40ca-9bf0-20a019b837d7" 
00:31:11.042 ], 00:31:11.042 "product_name": "Malloc disk", 00:31:11.042 "block_size": 4096, 00:31:11.042 "num_blocks": 8192, 00:31:11.042 "uuid": "90518fa5-f57c-40ca-9bf0-20a019b837d7", 00:31:11.042 "assigned_rate_limits": { 00:31:11.042 "rw_ios_per_sec": 0, 00:31:11.042 "rw_mbytes_per_sec": 0, 00:31:11.042 "r_mbytes_per_sec": 0, 00:31:11.042 "w_mbytes_per_sec": 0 00:31:11.042 }, 00:31:11.042 "claimed": true, 00:31:11.042 "claim_type": "exclusive_write", 00:31:11.042 "zoned": false, 00:31:11.042 "supported_io_types": { 00:31:11.042 "read": true, 00:31:11.042 "write": true, 00:31:11.042 "unmap": true, 00:31:11.042 "flush": true, 00:31:11.042 "reset": true, 00:31:11.042 "nvme_admin": false, 00:31:11.042 "nvme_io": false, 00:31:11.042 "nvme_io_md": false, 00:31:11.042 "write_zeroes": true, 00:31:11.042 "zcopy": true, 00:31:11.042 "get_zone_info": false, 00:31:11.042 "zone_management": false, 00:31:11.042 "zone_append": false, 00:31:11.042 "compare": false, 00:31:11.042 "compare_and_write": false, 00:31:11.042 "abort": true, 00:31:11.042 "seek_hole": false, 00:31:11.042 "seek_data": false, 00:31:11.042 "copy": true, 00:31:11.042 "nvme_iov_md": false 00:31:11.042 }, 00:31:11.042 "memory_domains": [ 00:31:11.042 { 00:31:11.042 "dma_device_id": "system", 00:31:11.042 "dma_device_type": 1 00:31:11.042 }, 00:31:11.042 { 00:31:11.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:11.042 "dma_device_type": 2 00:31:11.042 } 00:31:11.042 ], 00:31:11.042 "driver_specific": {} 00:31:11.042 } 00:31:11.042 ] 00:31:11.042 04:26:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # return 0 00:31:11.042 04:26:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:31:11.042 04:26:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:11.042 04:26:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # 
local expected_state=configuring 00:31:11.042 04:26:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:11.042 04:26:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:11.042 04:26:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:11.042 04:26:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:11.042 04:26:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:11.042 04:26:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:11.042 04:26:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:11.042 04:26:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:11.042 04:26:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:11.301 04:26:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:11.301 "name": "Existed_Raid", 00:31:11.301 "uuid": "c3766a54-41cb-48a5-ae9a-64a31877706c", 00:31:11.301 "strip_size_kb": 0, 00:31:11.301 "state": "configuring", 00:31:11.301 "raid_level": "raid1", 00:31:11.301 "superblock": true, 00:31:11.301 "num_base_bdevs": 2, 00:31:11.301 "num_base_bdevs_discovered": 1, 00:31:11.301 "num_base_bdevs_operational": 2, 00:31:11.301 "base_bdevs_list": [ 00:31:11.301 { 00:31:11.301 "name": "BaseBdev1", 00:31:11.301 "uuid": "90518fa5-f57c-40ca-9bf0-20a019b837d7", 00:31:11.301 "is_configured": true, 00:31:11.301 "data_offset": 256, 00:31:11.301 "data_size": 7936 00:31:11.301 }, 00:31:11.301 { 00:31:11.301 "name": "BaseBdev2", 00:31:11.301 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:31:11.301 "is_configured": false, 00:31:11.301 "data_offset": 0, 00:31:11.301 "data_size": 0 00:31:11.301 } 00:31:11.301 ] 00:31:11.301 }' 00:31:11.301 04:26:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:11.301 04:26:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:11.870 04:26:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:31:12.129 [2024-07-23 04:26:20.722662] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:12.129 [2024-07-23 04:26:20.722717] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:31:12.129 04:26:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:31:12.387 [2024-07-23 04:26:20.951359] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:12.387 [2024-07-23 04:26:20.953646] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:12.387 [2024-07-23 04:26:20.953689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:12.387 04:26:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:31:12.387 04:26:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:31:12.387 04:26:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:31:12.387 04:26:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:31:12.387 04:26:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:12.387 04:26:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:12.388 04:26:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:12.388 04:26:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:12.388 04:26:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:12.388 04:26:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:12.388 04:26:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:12.388 04:26:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:12.388 04:26:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:12.388 04:26:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:12.645 04:26:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:12.645 "name": "Existed_Raid", 00:31:12.645 "uuid": "b5b95779-ecd5-4daf-811a-5a8cd1a0d88f", 00:31:12.645 "strip_size_kb": 0, 00:31:12.645 "state": "configuring", 00:31:12.645 "raid_level": "raid1", 00:31:12.645 "superblock": true, 00:31:12.645 "num_base_bdevs": 2, 00:31:12.645 "num_base_bdevs_discovered": 1, 00:31:12.645 "num_base_bdevs_operational": 2, 00:31:12.645 "base_bdevs_list": [ 00:31:12.645 { 00:31:12.645 "name": "BaseBdev1", 00:31:12.645 "uuid": "90518fa5-f57c-40ca-9bf0-20a019b837d7", 00:31:12.645 "is_configured": true, 00:31:12.645 "data_offset": 256, 
00:31:12.645 "data_size": 7936 00:31:12.645 }, 00:31:12.645 { 00:31:12.645 "name": "BaseBdev2", 00:31:12.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:12.645 "is_configured": false, 00:31:12.645 "data_offset": 0, 00:31:12.645 "data_size": 0 00:31:12.645 } 00:31:12.645 ] 00:31:12.645 }' 00:31:12.645 04:26:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:12.645 04:26:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:13.212 04:26:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2 00:31:13.472 [2024-07-23 04:26:22.021690] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:13.472 [2024-07-23 04:26:22.021967] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:31:13.472 [2024-07-23 04:26:22.021990] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:13.472 [2024-07-23 04:26:22.022318] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:31:13.472 [2024-07-23 04:26:22.022536] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:31:13.472 [2024-07-23 04:26:22.022556] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:31:13.472 BaseBdev2 00:31:13.472 [2024-07-23 04:26:22.022743] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:13.472 04:26:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:31:13.472 04:26:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:31:13.472 04:26:22 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@898 -- # local bdev_timeout= 00:31:13.472 04:26:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local i 00:31:13.472 04:26:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:31:13.472 04:26:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:31:13.472 04:26:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:13.731 04:26:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:31:13.731 [ 00:31:13.731 { 00:31:13.731 "name": "BaseBdev2", 00:31:13.731 "aliases": [ 00:31:13.731 "beb47e43-61fa-4505-85fc-b01d4f2e33b0" 00:31:13.731 ], 00:31:13.731 "product_name": "Malloc disk", 00:31:13.731 "block_size": 4096, 00:31:13.731 "num_blocks": 8192, 00:31:13.731 "uuid": "beb47e43-61fa-4505-85fc-b01d4f2e33b0", 00:31:13.731 "assigned_rate_limits": { 00:31:13.731 "rw_ios_per_sec": 0, 00:31:13.731 "rw_mbytes_per_sec": 0, 00:31:13.731 "r_mbytes_per_sec": 0, 00:31:13.731 "w_mbytes_per_sec": 0 00:31:13.731 }, 00:31:13.731 "claimed": true, 00:31:13.731 "claim_type": "exclusive_write", 00:31:13.731 "zoned": false, 00:31:13.731 "supported_io_types": { 00:31:13.731 "read": true, 00:31:13.731 "write": true, 00:31:13.731 "unmap": true, 00:31:13.731 "flush": true, 00:31:13.731 "reset": true, 00:31:13.731 "nvme_admin": false, 00:31:13.731 "nvme_io": false, 00:31:13.731 "nvme_io_md": false, 00:31:13.731 "write_zeroes": true, 00:31:13.731 "zcopy": true, 00:31:13.731 "get_zone_info": false, 00:31:13.731 "zone_management": false, 00:31:13.731 "zone_append": false, 00:31:13.731 "compare": false, 00:31:13.731 "compare_and_write": false, 
00:31:13.731 "abort": true, 00:31:13.731 "seek_hole": false, 00:31:13.731 "seek_data": false, 00:31:13.731 "copy": true, 00:31:13.731 "nvme_iov_md": false 00:31:13.731 }, 00:31:13.731 "memory_domains": [ 00:31:13.731 { 00:31:13.731 "dma_device_id": "system", 00:31:13.732 "dma_device_type": 1 00:31:13.732 }, 00:31:13.732 { 00:31:13.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:13.732 "dma_device_type": 2 00:31:13.732 } 00:31:13.732 ], 00:31:13.732 "driver_specific": {} 00:31:13.732 } 00:31:13.732 ] 00:31:13.732 04:26:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # return 0 00:31:13.732 04:26:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:31:13.732 04:26:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:31:13.732 04:26:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:31:13.732 04:26:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:13.732 04:26:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:13.732 04:26:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:13.732 04:26:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:13.732 04:26:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:13.732 04:26:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:13.732 04:26:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:13.732 04:26:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:13.732 04:26:22 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:13.732 04:26:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:13.732 04:26:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:13.991 04:26:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:13.991 "name": "Existed_Raid", 00:31:13.991 "uuid": "b5b95779-ecd5-4daf-811a-5a8cd1a0d88f", 00:31:13.991 "strip_size_kb": 0, 00:31:13.991 "state": "online", 00:31:13.991 "raid_level": "raid1", 00:31:13.991 "superblock": true, 00:31:13.991 "num_base_bdevs": 2, 00:31:13.991 "num_base_bdevs_discovered": 2, 00:31:13.991 "num_base_bdevs_operational": 2, 00:31:13.991 "base_bdevs_list": [ 00:31:13.991 { 00:31:13.991 "name": "BaseBdev1", 00:31:13.991 "uuid": "90518fa5-f57c-40ca-9bf0-20a019b837d7", 00:31:13.991 "is_configured": true, 00:31:13.991 "data_offset": 256, 00:31:13.991 "data_size": 7936 00:31:13.991 }, 00:31:13.991 { 00:31:13.991 "name": "BaseBdev2", 00:31:13.991 "uuid": "beb47e43-61fa-4505-85fc-b01d4f2e33b0", 00:31:13.991 "is_configured": true, 00:31:13.991 "data_offset": 256, 00:31:13.991 "data_size": 7936 00:31:13.991 } 00:31:13.991 ] 00:31:13.991 }' 00:31:13.991 04:26:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:13.991 04:26:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:14.557 04:26:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:31:14.557 04:26:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:31:14.557 04:26:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@195 -- # local 
raid_bdev_info 00:31:14.557 04:26:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:31:14.557 04:26:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:31:14.557 04:26:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # local name 00:31:14.557 04:26:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:31:14.557 04:26:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:31:14.816 [2024-07-23 04:26:23.498053] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:14.816 04:26:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:31:14.816 "name": "Existed_Raid", 00:31:14.816 "aliases": [ 00:31:14.816 "b5b95779-ecd5-4daf-811a-5a8cd1a0d88f" 00:31:14.816 ], 00:31:14.816 "product_name": "Raid Volume", 00:31:14.816 "block_size": 4096, 00:31:14.816 "num_blocks": 7936, 00:31:14.816 "uuid": "b5b95779-ecd5-4daf-811a-5a8cd1a0d88f", 00:31:14.816 "assigned_rate_limits": { 00:31:14.816 "rw_ios_per_sec": 0, 00:31:14.816 "rw_mbytes_per_sec": 0, 00:31:14.816 "r_mbytes_per_sec": 0, 00:31:14.816 "w_mbytes_per_sec": 0 00:31:14.816 }, 00:31:14.816 "claimed": false, 00:31:14.816 "zoned": false, 00:31:14.816 "supported_io_types": { 00:31:14.816 "read": true, 00:31:14.816 "write": true, 00:31:14.816 "unmap": false, 00:31:14.816 "flush": false, 00:31:14.816 "reset": true, 00:31:14.816 "nvme_admin": false, 00:31:14.816 "nvme_io": false, 00:31:14.816 "nvme_io_md": false, 00:31:14.816 "write_zeroes": true, 00:31:14.816 "zcopy": false, 00:31:14.816 "get_zone_info": false, 00:31:14.816 "zone_management": false, 00:31:14.816 "zone_append": false, 00:31:14.816 "compare": false, 00:31:14.816 "compare_and_write": false, 00:31:14.816 
"abort": false, 00:31:14.816 "seek_hole": false, 00:31:14.816 "seek_data": false, 00:31:14.816 "copy": false, 00:31:14.816 "nvme_iov_md": false 00:31:14.816 }, 00:31:14.816 "memory_domains": [ 00:31:14.816 { 00:31:14.816 "dma_device_id": "system", 00:31:14.816 "dma_device_type": 1 00:31:14.816 }, 00:31:14.816 { 00:31:14.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:14.816 "dma_device_type": 2 00:31:14.816 }, 00:31:14.816 { 00:31:14.816 "dma_device_id": "system", 00:31:14.816 "dma_device_type": 1 00:31:14.816 }, 00:31:14.816 { 00:31:14.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:14.816 "dma_device_type": 2 00:31:14.816 } 00:31:14.816 ], 00:31:14.816 "driver_specific": { 00:31:14.816 "raid": { 00:31:14.816 "uuid": "b5b95779-ecd5-4daf-811a-5a8cd1a0d88f", 00:31:14.816 "strip_size_kb": 0, 00:31:14.816 "state": "online", 00:31:14.816 "raid_level": "raid1", 00:31:14.816 "superblock": true, 00:31:14.816 "num_base_bdevs": 2, 00:31:14.816 "num_base_bdevs_discovered": 2, 00:31:14.816 "num_base_bdevs_operational": 2, 00:31:14.816 "base_bdevs_list": [ 00:31:14.816 { 00:31:14.816 "name": "BaseBdev1", 00:31:14.816 "uuid": "90518fa5-f57c-40ca-9bf0-20a019b837d7", 00:31:14.816 "is_configured": true, 00:31:14.816 "data_offset": 256, 00:31:14.816 "data_size": 7936 00:31:14.816 }, 00:31:14.816 { 00:31:14.816 "name": "BaseBdev2", 00:31:14.816 "uuid": "beb47e43-61fa-4505-85fc-b01d4f2e33b0", 00:31:14.816 "is_configured": true, 00:31:14.816 "data_offset": 256, 00:31:14.816 "data_size": 7936 00:31:14.816 } 00:31:14.816 ] 00:31:14.816 } 00:31:14.816 } 00:31:14.816 }' 00:31:14.816 04:26:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:14.816 04:26:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:31:14.816 BaseBdev2' 00:31:14.816 04:26:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # 
for name in $base_bdev_names 00:31:14.816 04:26:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:31:14.816 04:26:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:15.075 04:26:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:15.075 "name": "BaseBdev1", 00:31:15.075 "aliases": [ 00:31:15.075 "90518fa5-f57c-40ca-9bf0-20a019b837d7" 00:31:15.075 ], 00:31:15.075 "product_name": "Malloc disk", 00:31:15.075 "block_size": 4096, 00:31:15.075 "num_blocks": 8192, 00:31:15.075 "uuid": "90518fa5-f57c-40ca-9bf0-20a019b837d7", 00:31:15.075 "assigned_rate_limits": { 00:31:15.075 "rw_ios_per_sec": 0, 00:31:15.075 "rw_mbytes_per_sec": 0, 00:31:15.075 "r_mbytes_per_sec": 0, 00:31:15.075 "w_mbytes_per_sec": 0 00:31:15.075 }, 00:31:15.075 "claimed": true, 00:31:15.075 "claim_type": "exclusive_write", 00:31:15.075 "zoned": false, 00:31:15.075 "supported_io_types": { 00:31:15.075 "read": true, 00:31:15.075 "write": true, 00:31:15.075 "unmap": true, 00:31:15.075 "flush": true, 00:31:15.075 "reset": true, 00:31:15.075 "nvme_admin": false, 00:31:15.075 "nvme_io": false, 00:31:15.075 "nvme_io_md": false, 00:31:15.075 "write_zeroes": true, 00:31:15.075 "zcopy": true, 00:31:15.075 "get_zone_info": false, 00:31:15.075 "zone_management": false, 00:31:15.075 "zone_append": false, 00:31:15.075 "compare": false, 00:31:15.075 "compare_and_write": false, 00:31:15.075 "abort": true, 00:31:15.075 "seek_hole": false, 00:31:15.075 "seek_data": false, 00:31:15.075 "copy": true, 00:31:15.075 "nvme_iov_md": false 00:31:15.075 }, 00:31:15.075 "memory_domains": [ 00:31:15.075 { 00:31:15.075 "dma_device_id": "system", 00:31:15.075 "dma_device_type": 1 00:31:15.075 }, 00:31:15.075 { 00:31:15.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:15.075 "dma_device_type": 2 
00:31:15.075 } 00:31:15.075 ], 00:31:15.075 "driver_specific": {} 00:31:15.075 }' 00:31:15.075 04:26:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:15.075 04:26:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:15.333 04:26:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:31:15.333 04:26:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:15.333 04:26:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:15.333 04:26:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:15.333 04:26:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:15.333 04:26:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:15.333 04:26:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:15.333 04:26:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:15.333 04:26:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:15.333 04:26:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:15.333 04:26:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:15.333 04:26:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:31:15.333 04:26:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:15.592 04:26:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:15.592 "name": "BaseBdev2", 00:31:15.592 "aliases": [ 
00:31:15.592 "beb47e43-61fa-4505-85fc-b01d4f2e33b0" 00:31:15.592 ], 00:31:15.592 "product_name": "Malloc disk", 00:31:15.592 "block_size": 4096, 00:31:15.592 "num_blocks": 8192, 00:31:15.592 "uuid": "beb47e43-61fa-4505-85fc-b01d4f2e33b0", 00:31:15.592 "assigned_rate_limits": { 00:31:15.592 "rw_ios_per_sec": 0, 00:31:15.592 "rw_mbytes_per_sec": 0, 00:31:15.592 "r_mbytes_per_sec": 0, 00:31:15.592 "w_mbytes_per_sec": 0 00:31:15.592 }, 00:31:15.592 "claimed": true, 00:31:15.592 "claim_type": "exclusive_write", 00:31:15.592 "zoned": false, 00:31:15.592 "supported_io_types": { 00:31:15.592 "read": true, 00:31:15.592 "write": true, 00:31:15.592 "unmap": true, 00:31:15.592 "flush": true, 00:31:15.592 "reset": true, 00:31:15.592 "nvme_admin": false, 00:31:15.592 "nvme_io": false, 00:31:15.592 "nvme_io_md": false, 00:31:15.592 "write_zeroes": true, 00:31:15.592 "zcopy": true, 00:31:15.592 "get_zone_info": false, 00:31:15.592 "zone_management": false, 00:31:15.592 "zone_append": false, 00:31:15.592 "compare": false, 00:31:15.592 "compare_and_write": false, 00:31:15.592 "abort": true, 00:31:15.592 "seek_hole": false, 00:31:15.592 "seek_data": false, 00:31:15.592 "copy": true, 00:31:15.592 "nvme_iov_md": false 00:31:15.592 }, 00:31:15.592 "memory_domains": [ 00:31:15.592 { 00:31:15.592 "dma_device_id": "system", 00:31:15.592 "dma_device_type": 1 00:31:15.592 }, 00:31:15.592 { 00:31:15.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:15.592 "dma_device_type": 2 00:31:15.592 } 00:31:15.592 ], 00:31:15.592 "driver_specific": {} 00:31:15.592 }' 00:31:15.592 04:26:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:15.852 04:26:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:15.852 04:26:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:31:15.852 04:26:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 
00:31:15.852 04:26:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:15.852 04:26:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:15.852 04:26:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:15.852 04:26:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:15.852 04:26:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:15.852 04:26:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:16.138 04:26:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:16.138 04:26:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:16.138 04:26:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:31:16.138 [2024-07-23 04:26:24.897579] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:16.397 04:26:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@275 -- # local expected_state 00:31:16.397 04:26:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:31:16.397 04:26:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:31:16.397 04:26:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:31:16.397 04:26:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:31:16.397 04:26:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:31:16.397 04:26:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:31:16.397 04:26:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:16.397 04:26:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:16.397 04:26:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:16.397 04:26:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:16.397 04:26:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:16.397 04:26:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:16.397 04:26:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:16.397 04:26:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:16.397 04:26:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:16.397 04:26:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:16.656 04:26:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:16.656 "name": "Existed_Raid", 00:31:16.656 "uuid": "b5b95779-ecd5-4daf-811a-5a8cd1a0d88f", 00:31:16.656 "strip_size_kb": 0, 00:31:16.656 "state": "online", 00:31:16.656 "raid_level": "raid1", 00:31:16.656 "superblock": true, 00:31:16.656 "num_base_bdevs": 2, 00:31:16.656 "num_base_bdevs_discovered": 1, 00:31:16.656 "num_base_bdevs_operational": 1, 00:31:16.656 "base_bdevs_list": [ 00:31:16.656 { 00:31:16.656 "name": null, 00:31:16.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:16.656 "is_configured": false, 00:31:16.656 "data_offset": 256, 00:31:16.656 
"data_size": 7936 00:31:16.656 }, 00:31:16.656 { 00:31:16.656 "name": "BaseBdev2", 00:31:16.656 "uuid": "beb47e43-61fa-4505-85fc-b01d4f2e33b0", 00:31:16.656 "is_configured": true, 00:31:16.656 "data_offset": 256, 00:31:16.656 "data_size": 7936 00:31:16.656 } 00:31:16.656 ] 00:31:16.656 }' 00:31:16.656 04:26:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:16.656 04:26:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:17.223 04:26:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:31:17.223 04:26:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:31:17.223 04:26:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:17.223 04:26:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:31:17.223 04:26:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:31:17.223 04:26:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:17.223 04:26:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:31:17.482 [2024-07-23 04:26:26.197674] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:17.483 [2024-07-23 04:26:26.197789] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:17.741 [2024-07-23 04:26:26.332327] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:17.741 [2024-07-23 04:26:26.332386] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going 
to free all in destruct 00:31:17.742 [2024-07-23 04:26:26.332406] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:31:17.742 04:26:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:31:17.742 04:26:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:31:17.742 04:26:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:17.742 04:26:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:31:18.001 04:26:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:31:18.001 04:26:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:31:18.001 04:26:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:31:18.001 04:26:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@341 -- # killprocess 2802636 00:31:18.001 04:26:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@948 -- # '[' -z 2802636 ']' 00:31:18.001 04:26:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # kill -0 2802636 00:31:18.001 04:26:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@953 -- # uname 00:31:18.001 04:26:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:18.001 04:26:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2802636 00:31:18.001 04:26:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:18.001 04:26:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:31:18.001 04:26:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2802636' 00:31:18.001 killing process with pid 2802636 00:31:18.001 04:26:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@967 -- # kill 2802636 00:31:18.001 [2024-07-23 04:26:26.637513] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:18.001 04:26:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # wait 2802636 00:31:18.001 [2024-07-23 04:26:26.659981] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:19.906 04:26:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@343 -- # return 0 00:31:19.906 00:31:19.906 real 0m11.850s 00:31:19.906 user 0m19.420s 00:31:19.906 sys 0m2.096s 00:31:19.906 04:26:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:19.906 04:26:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:19.906 ************************************ 00:31:19.906 END TEST raid_state_function_test_sb_4k 00:31:19.906 ************************************ 00:31:19.906 04:26:28 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:31:19.906 04:26:28 bdev_raid -- bdev/bdev_raid.sh@899 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:31:19.906 04:26:28 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:31:19.906 04:26:28 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:19.906 04:26:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:19.906 ************************************ 00:31:19.906 START TEST raid_superblock_test_4k 00:31:19.906 ************************************ 00:31:19.906 04:26:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:31:19.906 04:26:28 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:31:19.906 04:26:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:31:19.906 04:26:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:31:19.906 04:26:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:31:19.906 04:26:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:31:19.906 04:26:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:31:19.906 04:26:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:31:19.906 04:26:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:31:19.906 04:26:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:31:19.906 04:26:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local strip_size 00:31:19.906 04:26:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:31:19.906 04:26:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:31:19.906 04:26:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:31:19.906 04:26:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:31:19.906 04:26:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:31:19.906 04:26:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # raid_pid=2804917 00:31:19.906 04:26:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # waitforlisten 2804917 /var/tmp/spdk-raid.sock 00:31:19.906 04:26:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r 
/var/tmp/spdk-raid.sock -L bdev_raid 00:31:19.906 04:26:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@829 -- # '[' -z 2804917 ']' 00:31:19.907 04:26:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:31:19.907 04:26:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:19.907 04:26:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:19.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:31:19.907 04:26:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:19.907 04:26:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:19.907 [2024-07-23 04:26:28.491971] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:31:19.907 [2024-07-23 04:26:28.492091] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2804917 ] 00:31:19.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:19.907 EAL: Requested device 0000:3d:01.0 cannot be used 00:31:19.907 [the same qat_pci_device_allocate()/EAL warning pair repeats for the 31 remaining QAT virtual functions: 0000:3d:01.1-01.7, 0000:3d:02.0-02.7, 0000:3f:01.0-01.7 and 0000:3f:02.0-02.7] 00:31:20.166 [2024-07-23 04:26:28.719059] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:20.426 [2024-07-23 04:26:28.982211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:20.685 [2024-07-23 04:26:29.285012] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:20.685 [2024-07-23 04:26:29.285046] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:20.685 04:26:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:20.685 04:26:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@862 -- # return 0 00:31:20.685 04:26:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:31:20.685 04:26:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:31:20.685 04:26:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:31:20.685 04:26:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:31:20.685 04:26:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- #
local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:31:20.685 04:26:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:20.685 04:26:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:31:20.685 04:26:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:20.685 04:26:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc1 00:31:20.945 malloc1 00:31:20.945 04:26:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:21.204 [2024-07-23 04:26:29.933873] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:21.204 [2024-07-23 04:26:29.933934] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:21.204 [2024-07-23 04:26:29.933963] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:31:21.204 [2024-07-23 04:26:29.933979] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:21.204 [2024-07-23 04:26:29.936768] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:21.204 [2024-07-23 04:26:29.936805] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:21.204 pt1 00:31:21.204 04:26:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:31:21.204 04:26:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:31:21.204 04:26:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:31:21.204 04:26:29 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:31:21.204 04:26:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:31:21.204 04:26:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:21.204 04:26:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:31:21.204 04:26:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:21.204 04:26:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc2 00:31:21.463 malloc2 00:31:21.463 04:26:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:21.722 [2024-07-23 04:26:30.428192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:21.722 [2024-07-23 04:26:30.428249] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:21.722 [2024-07-23 04:26:30.428277] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:31:21.722 [2024-07-23 04:26:30.428293] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:21.722 [2024-07-23 04:26:30.431111] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:21.722 [2024-07-23 04:26:30.431159] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:21.722 pt2 00:31:21.722 04:26:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:31:21.722 04:26:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= 
num_base_bdevs )) 00:31:21.722 04:26:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:31:21.981 [2024-07-23 04:26:30.652808] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:21.981 [2024-07-23 04:26:30.655168] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:21.981 [2024-07-23 04:26:30.655394] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000040880 00:31:21.981 [2024-07-23 04:26:30.655418] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:21.981 [2024-07-23 04:26:30.655752] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:31:21.981 [2024-07-23 04:26:30.655992] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000040880 00:31:21.981 [2024-07-23 04:26:30.656011] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000040880 00:31:21.981 [2024-07-23 04:26:30.656225] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:21.981 04:26:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:21.981 04:26:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:21.982 04:26:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:21.982 04:26:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:21.982 04:26:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:21.982 04:26:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:21.982 04:26:30 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:21.982 04:26:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:21.982 04:26:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:21.982 04:26:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:21.982 04:26:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:21.982 04:26:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:22.240 04:26:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:22.241 "name": "raid_bdev1", 00:31:22.241 "uuid": "a84bcbc3-8901-4b08-91a2-af3b3d5f2fac", 00:31:22.241 "strip_size_kb": 0, 00:31:22.241 "state": "online", 00:31:22.241 "raid_level": "raid1", 00:31:22.241 "superblock": true, 00:31:22.241 "num_base_bdevs": 2, 00:31:22.241 "num_base_bdevs_discovered": 2, 00:31:22.241 "num_base_bdevs_operational": 2, 00:31:22.241 "base_bdevs_list": [ 00:31:22.241 { 00:31:22.241 "name": "pt1", 00:31:22.241 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:22.241 "is_configured": true, 00:31:22.241 "data_offset": 256, 00:31:22.241 "data_size": 7936 00:31:22.241 }, 00:31:22.241 { 00:31:22.241 "name": "pt2", 00:31:22.241 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:22.241 "is_configured": true, 00:31:22.241 "data_offset": 256, 00:31:22.241 "data_size": 7936 00:31:22.241 } 00:31:22.241 ] 00:31:22.241 }' 00:31:22.241 04:26:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:22.241 04:26:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:22.808 04:26:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # 
verify_raid_bdev_properties raid_bdev1 00:31:22.808 04:26:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:31:22.808 04:26:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:31:22.808 04:26:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:31:22.808 04:26:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:31:22.808 04:26:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:31:22.808 04:26:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:22.809 04:26:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:31:23.068 [2024-07-23 04:26:31.703910] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:23.068 04:26:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:31:23.068 "name": "raid_bdev1", 00:31:23.068 "aliases": [ 00:31:23.068 "a84bcbc3-8901-4b08-91a2-af3b3d5f2fac" 00:31:23.068 ], 00:31:23.068 "product_name": "Raid Volume", 00:31:23.068 "block_size": 4096, 00:31:23.068 "num_blocks": 7936, 00:31:23.068 "uuid": "a84bcbc3-8901-4b08-91a2-af3b3d5f2fac", 00:31:23.068 "assigned_rate_limits": { 00:31:23.068 "rw_ios_per_sec": 0, 00:31:23.068 "rw_mbytes_per_sec": 0, 00:31:23.068 "r_mbytes_per_sec": 0, 00:31:23.068 "w_mbytes_per_sec": 0 00:31:23.068 }, 00:31:23.068 "claimed": false, 00:31:23.068 "zoned": false, 00:31:23.068 "supported_io_types": { 00:31:23.068 "read": true, 00:31:23.068 "write": true, 00:31:23.068 "unmap": false, 00:31:23.068 "flush": false, 00:31:23.068 "reset": true, 00:31:23.068 "nvme_admin": false, 00:31:23.068 "nvme_io": false, 00:31:23.068 "nvme_io_md": false, 00:31:23.068 "write_zeroes": true, 00:31:23.068 "zcopy": false, 
00:31:23.068 "get_zone_info": false, 00:31:23.068 "zone_management": false, 00:31:23.068 "zone_append": false, 00:31:23.068 "compare": false, 00:31:23.068 "compare_and_write": false, 00:31:23.068 "abort": false, 00:31:23.068 "seek_hole": false, 00:31:23.068 "seek_data": false, 00:31:23.068 "copy": false, 00:31:23.068 "nvme_iov_md": false 00:31:23.068 }, 00:31:23.068 "memory_domains": [ 00:31:23.068 { 00:31:23.068 "dma_device_id": "system", 00:31:23.068 "dma_device_type": 1 00:31:23.068 }, 00:31:23.068 { 00:31:23.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:23.068 "dma_device_type": 2 00:31:23.068 }, 00:31:23.068 { 00:31:23.068 "dma_device_id": "system", 00:31:23.068 "dma_device_type": 1 00:31:23.068 }, 00:31:23.068 { 00:31:23.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:23.068 "dma_device_type": 2 00:31:23.068 } 00:31:23.068 ], 00:31:23.068 "driver_specific": { 00:31:23.068 "raid": { 00:31:23.068 "uuid": "a84bcbc3-8901-4b08-91a2-af3b3d5f2fac", 00:31:23.068 "strip_size_kb": 0, 00:31:23.068 "state": "online", 00:31:23.068 "raid_level": "raid1", 00:31:23.068 "superblock": true, 00:31:23.068 "num_base_bdevs": 2, 00:31:23.068 "num_base_bdevs_discovered": 2, 00:31:23.068 "num_base_bdevs_operational": 2, 00:31:23.068 "base_bdevs_list": [ 00:31:23.068 { 00:31:23.068 "name": "pt1", 00:31:23.068 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:23.068 "is_configured": true, 00:31:23.068 "data_offset": 256, 00:31:23.068 "data_size": 7936 00:31:23.068 }, 00:31:23.068 { 00:31:23.068 "name": "pt2", 00:31:23.068 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:23.068 "is_configured": true, 00:31:23.068 "data_offset": 256, 00:31:23.068 "data_size": 7936 00:31:23.068 } 00:31:23.068 ] 00:31:23.068 } 00:31:23.068 } 00:31:23.068 }' 00:31:23.068 04:26:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:23.068 04:26:31 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:31:23.068 pt2' 00:31:23.068 04:26:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:23.068 04:26:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:31:23.068 04:26:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:23.329 04:26:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:23.329 "name": "pt1", 00:31:23.329 "aliases": [ 00:31:23.329 "00000000-0000-0000-0000-000000000001" 00:31:23.329 ], 00:31:23.329 "product_name": "passthru", 00:31:23.329 "block_size": 4096, 00:31:23.329 "num_blocks": 8192, 00:31:23.329 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:23.329 "assigned_rate_limits": { 00:31:23.329 "rw_ios_per_sec": 0, 00:31:23.329 "rw_mbytes_per_sec": 0, 00:31:23.329 "r_mbytes_per_sec": 0, 00:31:23.329 "w_mbytes_per_sec": 0 00:31:23.329 }, 00:31:23.329 "claimed": true, 00:31:23.329 "claim_type": "exclusive_write", 00:31:23.329 "zoned": false, 00:31:23.329 "supported_io_types": { 00:31:23.329 "read": true, 00:31:23.329 "write": true, 00:31:23.329 "unmap": true, 00:31:23.329 "flush": true, 00:31:23.329 "reset": true, 00:31:23.329 "nvme_admin": false, 00:31:23.329 "nvme_io": false, 00:31:23.329 "nvme_io_md": false, 00:31:23.329 "write_zeroes": true, 00:31:23.329 "zcopy": true, 00:31:23.329 "get_zone_info": false, 00:31:23.329 "zone_management": false, 00:31:23.329 "zone_append": false, 00:31:23.329 "compare": false, 00:31:23.329 "compare_and_write": false, 00:31:23.329 "abort": true, 00:31:23.329 "seek_hole": false, 00:31:23.329 "seek_data": false, 00:31:23.329 "copy": true, 00:31:23.329 "nvme_iov_md": false 00:31:23.329 }, 00:31:23.329 "memory_domains": [ 00:31:23.329 { 00:31:23.329 "dma_device_id": "system", 00:31:23.329 "dma_device_type": 1 00:31:23.329 }, 
00:31:23.329 { 00:31:23.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:23.329 "dma_device_type": 2 00:31:23.329 } 00:31:23.329 ], 00:31:23.329 "driver_specific": { 00:31:23.329 "passthru": { 00:31:23.329 "name": "pt1", 00:31:23.329 "base_bdev_name": "malloc1" 00:31:23.329 } 00:31:23.329 } 00:31:23.329 }' 00:31:23.329 04:26:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:23.329 04:26:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:23.329 04:26:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:31:23.329 04:26:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:23.589 04:26:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:23.589 04:26:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:23.589 04:26:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:23.589 04:26:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:23.589 04:26:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:23.589 04:26:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:23.589 04:26:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:23.589 04:26:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:23.589 04:26:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:23.589 04:26:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:31:23.589 04:26:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:23.848 04:26:32 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:23.848 "name": "pt2", 00:31:23.848 "aliases": [ 00:31:23.848 "00000000-0000-0000-0000-000000000002" 00:31:23.848 ], 00:31:23.848 "product_name": "passthru", 00:31:23.848 "block_size": 4096, 00:31:23.848 "num_blocks": 8192, 00:31:23.848 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:23.848 "assigned_rate_limits": { 00:31:23.848 "rw_ios_per_sec": 0, 00:31:23.848 "rw_mbytes_per_sec": 0, 00:31:23.848 "r_mbytes_per_sec": 0, 00:31:23.848 "w_mbytes_per_sec": 0 00:31:23.848 }, 00:31:23.848 "claimed": true, 00:31:23.848 "claim_type": "exclusive_write", 00:31:23.848 "zoned": false, 00:31:23.848 "supported_io_types": { 00:31:23.848 "read": true, 00:31:23.848 "write": true, 00:31:23.848 "unmap": true, 00:31:23.848 "flush": true, 00:31:23.848 "reset": true, 00:31:23.848 "nvme_admin": false, 00:31:23.848 "nvme_io": false, 00:31:23.848 "nvme_io_md": false, 00:31:23.848 "write_zeroes": true, 00:31:23.848 "zcopy": true, 00:31:23.848 "get_zone_info": false, 00:31:23.848 "zone_management": false, 00:31:23.848 "zone_append": false, 00:31:23.848 "compare": false, 00:31:23.848 "compare_and_write": false, 00:31:23.848 "abort": true, 00:31:23.848 "seek_hole": false, 00:31:23.848 "seek_data": false, 00:31:23.848 "copy": true, 00:31:23.848 "nvme_iov_md": false 00:31:23.848 }, 00:31:23.848 "memory_domains": [ 00:31:23.848 { 00:31:23.848 "dma_device_id": "system", 00:31:23.848 "dma_device_type": 1 00:31:23.848 }, 00:31:23.848 { 00:31:23.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:23.848 "dma_device_type": 2 00:31:23.848 } 00:31:23.848 ], 00:31:23.848 "driver_specific": { 00:31:23.848 "passthru": { 00:31:23.848 "name": "pt2", 00:31:23.848 "base_bdev_name": "malloc2" 00:31:23.848 } 00:31:23.848 } 00:31:23.848 }' 00:31:23.848 04:26:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:23.848 04:26:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:23.848 
04:26:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:31:23.848 04:26:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:24.107 04:26:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:24.107 04:26:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:24.107 04:26:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:24.107 04:26:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:24.107 04:26:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:24.107 04:26:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:24.107 04:26:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:24.107 04:26:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:24.107 04:26:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:24.366 04:26:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:31:24.366 [2024-07-23 04:26:33.087701] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:24.366 04:26:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=a84bcbc3-8901-4b08-91a2-af3b3d5f2fac 00:31:24.366 04:26:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # '[' -z a84bcbc3-8901-4b08-91a2-af3b3d5f2fac ']' 00:31:24.366 04:26:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:24.625 [2024-07-23 04:26:33.311983] bdev_raid.c:2382:raid_bdev_delete: 
*DEBUG*: delete raid bdev: raid_bdev1 00:31:24.625 [2024-07-23 04:26:33.312012] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:24.625 [2024-07-23 04:26:33.312094] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:24.625 [2024-07-23 04:26:33.312173] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:24.625 [2024-07-23 04:26:33.312200] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040880 name raid_bdev1, state offline 00:31:24.625 04:26:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:31:24.625 04:26:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:24.884 04:26:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:31:24.884 04:26:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:31:24.884 04:26:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:31:24.884 04:26:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:31:25.142 04:26:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:31:25.142 04:26:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:31:25.401 04:26:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:31:25.401 04:26:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 
-- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:31:25.660 04:26:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:31:25.660 04:26:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:31:25.660 04:26:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@648 -- # local es=0 00:31:25.660 04:26:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:31:25.660 04:26:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:31:25.660 04:26:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:25.660 04:26:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:31:25.660 04:26:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:25.660 04:26:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:31:25.660 04:26:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:25.660 04:26:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:31:25.660 04:26:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:31:25.660 04:26:34 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:31:25.660 [2024-07-23 04:26:34.426963] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:31:25.660 [2024-07-23 04:26:34.429300] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:31:25.660 [2024-07-23 04:26:34.429377] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:31:25.660 [2024-07-23 04:26:34.429435] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:31:25.660 [2024-07-23 04:26:34.429459] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:25.660 [2024-07-23 04:26:34.429476] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040e80 name raid_bdev1, state configuring 00:31:25.660 request: 00:31:25.660 { 00:31:25.660 "name": "raid_bdev1", 00:31:25.660 "raid_level": "raid1", 00:31:25.660 "base_bdevs": [ 00:31:25.660 "malloc1", 00:31:25.660 "malloc2" 00:31:25.660 ], 00:31:25.660 "superblock": false, 00:31:25.660 "method": "bdev_raid_create", 00:31:25.660 "req_id": 1 00:31:25.660 } 00:31:25.660 Got JSON-RPC error response 00:31:25.660 response: 00:31:25.660 { 00:31:25.660 "code": -17, 00:31:25.660 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:31:25.660 } 00:31:25.919 04:26:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # es=1 00:31:25.919 04:26:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:25.919 04:26:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:25.919 04:26:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@675 -- # 
(( !es == 0 )) 00:31:25.919 04:26:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:25.919 04:26:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:31:25.919 04:26:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:31:25.919 04:26:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:31:25.919 04:26:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:26.179 [2024-07-23 04:26:34.880101] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:26.179 [2024-07-23 04:26:34.880174] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:26.179 [2024-07-23 04:26:34.880198] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480 00:31:26.179 [2024-07-23 04:26:34.880216] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:26.179 [2024-07-23 04:26:34.882991] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:26.179 [2024-07-23 04:26:34.883028] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:26.179 [2024-07-23 04:26:34.883125] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:31:26.179 [2024-07-23 04:26:34.883223] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:26.179 pt1 00:31:26.179 04:26:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:31:26.179 04:26:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=raid_bdev1 00:31:26.179 04:26:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:26.179 04:26:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:26.179 04:26:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:26.179 04:26:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:26.179 04:26:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:26.179 04:26:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:26.179 04:26:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:26.179 04:26:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:26.179 04:26:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:26.179 04:26:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:26.439 04:26:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:26.439 "name": "raid_bdev1", 00:31:26.439 "uuid": "a84bcbc3-8901-4b08-91a2-af3b3d5f2fac", 00:31:26.439 "strip_size_kb": 0, 00:31:26.439 "state": "configuring", 00:31:26.439 "raid_level": "raid1", 00:31:26.439 "superblock": true, 00:31:26.439 "num_base_bdevs": 2, 00:31:26.439 "num_base_bdevs_discovered": 1, 00:31:26.439 "num_base_bdevs_operational": 2, 00:31:26.439 "base_bdevs_list": [ 00:31:26.439 { 00:31:26.439 "name": "pt1", 00:31:26.439 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:26.439 "is_configured": true, 00:31:26.439 "data_offset": 256, 00:31:26.439 "data_size": 7936 00:31:26.439 }, 00:31:26.439 { 00:31:26.439 "name": null, 
00:31:26.439 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:26.439 "is_configured": false, 00:31:26.439 "data_offset": 256, 00:31:26.439 "data_size": 7936 00:31:26.439 } 00:31:26.439 ] 00:31:26.439 }' 00:31:26.439 04:26:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:26.439 04:26:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:27.007 04:26:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:31:27.007 04:26:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:31:27.007 04:26:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:31:27.007 04:26:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:27.266 [2024-07-23 04:26:35.895058] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:27.266 [2024-07-23 04:26:35.895127] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:27.266 [2024-07-23 04:26:35.895164] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041d80 00:31:27.266 [2024-07-23 04:26:35.895182] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:27.266 [2024-07-23 04:26:35.895761] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:27.266 [2024-07-23 04:26:35.895790] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:27.266 [2024-07-23 04:26:35.895888] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:31:27.266 [2024-07-23 04:26:35.895926] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:27.266 [2024-07-23 04:26:35.896093] 
bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041a80 00:31:27.266 [2024-07-23 04:26:35.896111] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:27.266 [2024-07-23 04:26:35.896418] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:31:27.266 [2024-07-23 04:26:35.896634] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041a80 00:31:27.266 [2024-07-23 04:26:35.896648] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041a80 00:31:27.266 [2024-07-23 04:26:35.896841] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:27.266 pt2 00:31:27.266 04:26:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:31:27.266 04:26:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:31:27.266 04:26:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:27.266 04:26:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:27.266 04:26:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:27.266 04:26:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:27.266 04:26:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:27.266 04:26:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:27.266 04:26:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:27.266 04:26:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:27.266 04:26:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 
00:31:27.266 04:26:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:27.266 04:26:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:27.266 04:26:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:27.525 04:26:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:27.525 "name": "raid_bdev1", 00:31:27.525 "uuid": "a84bcbc3-8901-4b08-91a2-af3b3d5f2fac", 00:31:27.525 "strip_size_kb": 0, 00:31:27.525 "state": "online", 00:31:27.525 "raid_level": "raid1", 00:31:27.525 "superblock": true, 00:31:27.525 "num_base_bdevs": 2, 00:31:27.525 "num_base_bdevs_discovered": 2, 00:31:27.525 "num_base_bdevs_operational": 2, 00:31:27.525 "base_bdevs_list": [ 00:31:27.525 { 00:31:27.525 "name": "pt1", 00:31:27.525 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:27.525 "is_configured": true, 00:31:27.525 "data_offset": 256, 00:31:27.525 "data_size": 7936 00:31:27.525 }, 00:31:27.525 { 00:31:27.525 "name": "pt2", 00:31:27.525 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:27.525 "is_configured": true, 00:31:27.525 "data_offset": 256, 00:31:27.525 "data_size": 7936 00:31:27.525 } 00:31:27.525 ] 00:31:27.525 }' 00:31:27.525 04:26:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:27.525 04:26:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:28.093 04:26:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:31:28.093 04:26:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:31:28.093 04:26:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:31:28.093 04:26:36 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:31:28.093 04:26:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:31:28.093 04:26:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:31:28.093 04:26:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:28.093 04:26:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:31:28.351 [2024-07-23 04:26:36.942215] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:28.351 04:26:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:31:28.351 "name": "raid_bdev1", 00:31:28.351 "aliases": [ 00:31:28.351 "a84bcbc3-8901-4b08-91a2-af3b3d5f2fac" 00:31:28.351 ], 00:31:28.351 "product_name": "Raid Volume", 00:31:28.351 "block_size": 4096, 00:31:28.351 "num_blocks": 7936, 00:31:28.351 "uuid": "a84bcbc3-8901-4b08-91a2-af3b3d5f2fac", 00:31:28.351 "assigned_rate_limits": { 00:31:28.351 "rw_ios_per_sec": 0, 00:31:28.351 "rw_mbytes_per_sec": 0, 00:31:28.351 "r_mbytes_per_sec": 0, 00:31:28.351 "w_mbytes_per_sec": 0 00:31:28.351 }, 00:31:28.351 "claimed": false, 00:31:28.351 "zoned": false, 00:31:28.351 "supported_io_types": { 00:31:28.351 "read": true, 00:31:28.351 "write": true, 00:31:28.351 "unmap": false, 00:31:28.351 "flush": false, 00:31:28.351 "reset": true, 00:31:28.351 "nvme_admin": false, 00:31:28.351 "nvme_io": false, 00:31:28.351 "nvme_io_md": false, 00:31:28.351 "write_zeroes": true, 00:31:28.351 "zcopy": false, 00:31:28.351 "get_zone_info": false, 00:31:28.351 "zone_management": false, 00:31:28.351 "zone_append": false, 00:31:28.351 "compare": false, 00:31:28.351 "compare_and_write": false, 00:31:28.351 "abort": false, 00:31:28.351 "seek_hole": false, 00:31:28.351 "seek_data": false, 00:31:28.351 "copy": false, 00:31:28.351 
"nvme_iov_md": false 00:31:28.351 }, 00:31:28.351 "memory_domains": [ 00:31:28.351 { 00:31:28.351 "dma_device_id": "system", 00:31:28.351 "dma_device_type": 1 00:31:28.351 }, 00:31:28.351 { 00:31:28.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:28.351 "dma_device_type": 2 00:31:28.351 }, 00:31:28.351 { 00:31:28.351 "dma_device_id": "system", 00:31:28.351 "dma_device_type": 1 00:31:28.351 }, 00:31:28.351 { 00:31:28.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:28.351 "dma_device_type": 2 00:31:28.351 } 00:31:28.351 ], 00:31:28.351 "driver_specific": { 00:31:28.351 "raid": { 00:31:28.351 "uuid": "a84bcbc3-8901-4b08-91a2-af3b3d5f2fac", 00:31:28.351 "strip_size_kb": 0, 00:31:28.351 "state": "online", 00:31:28.351 "raid_level": "raid1", 00:31:28.351 "superblock": true, 00:31:28.351 "num_base_bdevs": 2, 00:31:28.351 "num_base_bdevs_discovered": 2, 00:31:28.351 "num_base_bdevs_operational": 2, 00:31:28.351 "base_bdevs_list": [ 00:31:28.351 { 00:31:28.351 "name": "pt1", 00:31:28.351 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:28.351 "is_configured": true, 00:31:28.351 "data_offset": 256, 00:31:28.351 "data_size": 7936 00:31:28.351 }, 00:31:28.351 { 00:31:28.351 "name": "pt2", 00:31:28.351 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:28.351 "is_configured": true, 00:31:28.351 "data_offset": 256, 00:31:28.351 "data_size": 7936 00:31:28.351 } 00:31:28.351 ] 00:31:28.351 } 00:31:28.351 } 00:31:28.351 }' 00:31:28.351 04:26:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:28.351 04:26:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:31:28.351 pt2' 00:31:28.351 04:26:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:28.351 04:26:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:31:28.351 04:26:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:28.610 04:26:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:28.610 "name": "pt1", 00:31:28.610 "aliases": [ 00:31:28.610 "00000000-0000-0000-0000-000000000001" 00:31:28.610 ], 00:31:28.610 "product_name": "passthru", 00:31:28.610 "block_size": 4096, 00:31:28.610 "num_blocks": 8192, 00:31:28.610 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:28.610 "assigned_rate_limits": { 00:31:28.610 "rw_ios_per_sec": 0, 00:31:28.610 "rw_mbytes_per_sec": 0, 00:31:28.610 "r_mbytes_per_sec": 0, 00:31:28.610 "w_mbytes_per_sec": 0 00:31:28.610 }, 00:31:28.610 "claimed": true, 00:31:28.610 "claim_type": "exclusive_write", 00:31:28.610 "zoned": false, 00:31:28.610 "supported_io_types": { 00:31:28.610 "read": true, 00:31:28.610 "write": true, 00:31:28.610 "unmap": true, 00:31:28.610 "flush": true, 00:31:28.610 "reset": true, 00:31:28.610 "nvme_admin": false, 00:31:28.610 "nvme_io": false, 00:31:28.610 "nvme_io_md": false, 00:31:28.610 "write_zeroes": true, 00:31:28.610 "zcopy": true, 00:31:28.610 "get_zone_info": false, 00:31:28.610 "zone_management": false, 00:31:28.610 "zone_append": false, 00:31:28.610 "compare": false, 00:31:28.610 "compare_and_write": false, 00:31:28.610 "abort": true, 00:31:28.610 "seek_hole": false, 00:31:28.610 "seek_data": false, 00:31:28.610 "copy": true, 00:31:28.610 "nvme_iov_md": false 00:31:28.610 }, 00:31:28.610 "memory_domains": [ 00:31:28.610 { 00:31:28.610 "dma_device_id": "system", 00:31:28.610 "dma_device_type": 1 00:31:28.610 }, 00:31:28.610 { 00:31:28.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:28.610 "dma_device_type": 2 00:31:28.610 } 00:31:28.610 ], 00:31:28.610 "driver_specific": { 00:31:28.610 "passthru": { 00:31:28.610 "name": "pt1", 00:31:28.610 "base_bdev_name": "malloc1" 
00:31:28.610 } 00:31:28.610 } 00:31:28.610 }' 00:31:28.610 04:26:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:28.610 04:26:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:28.610 04:26:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:31:28.610 04:26:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:28.610 04:26:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:28.869 04:26:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:28.869 04:26:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:28.869 04:26:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:28.869 04:26:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:28.869 04:26:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:28.869 04:26:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:28.869 04:26:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:28.869 04:26:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:28.869 04:26:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:31:28.869 04:26:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:29.129 04:26:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:29.129 "name": "pt2", 00:31:29.129 "aliases": [ 00:31:29.129 "00000000-0000-0000-0000-000000000002" 00:31:29.129 ], 00:31:29.129 "product_name": "passthru", 00:31:29.129 "block_size": 4096, 00:31:29.129 
"num_blocks": 8192, 00:31:29.129 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:29.129 "assigned_rate_limits": { 00:31:29.129 "rw_ios_per_sec": 0, 00:31:29.129 "rw_mbytes_per_sec": 0, 00:31:29.129 "r_mbytes_per_sec": 0, 00:31:29.129 "w_mbytes_per_sec": 0 00:31:29.129 }, 00:31:29.129 "claimed": true, 00:31:29.129 "claim_type": "exclusive_write", 00:31:29.129 "zoned": false, 00:31:29.129 "supported_io_types": { 00:31:29.129 "read": true, 00:31:29.129 "write": true, 00:31:29.129 "unmap": true, 00:31:29.129 "flush": true, 00:31:29.129 "reset": true, 00:31:29.129 "nvme_admin": false, 00:31:29.129 "nvme_io": false, 00:31:29.129 "nvme_io_md": false, 00:31:29.129 "write_zeroes": true, 00:31:29.129 "zcopy": true, 00:31:29.129 "get_zone_info": false, 00:31:29.129 "zone_management": false, 00:31:29.129 "zone_append": false, 00:31:29.129 "compare": false, 00:31:29.129 "compare_and_write": false, 00:31:29.129 "abort": true, 00:31:29.129 "seek_hole": false, 00:31:29.129 "seek_data": false, 00:31:29.129 "copy": true, 00:31:29.129 "nvme_iov_md": false 00:31:29.129 }, 00:31:29.129 "memory_domains": [ 00:31:29.129 { 00:31:29.129 "dma_device_id": "system", 00:31:29.129 "dma_device_type": 1 00:31:29.129 }, 00:31:29.129 { 00:31:29.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:29.129 "dma_device_type": 2 00:31:29.129 } 00:31:29.129 ], 00:31:29.129 "driver_specific": { 00:31:29.129 "passthru": { 00:31:29.129 "name": "pt2", 00:31:29.129 "base_bdev_name": "malloc2" 00:31:29.129 } 00:31:29.129 } 00:31:29.129 }' 00:31:29.129 04:26:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:29.129 04:26:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:29.129 04:26:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:31:29.129 04:26:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:29.388 04:26:37 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:29.388 04:26:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:29.388 04:26:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:29.388 04:26:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:29.388 04:26:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:29.388 04:26:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:29.388 04:26:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:29.388 04:26:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:29.388 04:26:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:29.388 04:26:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:31:29.646 [2024-07-23 04:26:38.329989] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:29.646 04:26:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # '[' a84bcbc3-8901-4b08-91a2-af3b3d5f2fac '!=' a84bcbc3-8901-4b08-91a2-af3b3d5f2fac ']' 00:31:29.646 04:26:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:31:29.646 04:26:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:31:29.646 04:26:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:31:29.646 04:26:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:31:29.905 [2024-07-23 04:26:38.558283] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:31:29.905 04:26:38 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:29.905 04:26:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:29.905 04:26:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:29.905 04:26:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:29.905 04:26:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:29.905 04:26:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:29.905 04:26:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:29.905 04:26:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:29.905 04:26:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:29.905 04:26:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:29.905 04:26:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:29.905 04:26:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:30.164 04:26:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:30.164 "name": "raid_bdev1", 00:31:30.164 "uuid": "a84bcbc3-8901-4b08-91a2-af3b3d5f2fac", 00:31:30.164 "strip_size_kb": 0, 00:31:30.164 "state": "online", 00:31:30.164 "raid_level": "raid1", 00:31:30.164 "superblock": true, 00:31:30.164 "num_base_bdevs": 2, 00:31:30.164 "num_base_bdevs_discovered": 1, 00:31:30.164 "num_base_bdevs_operational": 1, 00:31:30.164 "base_bdevs_list": [ 00:31:30.164 { 00:31:30.164 "name": null, 00:31:30.164 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:31:30.164 "is_configured": false, 00:31:30.164 "data_offset": 256, 00:31:30.164 "data_size": 7936 00:31:30.164 }, 00:31:30.164 { 00:31:30.164 "name": "pt2", 00:31:30.164 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:30.164 "is_configured": true, 00:31:30.164 "data_offset": 256, 00:31:30.164 "data_size": 7936 00:31:30.164 } 00:31:30.164 ] 00:31:30.164 }' 00:31:30.164 04:26:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:30.164 04:26:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:30.763 04:26:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:31.022 [2024-07-23 04:26:39.585067] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:31.022 [2024-07-23 04:26:39.585099] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:31.022 [2024-07-23 04:26:39.585189] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:31.022 [2024-07-23 04:26:39.585249] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:31.022 [2024-07-23 04:26:39.585268] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041a80 name raid_bdev1, state offline 00:31:31.022 04:26:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:31.022 04:26:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:31:31.281 04:26:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:31:31.281 04:26:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:31:31.281 
04:26:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:31:31.281 04:26:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:31:31.281 04:26:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:31:31.281 04:26:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:31:31.281 04:26:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:31:31.281 04:26:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:31:31.281 04:26:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:31:31.281 04:26:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@518 -- # i=1 00:31:31.281 04:26:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:31.540 [2024-07-23 04:26:40.262871] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:31.540 [2024-07-23 04:26:40.262946] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:31.540 [2024-07-23 04:26:40.262970] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042080 00:31:31.540 [2024-07-23 04:26:40.262988] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:31.540 [2024-07-23 04:26:40.265779] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:31.540 [2024-07-23 04:26:40.265818] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:31.540 [2024-07-23 04:26:40.265916] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 00:31:31.540 [2024-07-23 04:26:40.265996] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:31.540 [2024-07-23 04:26:40.266163] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042680 00:31:31.540 [2024-07-23 04:26:40.266182] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:31.540 [2024-07-23 04:26:40.266485] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:31:31.540 [2024-07-23 04:26:40.266730] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042680 00:31:31.540 [2024-07-23 04:26:40.266745] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000042680 00:31:31.540 [2024-07-23 04:26:40.266957] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:31.540 pt2 00:31:31.540 04:26:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:31.540 04:26:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:31.540 04:26:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:31.540 04:26:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:31.540 04:26:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:31.540 04:26:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:31.540 04:26:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:31.540 04:26:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:31.540 04:26:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:31.540 04:26:40 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:31.540 04:26:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:31.540 04:26:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:31.799 04:26:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:31.799 "name": "raid_bdev1", 00:31:31.799 "uuid": "a84bcbc3-8901-4b08-91a2-af3b3d5f2fac", 00:31:31.799 "strip_size_kb": 0, 00:31:31.799 "state": "online", 00:31:31.799 "raid_level": "raid1", 00:31:31.799 "superblock": true, 00:31:31.799 "num_base_bdevs": 2, 00:31:31.799 "num_base_bdevs_discovered": 1, 00:31:31.799 "num_base_bdevs_operational": 1, 00:31:31.799 "base_bdevs_list": [ 00:31:31.799 { 00:31:31.799 "name": null, 00:31:31.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:31.799 "is_configured": false, 00:31:31.799 "data_offset": 256, 00:31:31.799 "data_size": 7936 00:31:31.799 }, 00:31:31.799 { 00:31:31.799 "name": "pt2", 00:31:31.799 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:31.799 "is_configured": true, 00:31:31.799 "data_offset": 256, 00:31:31.799 "data_size": 7936 00:31:31.799 } 00:31:31.799 ] 00:31:31.799 }' 00:31:31.799 04:26:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:31.799 04:26:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:32.368 04:26:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:32.629 [2024-07-23 04:26:41.310000] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:32.629 [2024-07-23 04:26:41.310036] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state 
changing from online to offline 00:31:32.629 [2024-07-23 04:26:41.310115] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:32.629 [2024-07-23 04:26:41.310184] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:32.629 [2024-07-23 04:26:41.310201] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042680 name raid_bdev1, state offline 00:31:32.629 04:26:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:32.629 04:26:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:31:32.888 04:26:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:31:32.888 04:26:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:31:32.888 04:26:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:31:32.888 04:26:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:33.147 [2024-07-23 04:26:41.767220] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:33.147 [2024-07-23 04:26:41.767280] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:33.147 [2024-07-23 04:26:41.767306] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042980 00:31:33.147 [2024-07-23 04:26:41.767321] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:33.147 [2024-07-23 04:26:41.770168] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:33.147 [2024-07-23 04:26:41.770203] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt1 00:31:33.147 [2024-07-23 04:26:41.770296] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:31:33.147 [2024-07-23 04:26:41.770396] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:33.147 [2024-07-23 04:26:41.770615] bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:31:33.147 [2024-07-23 04:26:41.770636] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:33.147 [2024-07-23 04:26:41.770663] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042f80 name raid_bdev1, state configuring 00:31:33.147 [2024-07-23 04:26:41.770756] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:33.147 [2024-07-23 04:26:41.770850] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000043280 00:31:33.147 [2024-07-23 04:26:41.770864] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:33.147 [2024-07-23 04:26:41.771178] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:31:33.147 [2024-07-23 04:26:41.771392] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000043280 00:31:33.147 [2024-07-23 04:26:41.771410] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000043280 00:31:33.147 [2024-07-23 04:26:41.771632] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:33.147 pt1 00:31:33.147 04:26:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:31:33.147 04:26:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:33.147 04:26:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 
00:31:33.147 04:26:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:33.147 04:26:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:33.147 04:26:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:33.147 04:26:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:33.147 04:26:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:33.147 04:26:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:33.147 04:26:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:33.147 04:26:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:33.147 04:26:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:33.147 04:26:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:33.406 04:26:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:33.406 "name": "raid_bdev1", 00:31:33.406 "uuid": "a84bcbc3-8901-4b08-91a2-af3b3d5f2fac", 00:31:33.406 "strip_size_kb": 0, 00:31:33.406 "state": "online", 00:31:33.406 "raid_level": "raid1", 00:31:33.406 "superblock": true, 00:31:33.406 "num_base_bdevs": 2, 00:31:33.406 "num_base_bdevs_discovered": 1, 00:31:33.406 "num_base_bdevs_operational": 1, 00:31:33.406 "base_bdevs_list": [ 00:31:33.406 { 00:31:33.406 "name": null, 00:31:33.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:33.406 "is_configured": false, 00:31:33.406 "data_offset": 256, 00:31:33.406 "data_size": 7936 00:31:33.406 }, 00:31:33.406 { 00:31:33.406 "name": "pt2", 00:31:33.406 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:31:33.406 "is_configured": true, 00:31:33.406 "data_offset": 256, 00:31:33.406 "data_size": 7936 00:31:33.406 } 00:31:33.406 ] 00:31:33.406 }' 00:31:33.406 04:26:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:33.406 04:26:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:33.973 04:26:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:31:33.973 04:26:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:31:34.233 04:26:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:31:34.233 04:26:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:34.233 04:26:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:31:34.492 [2024-07-23 04:26:43.023424] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:34.492 04:26:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # '[' a84bcbc3-8901-4b08-91a2-af3b3d5f2fac '!=' a84bcbc3-8901-4b08-91a2-af3b3d5f2fac ']' 00:31:34.492 04:26:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@562 -- # killprocess 2804917 00:31:34.492 04:26:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@948 -- # '[' -z 2804917 ']' 00:31:34.492 04:26:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # kill -0 2804917 00:31:34.492 04:26:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@953 -- # uname 00:31:34.492 04:26:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:34.492 
04:26:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2804917 00:31:34.492 04:26:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:34.492 04:26:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:34.492 04:26:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2804917' 00:31:34.492 killing process with pid 2804917 00:31:34.492 04:26:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@967 -- # kill 2804917 00:31:34.492 [2024-07-23 04:26:43.103119] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:34.492 [2024-07-23 04:26:43.103221] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:34.492 [2024-07-23 04:26:43.103286] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:34.492 04:26:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # wait 2804917 00:31:34.492 [2024-07-23 04:26:43.103306] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000043280 name raid_bdev1, state offline 00:31:34.751 [2024-07-23 04:26:43.308150] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:36.655 04:26:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@564 -- # return 0 00:31:36.655 00:31:36.655 real 0m16.686s 00:31:36.655 user 0m28.426s 00:31:36.655 sys 0m2.984s 00:31:36.655 04:26:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:36.655 04:26:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:36.655 ************************************ 00:31:36.655 END TEST raid_superblock_test_4k 00:31:36.655 ************************************ 00:31:36.655 04:26:45 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 
00:31:36.655 04:26:45 bdev_raid -- bdev/bdev_raid.sh@900 -- # '[' true = true ']' 00:31:36.655 04:26:45 bdev_raid -- bdev/bdev_raid.sh@901 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:31:36.655 04:26:45 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:31:36.655 04:26:45 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:36.655 04:26:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:36.655 ************************************ 00:31:36.655 START TEST raid_rebuild_test_sb_4k 00:31:36.655 ************************************ 00:31:36.655 04:26:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false true 00:31:36.655 04:26:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:31:36.655 04:26:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:31:36.655 04:26:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:31:36.655 04:26:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:31:36.655 04:26:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local verify=true 00:31:36.656 04:26:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:31:36.656 04:26:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:36.656 04:26:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:31:36.656 04:26:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:31:36.656 04:26:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:36.656 04:26:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:31:36.656 04:26:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- 
# (( i++ )) 00:31:36.656 04:26:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:36.656 04:26:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:31:36.656 04:26:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:31:36.656 04:26:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:31:36.656 04:26:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local strip_size 00:31:36.656 04:26:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local create_arg 00:31:36.656 04:26:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:31:36.656 04:26:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local data_offset 00:31:36.656 04:26:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:31:36.656 04:26:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:31:36.656 04:26:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:31:36.656 04:26:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:31:36.656 04:26:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # raid_pid=2807919 00:31:36.656 04:26:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # waitforlisten 2807919 /var/tmp/spdk-raid.sock 00:31:36.656 04:26:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:31:36.656 04:26:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@829 -- # '[' -z 2807919 ']' 00:31:36.656 04:26:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/spdk-raid.sock 00:31:36.656 04:26:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:36.656 04:26:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:36.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:31:36.656 04:26:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:36.656 04:26:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:36.656 [2024-07-23 04:26:45.273051] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:31:36.656 [2024-07-23 04:26:45.273181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2807919 ] 00:31:36.656 I/O size of 3145728 is greater than zero copy threshold (65536). 00:31:36.656 Zero copy mechanism will not be used. 
00:31:36.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:36.656 EAL: Requested device 0000:3d:01.0 cannot be used 00:31:36.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:36.656 EAL: Requested device 0000:3d:01.1 cannot be used 00:31:36.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:36.656 EAL: Requested device 0000:3d:01.2 cannot be used 00:31:36.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:36.656 EAL: Requested device 0000:3d:01.3 cannot be used 00:31:36.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:36.656 EAL: Requested device 0000:3d:01.4 cannot be used 00:31:36.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:36.656 EAL: Requested device 0000:3d:01.5 cannot be used 00:31:36.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:36.656 EAL: Requested device 0000:3d:01.6 cannot be used 00:31:36.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:36.656 EAL: Requested device 0000:3d:01.7 cannot be used 00:31:36.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:36.656 EAL: Requested device 0000:3d:02.0 cannot be used 00:31:36.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:36.656 EAL: Requested device 0000:3d:02.1 cannot be used 00:31:36.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:36.656 EAL: Requested device 0000:3d:02.2 cannot be used 00:31:36.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:36.656 EAL: Requested device 0000:3d:02.3 cannot be used 00:31:36.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:36.656 EAL: Requested device 0000:3d:02.4 cannot be used 00:31:36.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:36.656 EAL: Requested device 0000:3d:02.5 cannot be used 00:31:36.656 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:36.656 EAL: Requested device 0000:3d:02.6 cannot be used 00:31:36.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:36.656 EAL: Requested device 0000:3d:02.7 cannot be used 00:31:36.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:36.656 EAL: Requested device 0000:3f:01.0 cannot be used 00:31:36.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:36.656 EAL: Requested device 0000:3f:01.1 cannot be used 00:31:36.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:36.656 EAL: Requested device 0000:3f:01.2 cannot be used 00:31:36.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:36.656 EAL: Requested device 0000:3f:01.3 cannot be used 00:31:36.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:36.656 EAL: Requested device 0000:3f:01.4 cannot be used 00:31:36.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:36.656 EAL: Requested device 0000:3f:01.5 cannot be used 00:31:36.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:36.656 EAL: Requested device 0000:3f:01.6 cannot be used 00:31:36.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:36.656 EAL: Requested device 0000:3f:01.7 cannot be used 00:31:36.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:36.656 EAL: Requested device 0000:3f:02.0 cannot be used 00:31:36.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:36.656 EAL: Requested device 0000:3f:02.1 cannot be used 00:31:36.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:36.656 EAL: Requested device 0000:3f:02.2 cannot be used 00:31:36.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:36.656 EAL: Requested device 0000:3f:02.3 cannot be used 00:31:36.656 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:36.656 EAL: Requested device 0000:3f:02.4 cannot be used 00:31:36.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:36.656 EAL: Requested device 0000:3f:02.5 cannot be used 00:31:36.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:36.656 EAL: Requested device 0000:3f:02.6 cannot be used 00:31:36.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:36.656 EAL: Requested device 0000:3f:02.7 cannot be used 00:31:36.915 [2024-07-23 04:26:45.498512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:37.174 [2024-07-23 04:26:45.786386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:37.433 [2024-07-23 04:26:46.118861] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:37.433 [2024-07-23 04:26:46.118896] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:37.692 04:26:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:37.692 04:26:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@862 -- # return 0 00:31:37.692 04:26:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:31:37.692 04:26:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:31:37.951 BaseBdev1_malloc 00:31:37.951 04:26:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:38.210 [2024-07-23 04:26:46.771759] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:31:38.210 [2024-07-23 04:26:46.771821] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:38.210 [2024-07-23 04:26:46.771852] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:31:38.210 [2024-07-23 04:26:46.771875] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:38.210 [2024-07-23 04:26:46.774631] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:38.210 [2024-07-23 04:26:46.774672] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:38.210 BaseBdev1 00:31:38.210 04:26:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:31:38.210 04:26:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:31:38.469 BaseBdev2_malloc 00:31:38.469 04:26:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:31:38.728 [2024-07-23 04:26:47.280289] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:31:38.728 [2024-07-23 04:26:47.280344] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:38.728 [2024-07-23 04:26:47.280372] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:31:38.728 [2024-07-23 04:26:47.280393] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:38.728 [2024-07-23 04:26:47.283080] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:38.728 [2024-07-23 04:26:47.283117] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:31:38.728 BaseBdev2 00:31:38.728 04:26:47 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b spare_malloc 00:31:38.987 spare_malloc 00:31:38.987 04:26:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:31:39.246 spare_delay 00:31:39.246 04:26:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:39.505 [2024-07-23 04:26:48.034783] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:39.505 [2024-07-23 04:26:48.034844] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:39.505 [2024-07-23 04:26:48.034872] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480 00:31:39.505 [2024-07-23 04:26:48.034890] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:39.505 [2024-07-23 04:26:48.037692] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:39.505 [2024-07-23 04:26:48.037732] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:39.505 spare 00:31:39.505 04:26:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:31:39.505 [2024-07-23 04:26:48.247372] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:39.505 [2024-07-23 04:26:48.249705] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:39.505 [2024-07-23 04:26:48.249946] 
bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041a80 00:31:39.505 [2024-07-23 04:26:48.249971] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:39.505 [2024-07-23 04:26:48.250342] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:31:39.505 [2024-07-23 04:26:48.250608] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041a80 00:31:39.505 [2024-07-23 04:26:48.250624] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041a80 00:31:39.505 [2024-07-23 04:26:48.250827] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:39.505 04:26:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:39.505 04:26:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:39.505 04:26:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:39.505 04:26:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:39.505 04:26:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:39.506 04:26:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:39.506 04:26:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:39.506 04:26:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:39.506 04:26:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:39.506 04:26:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:39.506 04:26:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:39.506 04:26:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:39.765 04:26:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:39.765 "name": "raid_bdev1", 00:31:39.765 "uuid": "0bbc8042-18a0-4856-a10a-c0c055b37ea9", 00:31:39.765 "strip_size_kb": 0, 00:31:39.765 "state": "online", 00:31:39.765 "raid_level": "raid1", 00:31:39.765 "superblock": true, 00:31:39.765 "num_base_bdevs": 2, 00:31:39.765 "num_base_bdevs_discovered": 2, 00:31:39.765 "num_base_bdevs_operational": 2, 00:31:39.765 "base_bdevs_list": [ 00:31:39.765 { 00:31:39.765 "name": "BaseBdev1", 00:31:39.765 "uuid": "1419fcd0-8b23-5256-b074-55bf5ae49332", 00:31:39.765 "is_configured": true, 00:31:39.765 "data_offset": 256, 00:31:39.765 "data_size": 7936 00:31:39.765 }, 00:31:39.765 { 00:31:39.765 "name": "BaseBdev2", 00:31:39.765 "uuid": "93fb2531-8468-57f6-b6e6-9dfaf91b0b29", 00:31:39.765 "is_configured": true, 00:31:39.765 "data_offset": 256, 00:31:39.765 "data_size": 7936 00:31:39.765 } 00:31:39.765 ] 00:31:39.765 }' 00:31:39.765 04:26:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:39.765 04:26:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:40.333 04:26:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:40.333 04:26:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:31:40.593 [2024-07-23 04:26:49.274491] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:40.593 04:26:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:31:40.593 04:26:49 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:40.593 04:26:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:31:40.852 04:26:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:31:40.852 04:26:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:31:40.852 04:26:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:31:40.852 04:26:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:31:40.852 04:26:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:31:40.852 04:26:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:40.852 04:26:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:31:40.852 04:26:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:40.852 04:26:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:31:40.852 04:26:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:40.852 04:26:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:31:40.852 04:26:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:40.852 04:26:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:40.852 04:26:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:31:41.112 [2024-07-23 04:26:49.719387] bdev_raid.c: 
263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:31:41.112 /dev/nbd0 00:31:41.112 04:26:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:41.112 04:26:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:41.112 04:26:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:31:41.112 04:26:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # local i 00:31:41.112 04:26:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:31:41.112 04:26:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:31:41.112 04:26:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:31:41.112 04:26:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # break 00:31:41.112 04:26:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:31:41.112 04:26:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:31:41.112 04:26:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:41.112 1+0 records in 00:31:41.112 1+0 records out 00:31:41.112 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262056 s, 15.6 MB/s 00:31:41.112 04:26:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:41.112 04:26:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # size=4096 00:31:41.112 04:26:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:41.112 04:26:49 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:31:41.112 04:26:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # return 0 00:31:41.112 04:26:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:41.112 04:26:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:41.112 04:26:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:31:41.112 04:26:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:31:41.112 04:26:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:31:42.049 7936+0 records in 00:31:42.049 7936+0 records out 00:31:42.049 32505856 bytes (33 MB, 31 MiB) copied, 0.810593 s, 40.1 MB/s 00:31:42.049 04:26:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:31:42.049 04:26:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:42.049 04:26:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:42.049 04:26:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:42.049 04:26:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:31:42.049 04:26:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:42.050 04:26:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:31:42.309 04:26:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:42.309 [2024-07-23 04:26:50.837474] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:42.309 
04:26:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:42.309 04:26:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:42.309 04:26:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:42.309 04:26:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:42.309 04:26:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:42.309 04:26:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:31:42.309 04:26:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:31:42.309 04:26:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:31:42.309 [2024-07-23 04:26:51.050126] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:42.309 04:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:42.309 04:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:42.309 04:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:42.309 04:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:42.309 04:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:42.309 04:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:42.309 04:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:42.309 04:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:42.309 04:26:51 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:42.309 04:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:42.309 04:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:42.309 04:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:42.568 04:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:42.568 "name": "raid_bdev1", 00:31:42.568 "uuid": "0bbc8042-18a0-4856-a10a-c0c055b37ea9", 00:31:42.569 "strip_size_kb": 0, 00:31:42.569 "state": "online", 00:31:42.569 "raid_level": "raid1", 00:31:42.569 "superblock": true, 00:31:42.569 "num_base_bdevs": 2, 00:31:42.569 "num_base_bdevs_discovered": 1, 00:31:42.569 "num_base_bdevs_operational": 1, 00:31:42.569 "base_bdevs_list": [ 00:31:42.569 { 00:31:42.569 "name": null, 00:31:42.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:42.569 "is_configured": false, 00:31:42.569 "data_offset": 256, 00:31:42.569 "data_size": 7936 00:31:42.569 }, 00:31:42.569 { 00:31:42.569 "name": "BaseBdev2", 00:31:42.569 "uuid": "93fb2531-8468-57f6-b6e6-9dfaf91b0b29", 00:31:42.569 "is_configured": true, 00:31:42.569 "data_offset": 256, 00:31:42.569 "data_size": 7936 00:31:42.569 } 00:31:42.569 ] 00:31:42.569 }' 00:31:42.569 04:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:42.569 04:26:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:43.137 04:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:43.396 [2024-07-23 04:26:52.057029] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 
00:31:43.396 [2024-07-23 04:26:52.083851] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001a4410 00:31:43.396 [2024-07-23 04:26:52.086165] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:43.396 04:26:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # sleep 1 00:31:44.333 04:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:44.333 04:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:44.333 04:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:44.333 04:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:44.333 04:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:44.333 04:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:44.333 04:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:44.592 04:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:44.592 "name": "raid_bdev1", 00:31:44.592 "uuid": "0bbc8042-18a0-4856-a10a-c0c055b37ea9", 00:31:44.592 "strip_size_kb": 0, 00:31:44.592 "state": "online", 00:31:44.592 "raid_level": "raid1", 00:31:44.592 "superblock": true, 00:31:44.592 "num_base_bdevs": 2, 00:31:44.592 "num_base_bdevs_discovered": 2, 00:31:44.592 "num_base_bdevs_operational": 2, 00:31:44.592 "process": { 00:31:44.592 "type": "rebuild", 00:31:44.592 "target": "spare", 00:31:44.592 "progress": { 00:31:44.592 "blocks": 3072, 00:31:44.592 "percent": 38 00:31:44.592 } 00:31:44.592 }, 00:31:44.593 "base_bdevs_list": [ 00:31:44.593 { 00:31:44.593 
"name": "spare", 00:31:44.593 "uuid": "f7313425-c5a9-57ce-8aff-8a8631f8a3fb", 00:31:44.593 "is_configured": true, 00:31:44.593 "data_offset": 256, 00:31:44.593 "data_size": 7936 00:31:44.593 }, 00:31:44.593 { 00:31:44.593 "name": "BaseBdev2", 00:31:44.593 "uuid": "93fb2531-8468-57f6-b6e6-9dfaf91b0b29", 00:31:44.593 "is_configured": true, 00:31:44.593 "data_offset": 256, 00:31:44.593 "data_size": 7936 00:31:44.593 } 00:31:44.593 ] 00:31:44.593 }' 00:31:44.593 04:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:44.851 04:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:44.851 04:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:44.851 04:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:44.851 04:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:31:45.110 [2024-07-23 04:26:53.635572] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:45.110 [2024-07-23 04:26:53.699212] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:45.110 [2024-07-23 04:26:53.699273] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:45.110 [2024-07-23 04:26:53.699295] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:45.110 [2024-07-23 04:26:53.699319] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:45.110 04:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:45.110 04:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=raid_bdev1 00:31:45.110 04:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:45.110 04:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:45.110 04:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:45.110 04:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:45.110 04:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:45.110 04:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:45.110 04:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:45.110 04:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:45.110 04:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:45.110 04:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:45.369 04:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:45.369 "name": "raid_bdev1", 00:31:45.369 "uuid": "0bbc8042-18a0-4856-a10a-c0c055b37ea9", 00:31:45.369 "strip_size_kb": 0, 00:31:45.369 "state": "online", 00:31:45.369 "raid_level": "raid1", 00:31:45.369 "superblock": true, 00:31:45.369 "num_base_bdevs": 2, 00:31:45.369 "num_base_bdevs_discovered": 1, 00:31:45.369 "num_base_bdevs_operational": 1, 00:31:45.369 "base_bdevs_list": [ 00:31:45.369 { 00:31:45.369 "name": null, 00:31:45.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:45.369 "is_configured": false, 00:31:45.369 "data_offset": 256, 00:31:45.369 "data_size": 7936 00:31:45.369 }, 00:31:45.369 { 00:31:45.369 "name": "BaseBdev2", 
00:31:45.369 "uuid": "93fb2531-8468-57f6-b6e6-9dfaf91b0b29", 00:31:45.369 "is_configured": true, 00:31:45.369 "data_offset": 256, 00:31:45.369 "data_size": 7936 00:31:45.369 } 00:31:45.369 ] 00:31:45.369 }' 00:31:45.369 04:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:45.369 04:26:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:45.976 04:26:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:45.976 04:26:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:45.976 04:26:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:45.976 04:26:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:45.976 04:26:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:45.976 04:26:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:45.976 04:26:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:46.235 04:26:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:46.235 "name": "raid_bdev1", 00:31:46.235 "uuid": "0bbc8042-18a0-4856-a10a-c0c055b37ea9", 00:31:46.235 "strip_size_kb": 0, 00:31:46.235 "state": "online", 00:31:46.235 "raid_level": "raid1", 00:31:46.235 "superblock": true, 00:31:46.235 "num_base_bdevs": 2, 00:31:46.235 "num_base_bdevs_discovered": 1, 00:31:46.235 "num_base_bdevs_operational": 1, 00:31:46.235 "base_bdevs_list": [ 00:31:46.235 { 00:31:46.235 "name": null, 00:31:46.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:46.235 "is_configured": false, 00:31:46.235 "data_offset": 256, 00:31:46.235 
"data_size": 7936 00:31:46.235 }, 00:31:46.235 { 00:31:46.235 "name": "BaseBdev2", 00:31:46.235 "uuid": "93fb2531-8468-57f6-b6e6-9dfaf91b0b29", 00:31:46.235 "is_configured": true, 00:31:46.235 "data_offset": 256, 00:31:46.235 "data_size": 7936 00:31:46.235 } 00:31:46.235 ] 00:31:46.235 }' 00:31:46.235 04:26:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:46.235 04:26:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:46.235 04:26:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:46.235 04:26:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:46.235 04:26:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:46.494 [2024-07-23 04:26:55.076159] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:46.494 [2024-07-23 04:26:55.100455] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001a44e0 00:31:46.494 [2024-07-23 04:26:55.102750] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:46.494 04:26:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # sleep 1 00:31:47.432 04:26:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:47.432 04:26:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:47.432 04:26:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:47.432 04:26:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:47.432 04:26:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- 
# local raid_bdev_info 00:31:47.432 04:26:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:47.432 04:26:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:47.691 04:26:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:47.691 "name": "raid_bdev1", 00:31:47.691 "uuid": "0bbc8042-18a0-4856-a10a-c0c055b37ea9", 00:31:47.692 "strip_size_kb": 0, 00:31:47.692 "state": "online", 00:31:47.692 "raid_level": "raid1", 00:31:47.692 "superblock": true, 00:31:47.692 "num_base_bdevs": 2, 00:31:47.692 "num_base_bdevs_discovered": 2, 00:31:47.692 "num_base_bdevs_operational": 2, 00:31:47.692 "process": { 00:31:47.692 "type": "rebuild", 00:31:47.692 "target": "spare", 00:31:47.692 "progress": { 00:31:47.692 "blocks": 3072, 00:31:47.692 "percent": 38 00:31:47.692 } 00:31:47.692 }, 00:31:47.692 "base_bdevs_list": [ 00:31:47.692 { 00:31:47.692 "name": "spare", 00:31:47.692 "uuid": "f7313425-c5a9-57ce-8aff-8a8631f8a3fb", 00:31:47.692 "is_configured": true, 00:31:47.692 "data_offset": 256, 00:31:47.692 "data_size": 7936 00:31:47.692 }, 00:31:47.692 { 00:31:47.692 "name": "BaseBdev2", 00:31:47.692 "uuid": "93fb2531-8468-57f6-b6e6-9dfaf91b0b29", 00:31:47.692 "is_configured": true, 00:31:47.692 "data_offset": 256, 00:31:47.692 "data_size": 7936 00:31:47.692 } 00:31:47.692 ] 00:31:47.692 }' 00:31:47.692 04:26:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:47.692 04:26:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:47.692 04:26:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:47.692 04:26:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 
00:31:47.692 04:26:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:31:47.692 04:26:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:31:47.692 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:31:47.692 04:26:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:31:47.692 04:26:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:31:47.692 04:26:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:31:47.692 04:26:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@705 -- # local timeout=1102 00:31:47.692 04:26:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:47.692 04:26:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:47.692 04:26:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:47.692 04:26:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:47.692 04:26:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:47.692 04:26:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:47.692 04:26:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:47.692 04:26:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:47.951 04:26:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:47.951 "name": "raid_bdev1", 00:31:47.951 "uuid": "0bbc8042-18a0-4856-a10a-c0c055b37ea9", 
00:31:47.951 "strip_size_kb": 0, 00:31:47.951 "state": "online", 00:31:47.951 "raid_level": "raid1", 00:31:47.951 "superblock": true, 00:31:47.951 "num_base_bdevs": 2, 00:31:47.951 "num_base_bdevs_discovered": 2, 00:31:47.951 "num_base_bdevs_operational": 2, 00:31:47.951 "process": { 00:31:47.951 "type": "rebuild", 00:31:47.951 "target": "spare", 00:31:47.951 "progress": { 00:31:47.951 "blocks": 3840, 00:31:47.951 "percent": 48 00:31:47.951 } 00:31:47.951 }, 00:31:47.951 "base_bdevs_list": [ 00:31:47.951 { 00:31:47.951 "name": "spare", 00:31:47.951 "uuid": "f7313425-c5a9-57ce-8aff-8a8631f8a3fb", 00:31:47.951 "is_configured": true, 00:31:47.951 "data_offset": 256, 00:31:47.951 "data_size": 7936 00:31:47.951 }, 00:31:47.951 { 00:31:47.951 "name": "BaseBdev2", 00:31:47.951 "uuid": "93fb2531-8468-57f6-b6e6-9dfaf91b0b29", 00:31:47.951 "is_configured": true, 00:31:47.951 "data_offset": 256, 00:31:47.951 "data_size": 7936 00:31:47.951 } 00:31:47.951 ] 00:31:47.951 }' 00:31:47.951 04:26:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:47.951 04:26:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:47.951 04:26:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:48.211 04:26:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:48.211 04:26:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@710 -- # sleep 1 00:31:49.148 04:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:49.148 04:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:49.148 04:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:49.148 04:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local 
process_type=rebuild 00:31:49.148 04:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:49.148 04:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:49.148 04:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:49.148 04:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:49.408 04:26:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:49.408 "name": "raid_bdev1", 00:31:49.408 "uuid": "0bbc8042-18a0-4856-a10a-c0c055b37ea9", 00:31:49.408 "strip_size_kb": 0, 00:31:49.408 "state": "online", 00:31:49.408 "raid_level": "raid1", 00:31:49.408 "superblock": true, 00:31:49.408 "num_base_bdevs": 2, 00:31:49.408 "num_base_bdevs_discovered": 2, 00:31:49.408 "num_base_bdevs_operational": 2, 00:31:49.408 "process": { 00:31:49.408 "type": "rebuild", 00:31:49.408 "target": "spare", 00:31:49.408 "progress": { 00:31:49.408 "blocks": 7168, 00:31:49.408 "percent": 90 00:31:49.408 } 00:31:49.408 }, 00:31:49.408 "base_bdevs_list": [ 00:31:49.408 { 00:31:49.408 "name": "spare", 00:31:49.408 "uuid": "f7313425-c5a9-57ce-8aff-8a8631f8a3fb", 00:31:49.408 "is_configured": true, 00:31:49.408 "data_offset": 256, 00:31:49.408 "data_size": 7936 00:31:49.408 }, 00:31:49.408 { 00:31:49.408 "name": "BaseBdev2", 00:31:49.408 "uuid": "93fb2531-8468-57f6-b6e6-9dfaf91b0b29", 00:31:49.408 "is_configured": true, 00:31:49.408 "data_offset": 256, 00:31:49.408 "data_size": 7936 00:31:49.408 } 00:31:49.408 ] 00:31:49.408 }' 00:31:49.408 04:26:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:49.408 04:26:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:49.408 04:26:58 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:49.408 04:26:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:49.408 04:26:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@710 -- # sleep 1 00:31:49.667 [2024-07-23 04:26:58.227598] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:31:49.667 [2024-07-23 04:26:58.227672] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:31:49.667 [2024-07-23 04:26:58.227773] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:50.604 04:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:50.604 04:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:50.604 04:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:50.604 04:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:50.604 04:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:50.604 04:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:50.604 04:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:50.604 04:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:50.604 04:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:50.604 "name": "raid_bdev1", 00:31:50.605 "uuid": "0bbc8042-18a0-4856-a10a-c0c055b37ea9", 00:31:50.605 "strip_size_kb": 0, 00:31:50.605 "state": "online", 00:31:50.605 "raid_level": 
"raid1", 00:31:50.605 "superblock": true, 00:31:50.605 "num_base_bdevs": 2, 00:31:50.605 "num_base_bdevs_discovered": 2, 00:31:50.605 "num_base_bdevs_operational": 2, 00:31:50.605 "base_bdevs_list": [ 00:31:50.605 { 00:31:50.605 "name": "spare", 00:31:50.605 "uuid": "f7313425-c5a9-57ce-8aff-8a8631f8a3fb", 00:31:50.605 "is_configured": true, 00:31:50.605 "data_offset": 256, 00:31:50.605 "data_size": 7936 00:31:50.605 }, 00:31:50.605 { 00:31:50.605 "name": "BaseBdev2", 00:31:50.605 "uuid": "93fb2531-8468-57f6-b6e6-9dfaf91b0b29", 00:31:50.605 "is_configured": true, 00:31:50.605 "data_offset": 256, 00:31:50.605 "data_size": 7936 00:31:50.605 } 00:31:50.605 ] 00:31:50.605 }' 00:31:50.605 04:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:50.605 04:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:31:50.605 04:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:50.864 04:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:31:50.864 04:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # break 00:31:50.864 04:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:50.864 04:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:50.864 04:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:50.864 04:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:50.864 04:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:50.864 04:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:31:50.864 04:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:50.864 04:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:50.864 "name": "raid_bdev1", 00:31:50.864 "uuid": "0bbc8042-18a0-4856-a10a-c0c055b37ea9", 00:31:50.864 "strip_size_kb": 0, 00:31:50.864 "state": "online", 00:31:50.864 "raid_level": "raid1", 00:31:50.864 "superblock": true, 00:31:50.864 "num_base_bdevs": 2, 00:31:50.864 "num_base_bdevs_discovered": 2, 00:31:50.864 "num_base_bdevs_operational": 2, 00:31:50.864 "base_bdevs_list": [ 00:31:50.864 { 00:31:50.864 "name": "spare", 00:31:50.864 "uuid": "f7313425-c5a9-57ce-8aff-8a8631f8a3fb", 00:31:50.864 "is_configured": true, 00:31:50.864 "data_offset": 256, 00:31:50.864 "data_size": 7936 00:31:50.864 }, 00:31:50.864 { 00:31:50.864 "name": "BaseBdev2", 00:31:50.864 "uuid": "93fb2531-8468-57f6-b6e6-9dfaf91b0b29", 00:31:50.864 "is_configured": true, 00:31:50.864 "data_offset": 256, 00:31:50.864 "data_size": 7936 00:31:50.864 } 00:31:50.864 ] 00:31:50.864 }' 00:31:51.123 04:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:51.123 04:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:51.123 04:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:51.123 04:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:51.123 04:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:51.123 04:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:51.123 04:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:51.123 04:26:59 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:51.123 04:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:51.123 04:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:51.123 04:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:51.123 04:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:51.123 04:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:51.123 04:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:51.123 04:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:51.123 04:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:51.382 04:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:51.382 "name": "raid_bdev1", 00:31:51.382 "uuid": "0bbc8042-18a0-4856-a10a-c0c055b37ea9", 00:31:51.382 "strip_size_kb": 0, 00:31:51.382 "state": "online", 00:31:51.382 "raid_level": "raid1", 00:31:51.382 "superblock": true, 00:31:51.382 "num_base_bdevs": 2, 00:31:51.382 "num_base_bdevs_discovered": 2, 00:31:51.382 "num_base_bdevs_operational": 2, 00:31:51.382 "base_bdevs_list": [ 00:31:51.382 { 00:31:51.382 "name": "spare", 00:31:51.382 "uuid": "f7313425-c5a9-57ce-8aff-8a8631f8a3fb", 00:31:51.382 "is_configured": true, 00:31:51.382 "data_offset": 256, 00:31:51.382 "data_size": 7936 00:31:51.382 }, 00:31:51.382 { 00:31:51.382 "name": "BaseBdev2", 00:31:51.382 "uuid": "93fb2531-8468-57f6-b6e6-9dfaf91b0b29", 00:31:51.382 "is_configured": true, 00:31:51.382 "data_offset": 256, 00:31:51.382 "data_size": 7936 
00:31:51.382 } 00:31:51.382 ] 00:31:51.382 }' 00:31:51.382 04:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:51.382 04:26:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:51.949 04:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:52.208 [2024-07-23 04:27:00.755684] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:52.208 [2024-07-23 04:27:00.755721] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:52.208 [2024-07-23 04:27:00.755808] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:52.208 [2024-07-23 04:27:00.755888] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:52.208 [2024-07-23 04:27:00.755906] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041a80 name raid_bdev1, state offline 00:31:52.208 04:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:52.208 04:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # jq length 00:31:52.468 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:31:52.468 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:31:52.468 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:31:52.468 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:31:52.468 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk-raid.sock 00:31:52.468 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:31:52.468 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:52.468 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:52.468 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:52.468 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:31:52.468 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:52.468 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:52.468 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:31:52.468 /dev/nbd0 00:31:52.728 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:52.728 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:52.728 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:31:52.728 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # local i 00:31:52.728 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:31:52.728 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:31:52.728 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:31:52.728 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # break 00:31:52.728 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:31:52.728 04:27:01 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:31:52.728 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:52.728 1+0 records in 00:31:52.728 1+0 records out 00:31:52.728 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253753 s, 16.1 MB/s 00:31:52.728 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:52.728 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # size=4096 00:31:52.728 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:52.728 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:31:52.728 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # return 0 00:31:52.728 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:52.728 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:52.728 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:31:52.728 /dev/nbd1 00:31:52.987 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:52.987 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:52.987 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:31:52.987 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # local i 00:31:52.987 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:31:52.988 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:31:52.988 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:31:52.988 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # break 00:31:52.988 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:31:52.988 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:31:52.988 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:52.988 1+0 records in 00:31:52.988 1+0 records out 00:31:52.988 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000344959 s, 11.9 MB/s 00:31:52.988 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:52.988 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # size=4096 00:31:52.988 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:52.988 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:31:52.988 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # return 0 00:31:52.988 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:52.988 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:52.988 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:31:52.988 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # 
nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:31:52.988 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:52.988 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:52.988 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:52.988 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:31:52.988 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:52.988 04:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:31:53.247 04:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:53.247 04:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:53.247 04:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:53.247 04:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:53.247 04:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:53.247 04:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:53.247 04:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:31:53.247 04:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:31:53.247 04:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:53.247 04:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:31:53.507 04:27:02 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:53.507 04:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:53.507 04:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:53.507 04:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:53.507 04:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:53.507 04:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:53.507 04:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:31:53.507 04:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:31:53.507 04:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:31:53.507 04:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:31:53.766 04:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:54.026 [2024-07-23 04:27:02.712952] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:54.026 [2024-07-23 04:27:02.713013] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:54.026 [2024-07-23 04:27:02.713044] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043280 00:31:54.026 [2024-07-23 04:27:02.713060] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:54.026 [2024-07-23 04:27:02.715895] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:54.026 [2024-07-23 04:27:02.715930] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:54.026 [2024-07-23 04:27:02.716042] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:31:54.026 [2024-07-23 04:27:02.716108] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:54.026 [2024-07-23 04:27:02.716325] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:54.026 spare 00:31:54.026 04:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:54.026 04:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:54.026 04:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:54.026 04:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:54.026 04:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:54.026 04:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:54.026 04:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:54.026 04:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:54.026 04:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:54.026 04:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:54.026 04:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:54.026 04:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:54.286 [2024-07-23 04:27:02.816675] 
bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000043880 00:31:54.286 [2024-07-23 04:27:02.816712] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:54.286 [2024-07-23 04:27:02.817072] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c9390 00:31:54.286 [2024-07-23 04:27:02.817383] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000043880 00:31:54.286 [2024-07-23 04:27:02.817400] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000043880 00:31:54.286 [2024-07-23 04:27:02.817614] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:54.286 04:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:54.286 "name": "raid_bdev1", 00:31:54.286 "uuid": "0bbc8042-18a0-4856-a10a-c0c055b37ea9", 00:31:54.286 "strip_size_kb": 0, 00:31:54.286 "state": "online", 00:31:54.286 "raid_level": "raid1", 00:31:54.286 "superblock": true, 00:31:54.286 "num_base_bdevs": 2, 00:31:54.286 "num_base_bdevs_discovered": 2, 00:31:54.286 "num_base_bdevs_operational": 2, 00:31:54.286 "base_bdevs_list": [ 00:31:54.286 { 00:31:54.286 "name": "spare", 00:31:54.286 "uuid": "f7313425-c5a9-57ce-8aff-8a8631f8a3fb", 00:31:54.286 "is_configured": true, 00:31:54.286 "data_offset": 256, 00:31:54.286 "data_size": 7936 00:31:54.286 }, 00:31:54.286 { 00:31:54.286 "name": "BaseBdev2", 00:31:54.286 "uuid": "93fb2531-8468-57f6-b6e6-9dfaf91b0b29", 00:31:54.286 "is_configured": true, 00:31:54.286 "data_offset": 256, 00:31:54.286 "data_size": 7936 00:31:54.286 } 00:31:54.286 ] 00:31:54.286 }' 00:31:54.286 04:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:54.286 04:27:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:54.853 04:27:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@748 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:31:54.853 04:27:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:54.853 04:27:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:54.853 04:27:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:54.853 04:27:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:54.853 04:27:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:54.853 04:27:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:55.112 04:27:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:55.112 "name": "raid_bdev1", 00:31:55.112 "uuid": "0bbc8042-18a0-4856-a10a-c0c055b37ea9", 00:31:55.112 "strip_size_kb": 0, 00:31:55.112 "state": "online", 00:31:55.112 "raid_level": "raid1", 00:31:55.112 "superblock": true, 00:31:55.112 "num_base_bdevs": 2, 00:31:55.112 "num_base_bdevs_discovered": 2, 00:31:55.112 "num_base_bdevs_operational": 2, 00:31:55.112 "base_bdevs_list": [ 00:31:55.112 { 00:31:55.112 "name": "spare", 00:31:55.112 "uuid": "f7313425-c5a9-57ce-8aff-8a8631f8a3fb", 00:31:55.112 "is_configured": true, 00:31:55.112 "data_offset": 256, 00:31:55.112 "data_size": 7936 00:31:55.112 }, 00:31:55.112 { 00:31:55.112 "name": "BaseBdev2", 00:31:55.112 "uuid": "93fb2531-8468-57f6-b6e6-9dfaf91b0b29", 00:31:55.112 "is_configured": true, 00:31:55.112 "data_offset": 256, 00:31:55.112 "data_size": 7936 00:31:55.112 } 00:31:55.112 ] 00:31:55.112 }' 00:31:55.112 04:27:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:55.112 04:27:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none 
== \n\o\n\e ]] 00:31:55.112 04:27:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:55.112 04:27:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:55.112 04:27:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:55.112 04:27:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:31:55.371 04:27:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:31:55.371 04:27:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:31:55.630 [2024-07-23 04:27:04.265573] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:55.630 04:27:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:55.631 04:27:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:55.631 04:27:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:55.631 04:27:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:55.631 04:27:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:55.631 04:27:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:55.631 04:27:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:55.631 04:27:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:55.631 04:27:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:31:55.631 04:27:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:55.631 04:27:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:55.631 04:27:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:55.890 04:27:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:55.890 "name": "raid_bdev1", 00:31:55.890 "uuid": "0bbc8042-18a0-4856-a10a-c0c055b37ea9", 00:31:55.890 "strip_size_kb": 0, 00:31:55.890 "state": "online", 00:31:55.890 "raid_level": "raid1", 00:31:55.890 "superblock": true, 00:31:55.890 "num_base_bdevs": 2, 00:31:55.890 "num_base_bdevs_discovered": 1, 00:31:55.890 "num_base_bdevs_operational": 1, 00:31:55.890 "base_bdevs_list": [ 00:31:55.890 { 00:31:55.890 "name": null, 00:31:55.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:55.890 "is_configured": false, 00:31:55.890 "data_offset": 256, 00:31:55.890 "data_size": 7936 00:31:55.890 }, 00:31:55.890 { 00:31:55.890 "name": "BaseBdev2", 00:31:55.890 "uuid": "93fb2531-8468-57f6-b6e6-9dfaf91b0b29", 00:31:55.890 "is_configured": true, 00:31:55.890 "data_offset": 256, 00:31:55.890 "data_size": 7936 00:31:55.890 } 00:31:55.890 ] 00:31:55.890 }' 00:31:55.890 04:27:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:55.890 04:27:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:56.457 04:27:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:56.716 [2024-07-23 04:27:05.284349] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:56.716 [2024-07-23 
04:27:05.284554] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:31:56.717 [2024-07-23 04:27:05.284581] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:31:56.717 [2024-07-23 04:27:05.284618] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:56.717 [2024-07-23 04:27:05.309337] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c9460 00:31:56.717 [2024-07-23 04:27:05.311552] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:56.717 04:27:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # sleep 1 00:31:57.654 04:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:57.654 04:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:57.654 04:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:57.654 04:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:57.654 04:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:57.654 04:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:57.654 04:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:57.913 04:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:57.913 "name": "raid_bdev1", 00:31:57.913 "uuid": "0bbc8042-18a0-4856-a10a-c0c055b37ea9", 00:31:57.913 "strip_size_kb": 0, 00:31:57.913 "state": "online", 00:31:57.913 "raid_level": "raid1", 00:31:57.913 
"superblock": true, 00:31:57.913 "num_base_bdevs": 2, 00:31:57.913 "num_base_bdevs_discovered": 2, 00:31:57.913 "num_base_bdevs_operational": 2, 00:31:57.913 "process": { 00:31:57.913 "type": "rebuild", 00:31:57.913 "target": "spare", 00:31:57.913 "progress": { 00:31:57.913 "blocks": 3072, 00:31:57.913 "percent": 38 00:31:57.913 } 00:31:57.913 }, 00:31:57.913 "base_bdevs_list": [ 00:31:57.913 { 00:31:57.913 "name": "spare", 00:31:57.913 "uuid": "f7313425-c5a9-57ce-8aff-8a8631f8a3fb", 00:31:57.913 "is_configured": true, 00:31:57.913 "data_offset": 256, 00:31:57.913 "data_size": 7936 00:31:57.913 }, 00:31:57.913 { 00:31:57.913 "name": "BaseBdev2", 00:31:57.913 "uuid": "93fb2531-8468-57f6-b6e6-9dfaf91b0b29", 00:31:57.913 "is_configured": true, 00:31:57.913 "data_offset": 256, 00:31:57.913 "data_size": 7936 00:31:57.913 } 00:31:57.913 ] 00:31:57.913 }' 00:31:57.913 04:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:57.913 04:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:57.913 04:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:57.913 04:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:57.913 04:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:31:58.172 [2024-07-23 04:27:06.864543] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:58.172 [2024-07-23 04:27:06.924630] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:58.172 [2024-07-23 04:27:06.924697] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:58.172 [2024-07-23 04:27:06.924718] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:31:58.172 [2024-07-23 04:27:06.924734] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:58.431 04:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:58.431 04:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:58.431 04:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:58.431 04:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:58.431 04:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:58.431 04:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:58.431 04:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:58.431 04:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:58.431 04:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:58.431 04:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:58.431 04:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:58.431 04:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:58.690 04:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:58.690 "name": "raid_bdev1", 00:31:58.690 "uuid": "0bbc8042-18a0-4856-a10a-c0c055b37ea9", 00:31:58.690 "strip_size_kb": 0, 00:31:58.690 "state": "online", 00:31:58.690 "raid_level": "raid1", 00:31:58.690 "superblock": true, 00:31:58.690 "num_base_bdevs": 2, 
00:31:58.690 "num_base_bdevs_discovered": 1, 00:31:58.690 "num_base_bdevs_operational": 1, 00:31:58.690 "base_bdevs_list": [ 00:31:58.690 { 00:31:58.690 "name": null, 00:31:58.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:58.690 "is_configured": false, 00:31:58.690 "data_offset": 256, 00:31:58.690 "data_size": 7936 00:31:58.690 }, 00:31:58.690 { 00:31:58.690 "name": "BaseBdev2", 00:31:58.690 "uuid": "93fb2531-8468-57f6-b6e6-9dfaf91b0b29", 00:31:58.690 "is_configured": true, 00:31:58.690 "data_offset": 256, 00:31:58.690 "data_size": 7936 00:31:58.690 } 00:31:58.690 ] 00:31:58.690 }' 00:31:58.690 04:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:58.690 04:27:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:59.258 04:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:59.258 [2024-07-23 04:27:07.953161] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:59.258 [2024-07-23 04:27:07.953225] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:59.258 [2024-07-23 04:27:07.953252] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043e80 00:31:59.258 [2024-07-23 04:27:07.953270] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:59.258 [2024-07-23 04:27:07.953874] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:59.258 [2024-07-23 04:27:07.953904] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:59.258 [2024-07-23 04:27:07.954012] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:31:59.258 [2024-07-23 04:27:07.954033] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number 
on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:31:59.258 [2024-07-23 04:27:07.954049] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:31:59.258 [2024-07-23 04:27:07.954085] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:59.258 [2024-07-23 04:27:07.977061] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c9530 00:31:59.258 spare 00:31:59.258 [2024-07-23 04:27:07.979394] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:59.258 04:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # sleep 1 00:32:00.639 04:27:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:00.639 04:27:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:00.639 04:27:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:00.639 04:27:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:00.639 04:27:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:00.639 04:27:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:00.639 04:27:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:00.639 04:27:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:00.639 "name": "raid_bdev1", 00:32:00.639 "uuid": "0bbc8042-18a0-4856-a10a-c0c055b37ea9", 00:32:00.639 "strip_size_kb": 0, 00:32:00.639 "state": "online", 00:32:00.639 "raid_level": "raid1", 00:32:00.639 "superblock": true, 00:32:00.639 "num_base_bdevs": 2, 00:32:00.639 
"num_base_bdevs_discovered": 2, 00:32:00.639 "num_base_bdevs_operational": 2, 00:32:00.639 "process": { 00:32:00.639 "type": "rebuild", 00:32:00.639 "target": "spare", 00:32:00.639 "progress": { 00:32:00.639 "blocks": 3072, 00:32:00.639 "percent": 38 00:32:00.639 } 00:32:00.639 }, 00:32:00.639 "base_bdevs_list": [ 00:32:00.639 { 00:32:00.639 "name": "spare", 00:32:00.639 "uuid": "f7313425-c5a9-57ce-8aff-8a8631f8a3fb", 00:32:00.639 "is_configured": true, 00:32:00.639 "data_offset": 256, 00:32:00.639 "data_size": 7936 00:32:00.639 }, 00:32:00.639 { 00:32:00.639 "name": "BaseBdev2", 00:32:00.639 "uuid": "93fb2531-8468-57f6-b6e6-9dfaf91b0b29", 00:32:00.639 "is_configured": true, 00:32:00.639 "data_offset": 256, 00:32:00.639 "data_size": 7936 00:32:00.639 } 00:32:00.639 ] 00:32:00.639 }' 00:32:00.639 04:27:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:00.639 04:27:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:00.639 04:27:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:00.639 04:27:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:00.639 04:27:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:32:00.970 [2024-07-23 04:27:09.528747] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:00.970 [2024-07-23 04:27:09.592359] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:00.970 [2024-07-23 04:27:09.592418] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:00.970 [2024-07-23 04:27:09.592443] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:00.970 [2024-07-23 04:27:09.592455] 
bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:00.970 04:27:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:00.970 04:27:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:00.970 04:27:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:00.970 04:27:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:00.970 04:27:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:00.970 04:27:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:00.970 04:27:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:00.970 04:27:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:00.970 04:27:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:00.970 04:27:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:00.970 04:27:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:00.970 04:27:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:01.230 04:27:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:01.230 "name": "raid_bdev1", 00:32:01.230 "uuid": "0bbc8042-18a0-4856-a10a-c0c055b37ea9", 00:32:01.230 "strip_size_kb": 0, 00:32:01.230 "state": "online", 00:32:01.230 "raid_level": "raid1", 00:32:01.230 "superblock": true, 00:32:01.230 "num_base_bdevs": 2, 00:32:01.230 "num_base_bdevs_discovered": 1, 00:32:01.230 
"num_base_bdevs_operational": 1, 00:32:01.230 "base_bdevs_list": [ 00:32:01.230 { 00:32:01.230 "name": null, 00:32:01.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:01.230 "is_configured": false, 00:32:01.230 "data_offset": 256, 00:32:01.230 "data_size": 7936 00:32:01.230 }, 00:32:01.230 { 00:32:01.230 "name": "BaseBdev2", 00:32:01.230 "uuid": "93fb2531-8468-57f6-b6e6-9dfaf91b0b29", 00:32:01.230 "is_configured": true, 00:32:01.230 "data_offset": 256, 00:32:01.230 "data_size": 7936 00:32:01.230 } 00:32:01.230 ] 00:32:01.230 }' 00:32:01.230 04:27:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:01.230 04:27:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:01.799 04:27:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:01.799 04:27:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:01.799 04:27:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:01.799 04:27:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:01.799 04:27:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:01.799 04:27:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:01.799 04:27:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:02.059 04:27:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:02.059 "name": "raid_bdev1", 00:32:02.059 "uuid": "0bbc8042-18a0-4856-a10a-c0c055b37ea9", 00:32:02.059 "strip_size_kb": 0, 00:32:02.059 "state": "online", 00:32:02.059 "raid_level": "raid1", 00:32:02.059 "superblock": true, 00:32:02.059 
"num_base_bdevs": 2, 00:32:02.059 "num_base_bdevs_discovered": 1, 00:32:02.059 "num_base_bdevs_operational": 1, 00:32:02.059 "base_bdevs_list": [ 00:32:02.059 { 00:32:02.059 "name": null, 00:32:02.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:02.059 "is_configured": false, 00:32:02.059 "data_offset": 256, 00:32:02.059 "data_size": 7936 00:32:02.059 }, 00:32:02.059 { 00:32:02.059 "name": "BaseBdev2", 00:32:02.059 "uuid": "93fb2531-8468-57f6-b6e6-9dfaf91b0b29", 00:32:02.059 "is_configured": true, 00:32:02.059 "data_offset": 256, 00:32:02.059 "data_size": 7936 00:32:02.059 } 00:32:02.059 ] 00:32:02.059 }' 00:32:02.059 04:27:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:02.059 04:27:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:02.059 04:27:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:02.059 04:27:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:02.059 04:27:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:32:02.318 04:27:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:02.577 [2024-07-23 04:27:11.206253] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:32:02.577 [2024-07-23 04:27:11.206311] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:02.577 [2024-07-23 04:27:11.206340] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000044480 00:32:02.577 [2024-07-23 04:27:11.206355] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:32:02.577 [2024-07-23 04:27:11.206925] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:02.577 [2024-07-23 04:27:11.206950] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:02.577 [2024-07-23 04:27:11.207045] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:32:02.577 [2024-07-23 04:27:11.207065] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:32:02.577 [2024-07-23 04:27:11.207081] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:32:02.577 BaseBdev1 00:32:02.577 04:27:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # sleep 1 00:32:03.515 04:27:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:03.515 04:27:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:03.515 04:27:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:03.515 04:27:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:03.515 04:27:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:03.515 04:27:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:03.515 04:27:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:03.515 04:27:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:03.515 04:27:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:03.515 04:27:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:03.515 04:27:12 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:03.515 04:27:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:03.774 04:27:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:03.774 "name": "raid_bdev1", 00:32:03.774 "uuid": "0bbc8042-18a0-4856-a10a-c0c055b37ea9", 00:32:03.774 "strip_size_kb": 0, 00:32:03.774 "state": "online", 00:32:03.774 "raid_level": "raid1", 00:32:03.774 "superblock": true, 00:32:03.774 "num_base_bdevs": 2, 00:32:03.774 "num_base_bdevs_discovered": 1, 00:32:03.774 "num_base_bdevs_operational": 1, 00:32:03.774 "base_bdevs_list": [ 00:32:03.774 { 00:32:03.774 "name": null, 00:32:03.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:03.774 "is_configured": false, 00:32:03.774 "data_offset": 256, 00:32:03.774 "data_size": 7936 00:32:03.774 }, 00:32:03.774 { 00:32:03.774 "name": "BaseBdev2", 00:32:03.774 "uuid": "93fb2531-8468-57f6-b6e6-9dfaf91b0b29", 00:32:03.774 "is_configured": true, 00:32:03.774 "data_offset": 256, 00:32:03.774 "data_size": 7936 00:32:03.774 } 00:32:03.774 ] 00:32:03.774 }' 00:32:03.774 04:27:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:03.774 04:27:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:04.343 04:27:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:04.343 04:27:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:04.343 04:27:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:04.343 04:27:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:04.343 04:27:13 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:04.343 04:27:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:04.343 04:27:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:04.603 04:27:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:04.603 "name": "raid_bdev1", 00:32:04.603 "uuid": "0bbc8042-18a0-4856-a10a-c0c055b37ea9", 00:32:04.603 "strip_size_kb": 0, 00:32:04.603 "state": "online", 00:32:04.603 "raid_level": "raid1", 00:32:04.603 "superblock": true, 00:32:04.603 "num_base_bdevs": 2, 00:32:04.603 "num_base_bdevs_discovered": 1, 00:32:04.603 "num_base_bdevs_operational": 1, 00:32:04.603 "base_bdevs_list": [ 00:32:04.603 { 00:32:04.603 "name": null, 00:32:04.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:04.603 "is_configured": false, 00:32:04.603 "data_offset": 256, 00:32:04.603 "data_size": 7936 00:32:04.603 }, 00:32:04.603 { 00:32:04.603 "name": "BaseBdev2", 00:32:04.603 "uuid": "93fb2531-8468-57f6-b6e6-9dfaf91b0b29", 00:32:04.603 "is_configured": true, 00:32:04.603 "data_offset": 256, 00:32:04.603 "data_size": 7936 00:32:04.603 } 00:32:04.603 ] 00:32:04.603 }' 00:32:04.603 04:27:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:04.603 04:27:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:04.603 04:27:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:04.603 04:27:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:04.603 04:27:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:04.603 04:27:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@648 -- # local es=0 00:32:04.603 04:27:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:04.603 04:27:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:32:04.603 04:27:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:04.603 04:27:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:32:04.603 04:27:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:04.603 04:27:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:32:04.603 04:27:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:04.603 04:27:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:32:04.603 04:27:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:32:04.603 04:27:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:04.861 [2024-07-23 04:27:13.584730] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:04.861 [2024-07-23 04:27:13.584902] 
bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:32:04.861 [2024-07-23 04:27:13.584922] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:32:04.861 request: 00:32:04.861 { 00:32:04.861 "base_bdev": "BaseBdev1", 00:32:04.861 "raid_bdev": "raid_bdev1", 00:32:04.861 "method": "bdev_raid_add_base_bdev", 00:32:04.861 "req_id": 1 00:32:04.861 } 00:32:04.861 Got JSON-RPC error response 00:32:04.861 response: 00:32:04.861 { 00:32:04.861 "code": -22, 00:32:04.861 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:32:04.861 } 00:32:04.861 04:27:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@651 -- # es=1 00:32:04.861 04:27:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:04.861 04:27:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:04.861 04:27:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:04.861 04:27:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # sleep 1 00:32:06.238 04:27:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:06.238 04:27:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:06.238 04:27:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:06.238 04:27:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:06.238 04:27:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:06.238 04:27:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:06.238 04:27:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:32:06.238 04:27:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:06.238 04:27:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:06.238 04:27:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:06.238 04:27:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:06.238 04:27:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:06.238 04:27:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:06.238 "name": "raid_bdev1", 00:32:06.238 "uuid": "0bbc8042-18a0-4856-a10a-c0c055b37ea9", 00:32:06.238 "strip_size_kb": 0, 00:32:06.238 "state": "online", 00:32:06.238 "raid_level": "raid1", 00:32:06.238 "superblock": true, 00:32:06.238 "num_base_bdevs": 2, 00:32:06.238 "num_base_bdevs_discovered": 1, 00:32:06.238 "num_base_bdevs_operational": 1, 00:32:06.238 "base_bdevs_list": [ 00:32:06.238 { 00:32:06.238 "name": null, 00:32:06.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:06.238 "is_configured": false, 00:32:06.238 "data_offset": 256, 00:32:06.238 "data_size": 7936 00:32:06.238 }, 00:32:06.238 { 00:32:06.238 "name": "BaseBdev2", 00:32:06.238 "uuid": "93fb2531-8468-57f6-b6e6-9dfaf91b0b29", 00:32:06.238 "is_configured": true, 00:32:06.238 "data_offset": 256, 00:32:06.238 "data_size": 7936 00:32:06.238 } 00:32:06.238 ] 00:32:06.238 }' 00:32:06.238 04:27:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:06.238 04:27:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:06.806 04:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:06.806 04:27:15 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:06.806 04:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:06.806 04:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:06.806 04:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:06.806 04:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:06.806 04:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:07.066 04:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:07.066 "name": "raid_bdev1", 00:32:07.066 "uuid": "0bbc8042-18a0-4856-a10a-c0c055b37ea9", 00:32:07.066 "strip_size_kb": 0, 00:32:07.066 "state": "online", 00:32:07.066 "raid_level": "raid1", 00:32:07.066 "superblock": true, 00:32:07.066 "num_base_bdevs": 2, 00:32:07.066 "num_base_bdevs_discovered": 1, 00:32:07.066 "num_base_bdevs_operational": 1, 00:32:07.066 "base_bdevs_list": [ 00:32:07.066 { 00:32:07.066 "name": null, 00:32:07.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:07.066 "is_configured": false, 00:32:07.066 "data_offset": 256, 00:32:07.066 "data_size": 7936 00:32:07.066 }, 00:32:07.066 { 00:32:07.066 "name": "BaseBdev2", 00:32:07.066 "uuid": "93fb2531-8468-57f6-b6e6-9dfaf91b0b29", 00:32:07.066 "is_configured": true, 00:32:07.066 "data_offset": 256, 00:32:07.066 "data_size": 7936 00:32:07.066 } 00:32:07.066 ] 00:32:07.066 }' 00:32:07.066 04:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:07.066 04:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:07.066 04:27:15 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:07.066 04:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:07.066 04:27:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@782 -- # killprocess 2807919 00:32:07.066 04:27:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@948 -- # '[' -z 2807919 ']' 00:32:07.066 04:27:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@952 -- # kill -0 2807919 00:32:07.066 04:27:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@953 -- # uname 00:32:07.066 04:27:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:07.066 04:27:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2807919 00:32:07.066 04:27:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:07.066 04:27:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:07.066 04:27:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2807919' 00:32:07.066 killing process with pid 2807919 00:32:07.066 04:27:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@967 -- # kill 2807919 00:32:07.066 Received shutdown signal, test time was about 60.000000 seconds 00:32:07.066 00:32:07.066 Latency(us) 00:32:07.066 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:07.066 =================================================================================================================== 00:32:07.066 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:07.066 [2024-07-23 04:27:15.787594] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:07.066 04:27:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # wait 2807919 
00:32:07.066 [2024-07-23 04:27:15.787725] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:07.066 [2024-07-23 04:27:15.787789] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:07.066 [2024-07-23 04:27:15.787806] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000043880 name raid_bdev1, state offline 00:32:07.325 [2024-07-23 04:27:16.106242] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:09.233 04:27:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # return 0 00:32:09.233 00:32:09.233 real 0m32.606s 00:32:09.233 user 0m48.827s 00:32:09.233 sys 0m5.090s 00:32:09.233 04:27:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:09.233 04:27:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:09.233 ************************************ 00:32:09.233 END TEST raid_rebuild_test_sb_4k 00:32:09.233 ************************************ 00:32:09.233 04:27:17 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:32:09.233 04:27:17 bdev_raid -- bdev/bdev_raid.sh@904 -- # base_malloc_params='-m 32' 00:32:09.233 04:27:17 bdev_raid -- bdev/bdev_raid.sh@905 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:32:09.233 04:27:17 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:32:09.233 04:27:17 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:09.233 04:27:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:09.233 ************************************ 00:32:09.233 START TEST raid_state_function_test_sb_md_separate 00:32:09.233 ************************************ 00:32:09.233 04:27:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:32:09.233 04:27:17 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:32:09.233 04:27:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:32:09.233 04:27:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:32:09.233 04:27:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:32:09.233 04:27:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:32:09.233 04:27:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:09.233 04:27:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:32:09.233 04:27:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:32:09.233 04:27:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:09.233 04:27:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:32:09.233 04:27:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:32:09.233 04:27:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:09.233 04:27:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:32:09.233 04:27:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:32:09.233 04:27:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:32:09.233 04:27:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # local strip_size 00:32:09.233 04:27:17 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:32:09.233 04:27:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:32:09.233 04:27:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:32:09.233 04:27:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:32:09.233 04:27:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:32:09.233 04:27:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:32:09.233 04:27:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # raid_pid=2814210 00:32:09.233 04:27:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2814210' 00:32:09.233 Process raid pid: 2814210 00:32:09.233 04:27:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:32:09.233 04:27:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@246 -- # waitforlisten 2814210 /var/tmp/spdk-raid.sock 00:32:09.233 04:27:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@829 -- # '[' -z 2814210 ']' 00:32:09.233 04:27:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:32:09.233 04:27:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:09.233 04:27:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk-raid.sock...' 00:32:09.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:32:09.233 04:27:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:09.233 04:27:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:09.233 [2024-07-23 04:27:17.953026] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:32:09.233 [2024-07-23 04:27:17.953149] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:09.493 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:09.493 EAL: Requested device 0000:3d:01.0 cannot be used 00:32:09.493 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:09.493 EAL: Requested device 0000:3d:01.1 cannot be used 00:32:09.493 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:09.493 EAL: Requested device 0000:3d:01.2 cannot be used 00:32:09.493 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:09.493 EAL: Requested device 0000:3d:01.3 cannot be used 00:32:09.493 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:09.493 EAL: Requested device 0000:3d:01.4 cannot be used 00:32:09.493 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:09.493 EAL: Requested device 0000:3d:01.5 cannot be used 00:32:09.493 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:09.493 EAL: Requested device 0000:3d:01.6 cannot be used 00:32:09.493 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:09.493 EAL: Requested device 0000:3d:01.7 cannot be used 00:32:09.493 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:09.493 EAL: 
Requested device 0000:3d:02.0 cannot be used 00:32:09.493 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:09.493 EAL: Requested device 0000:3d:02.1 cannot be used 00:32:09.493 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:09.493 EAL: Requested device 0000:3d:02.2 cannot be used 00:32:09.493 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:09.493 EAL: Requested device 0000:3d:02.3 cannot be used 00:32:09.493 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:09.493 EAL: Requested device 0000:3d:02.4 cannot be used 00:32:09.493 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:09.493 EAL: Requested device 0000:3d:02.5 cannot be used 00:32:09.493 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:09.493 EAL: Requested device 0000:3d:02.6 cannot be used 00:32:09.493 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:09.493 EAL: Requested device 0000:3d:02.7 cannot be used 00:32:09.493 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:09.493 EAL: Requested device 0000:3f:01.0 cannot be used 00:32:09.493 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:09.493 EAL: Requested device 0000:3f:01.1 cannot be used 00:32:09.493 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:09.493 EAL: Requested device 0000:3f:01.2 cannot be used 00:32:09.493 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:09.493 EAL: Requested device 0000:3f:01.3 cannot be used 00:32:09.493 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:09.493 EAL: Requested device 0000:3f:01.4 cannot be used 00:32:09.493 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:09.493 EAL: Requested device 0000:3f:01.5 cannot be used 00:32:09.493 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:09.493 EAL: Requested device 
0000:3f:01.6 cannot be used 00:32:09.493 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:09.493 EAL: Requested device 0000:3f:01.7 cannot be used 00:32:09.493 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:09.493 EAL: Requested device 0000:3f:02.0 cannot be used 00:32:09.493 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:09.493 EAL: Requested device 0000:3f:02.1 cannot be used 00:32:09.493 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:09.493 EAL: Requested device 0000:3f:02.2 cannot be used 00:32:09.493 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:09.493 EAL: Requested device 0000:3f:02.3 cannot be used 00:32:09.493 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:09.493 EAL: Requested device 0000:3f:02.4 cannot be used 00:32:09.493 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:09.493 EAL: Requested device 0000:3f:02.5 cannot be used 00:32:09.493 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:09.493 EAL: Requested device 0000:3f:02.6 cannot be used 00:32:09.493 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:09.493 EAL: Requested device 0000:3f:02.7 cannot be used 00:32:09.493 [2024-07-23 04:27:18.180347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:09.752 [2024-07-23 04:27:18.476704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:10.320 [2024-07-23 04:27:18.830338] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:10.320 [2024-07-23 04:27:18.830372] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:10.320 04:27:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:10.320 04:27:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@862 -- # return 0 00:32:10.320 
04:27:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:32:10.579 [2024-07-23 04:27:19.233112] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:10.579 [2024-07-23 04:27:19.233171] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:10.579 [2024-07-23 04:27:19.233187] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:10.579 [2024-07-23 04:27:19.233203] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:10.579 04:27:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:32:10.579 04:27:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:10.579 04:27:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:10.579 04:27:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:10.579 04:27:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:10.579 04:27:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:10.579 04:27:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:10.579 04:27:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:10.579 04:27:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:10.579 04:27:19 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:10.580 04:27:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:10.580 04:27:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:10.839 04:27:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:10.839 "name": "Existed_Raid", 00:32:10.839 "uuid": "758dd5bf-a1d6-4924-8e5c-169f4985dd9f", 00:32:10.839 "strip_size_kb": 0, 00:32:10.839 "state": "configuring", 00:32:10.839 "raid_level": "raid1", 00:32:10.839 "superblock": true, 00:32:10.839 "num_base_bdevs": 2, 00:32:10.839 "num_base_bdevs_discovered": 0, 00:32:10.839 "num_base_bdevs_operational": 2, 00:32:10.839 "base_bdevs_list": [ 00:32:10.839 { 00:32:10.839 "name": "BaseBdev1", 00:32:10.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:10.839 "is_configured": false, 00:32:10.839 "data_offset": 0, 00:32:10.839 "data_size": 0 00:32:10.839 }, 00:32:10.839 { 00:32:10.839 "name": "BaseBdev2", 00:32:10.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:10.839 "is_configured": false, 00:32:10.839 "data_offset": 0, 00:32:10.839 "data_size": 0 00:32:10.839 } 00:32:10.839 ] 00:32:10.839 }' 00:32:10.839 04:27:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:10.839 04:27:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:11.407 04:27:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:32:11.666 [2024-07-23 04:27:20.191538] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:32:11.666 [2024-07-23 04:27:20.191581] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:32:11.666 04:27:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:32:11.666 [2024-07-23 04:27:20.348002] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:11.666 [2024-07-23 04:27:20.348044] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:11.666 [2024-07-23 04:27:20.348058] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:11.666 [2024-07-23 04:27:20.348074] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:11.666 04:27:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:32:11.925 [2024-07-23 04:27:20.629083] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:11.925 BaseBdev1 00:32:11.925 04:27:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:32:11.925 04:27:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:32:11.925 04:27:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:32:11.925 04:27:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local i 00:32:11.925 04:27:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:32:11.925 
04:27:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:32:11.925 04:27:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:12.184 04:27:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:12.443 [ 00:32:12.443 { 00:32:12.443 "name": "BaseBdev1", 00:32:12.443 "aliases": [ 00:32:12.443 "354ed864-d3d5-48d8-a706-1e67ed0eabd5" 00:32:12.443 ], 00:32:12.443 "product_name": "Malloc disk", 00:32:12.443 "block_size": 4096, 00:32:12.443 "num_blocks": 8192, 00:32:12.443 "uuid": "354ed864-d3d5-48d8-a706-1e67ed0eabd5", 00:32:12.443 "md_size": 32, 00:32:12.443 "md_interleave": false, 00:32:12.443 "dif_type": 0, 00:32:12.443 "assigned_rate_limits": { 00:32:12.443 "rw_ios_per_sec": 0, 00:32:12.443 "rw_mbytes_per_sec": 0, 00:32:12.443 "r_mbytes_per_sec": 0, 00:32:12.443 "w_mbytes_per_sec": 0 00:32:12.443 }, 00:32:12.443 "claimed": true, 00:32:12.443 "claim_type": "exclusive_write", 00:32:12.443 "zoned": false, 00:32:12.443 "supported_io_types": { 00:32:12.443 "read": true, 00:32:12.443 "write": true, 00:32:12.443 "unmap": true, 00:32:12.443 "flush": true, 00:32:12.443 "reset": true, 00:32:12.443 "nvme_admin": false, 00:32:12.443 "nvme_io": false, 00:32:12.443 "nvme_io_md": false, 00:32:12.443 "write_zeroes": true, 00:32:12.443 "zcopy": true, 00:32:12.443 "get_zone_info": false, 00:32:12.443 "zone_management": false, 00:32:12.443 "zone_append": false, 00:32:12.443 "compare": false, 00:32:12.443 "compare_and_write": false, 00:32:12.443 "abort": true, 00:32:12.443 "seek_hole": false, 00:32:12.443 "seek_data": false, 00:32:12.443 "copy": true, 00:32:12.443 "nvme_iov_md": false 00:32:12.443 }, 
00:32:12.443 "memory_domains": [ 00:32:12.443 { 00:32:12.443 "dma_device_id": "system", 00:32:12.443 "dma_device_type": 1 00:32:12.443 }, 00:32:12.443 { 00:32:12.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:12.443 "dma_device_type": 2 00:32:12.443 } 00:32:12.443 ], 00:32:12.443 "driver_specific": {} 00:32:12.443 } 00:32:12.443 ] 00:32:12.443 04:27:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # return 0 00:32:12.443 04:27:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:32:12.443 04:27:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:12.443 04:27:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:12.443 04:27:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:12.443 04:27:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:12.443 04:27:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:12.443 04:27:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:12.443 04:27:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:12.443 04:27:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:12.443 04:27:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:12.443 04:27:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:12.444 04:27:21 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:12.703 04:27:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:12.703 "name": "Existed_Raid", 00:32:12.703 "uuid": "a0e4511a-93f6-46ba-bc60-de7cca8ebb98", 00:32:12.703 "strip_size_kb": 0, 00:32:12.703 "state": "configuring", 00:32:12.703 "raid_level": "raid1", 00:32:12.703 "superblock": true, 00:32:12.703 "num_base_bdevs": 2, 00:32:12.703 "num_base_bdevs_discovered": 1, 00:32:12.703 "num_base_bdevs_operational": 2, 00:32:12.703 "base_bdevs_list": [ 00:32:12.703 { 00:32:12.703 "name": "BaseBdev1", 00:32:12.703 "uuid": "354ed864-d3d5-48d8-a706-1e67ed0eabd5", 00:32:12.703 "is_configured": true, 00:32:12.703 "data_offset": 256, 00:32:12.703 "data_size": 7936 00:32:12.703 }, 00:32:12.703 { 00:32:12.703 "name": "BaseBdev2", 00:32:12.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:12.703 "is_configured": false, 00:32:12.703 "data_offset": 0, 00:32:12.703 "data_size": 0 00:32:12.703 } 00:32:12.703 ] 00:32:12.703 }' 00:32:12.703 04:27:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:12.703 04:27:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:13.272 04:27:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:32:13.531 [2024-07-23 04:27:22.085262] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:13.531 [2024-07-23 04:27:22.085314] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:32:13.531 04:27:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:32:13.531 [2024-07-23 04:27:22.313893] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:13.790 [2024-07-23 04:27:22.316202] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:13.790 [2024-07-23 04:27:22.316245] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:13.790 04:27:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:32:13.790 04:27:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:32:13.790 04:27:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:32:13.790 04:27:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:13.790 04:27:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:13.790 04:27:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:13.790 04:27:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:13.790 04:27:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:13.791 04:27:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:13.791 04:27:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:13.791 04:27:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:13.791 
04:27:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:13.791 04:27:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:13.791 04:27:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:13.791 04:27:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:13.791 "name": "Existed_Raid", 00:32:13.791 "uuid": "648d8630-83ab-4f92-b7c1-914a1deb91be", 00:32:13.791 "strip_size_kb": 0, 00:32:13.791 "state": "configuring", 00:32:13.791 "raid_level": "raid1", 00:32:13.791 "superblock": true, 00:32:13.791 "num_base_bdevs": 2, 00:32:13.791 "num_base_bdevs_discovered": 1, 00:32:13.791 "num_base_bdevs_operational": 2, 00:32:13.791 "base_bdevs_list": [ 00:32:13.791 { 00:32:13.791 "name": "BaseBdev1", 00:32:13.791 "uuid": "354ed864-d3d5-48d8-a706-1e67ed0eabd5", 00:32:13.791 "is_configured": true, 00:32:13.791 "data_offset": 256, 00:32:13.791 "data_size": 7936 00:32:13.791 }, 00:32:13.791 { 00:32:13.791 "name": "BaseBdev2", 00:32:13.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:13.791 "is_configured": false, 00:32:13.791 "data_offset": 0, 00:32:13.791 "data_size": 0 00:32:13.791 } 00:32:13.791 ] 00:32:13.791 }' 00:32:13.791 04:27:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:13.791 04:27:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:14.358 04:27:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:32:14.617 [2024-07-23 04:27:23.376162] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:14.617 [2024-07-23 04:27:23.376429] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:32:14.617 [2024-07-23 04:27:23.376449] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:14.618 [2024-07-23 04:27:23.376545] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:32:14.618 [2024-07-23 04:27:23.376745] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:32:14.618 [2024-07-23 04:27:23.376763] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:32:14.618 [2024-07-23 04:27:23.376910] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:14.618 BaseBdev2 00:32:14.618 04:27:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:32:14.618 04:27:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:32:14.618 04:27:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:32:14.618 04:27:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local i 00:32:14.618 04:27:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:32:14.618 04:27:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:32:14.618 04:27:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:14.876 04:27:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:15.136 [ 00:32:15.136 { 00:32:15.136 "name": "BaseBdev2", 00:32:15.136 "aliases": [ 00:32:15.136 "2d12f08a-c438-45aa-9990-55c07f9ff318" 00:32:15.136 ], 00:32:15.136 "product_name": "Malloc disk", 00:32:15.136 "block_size": 4096, 00:32:15.136 "num_blocks": 8192, 00:32:15.136 "uuid": "2d12f08a-c438-45aa-9990-55c07f9ff318", 00:32:15.136 "md_size": 32, 00:32:15.136 "md_interleave": false, 00:32:15.136 "dif_type": 0, 00:32:15.136 "assigned_rate_limits": { 00:32:15.136 "rw_ios_per_sec": 0, 00:32:15.136 "rw_mbytes_per_sec": 0, 00:32:15.136 "r_mbytes_per_sec": 0, 00:32:15.136 "w_mbytes_per_sec": 0 00:32:15.136 }, 00:32:15.136 "claimed": true, 00:32:15.136 "claim_type": "exclusive_write", 00:32:15.136 "zoned": false, 00:32:15.136 "supported_io_types": { 00:32:15.136 "read": true, 00:32:15.136 "write": true, 00:32:15.136 "unmap": true, 00:32:15.136 "flush": true, 00:32:15.136 "reset": true, 00:32:15.136 "nvme_admin": false, 00:32:15.136 "nvme_io": false, 00:32:15.136 "nvme_io_md": false, 00:32:15.136 "write_zeroes": true, 00:32:15.136 "zcopy": true, 00:32:15.136 "get_zone_info": false, 00:32:15.136 "zone_management": false, 00:32:15.136 "zone_append": false, 00:32:15.136 "compare": false, 00:32:15.136 "compare_and_write": false, 00:32:15.136 "abort": true, 00:32:15.136 "seek_hole": false, 00:32:15.136 "seek_data": false, 00:32:15.136 "copy": true, 00:32:15.136 "nvme_iov_md": false 00:32:15.136 }, 00:32:15.136 "memory_domains": [ 00:32:15.136 { 00:32:15.136 "dma_device_id": "system", 00:32:15.136 "dma_device_type": 1 00:32:15.136 }, 00:32:15.136 { 00:32:15.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:15.136 "dma_device_type": 2 00:32:15.136 } 00:32:15.136 ], 00:32:15.136 "driver_specific": {} 00:32:15.136 } 00:32:15.136 ] 00:32:15.136 04:27:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # return 0 
00:32:15.136 04:27:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:32:15.136 04:27:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:32:15.136 04:27:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:32:15.136 04:27:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:15.137 04:27:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:15.137 04:27:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:15.137 04:27:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:15.137 04:27:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:15.137 04:27:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:15.137 04:27:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:15.137 04:27:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:15.137 04:27:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:15.137 04:27:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:15.137 04:27:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:15.396 04:27:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- 
# raid_bdev_info='{ 00:32:15.396 "name": "Existed_Raid", 00:32:15.396 "uuid": "648d8630-83ab-4f92-b7c1-914a1deb91be", 00:32:15.396 "strip_size_kb": 0, 00:32:15.396 "state": "online", 00:32:15.396 "raid_level": "raid1", 00:32:15.396 "superblock": true, 00:32:15.396 "num_base_bdevs": 2, 00:32:15.396 "num_base_bdevs_discovered": 2, 00:32:15.396 "num_base_bdevs_operational": 2, 00:32:15.396 "base_bdevs_list": [ 00:32:15.396 { 00:32:15.396 "name": "BaseBdev1", 00:32:15.396 "uuid": "354ed864-d3d5-48d8-a706-1e67ed0eabd5", 00:32:15.396 "is_configured": true, 00:32:15.396 "data_offset": 256, 00:32:15.396 "data_size": 7936 00:32:15.396 }, 00:32:15.396 { 00:32:15.396 "name": "BaseBdev2", 00:32:15.396 "uuid": "2d12f08a-c438-45aa-9990-55c07f9ff318", 00:32:15.396 "is_configured": true, 00:32:15.396 "data_offset": 256, 00:32:15.396 "data_size": 7936 00:32:15.396 } 00:32:15.396 ] 00:32:15.396 }' 00:32:15.396 04:27:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:15.396 04:27:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:16.031 04:27:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:32:16.031 04:27:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:32:16.031 04:27:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:32:16.031 04:27:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:32:16.031 04:27:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:32:16.031 04:27:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:32:16.031 04:27:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:32:16.031 04:27:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:32:16.031 [2024-07-23 04:27:24.792519] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:16.031 04:27:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:32:16.031 "name": "Existed_Raid", 00:32:16.031 "aliases": [ 00:32:16.031 "648d8630-83ab-4f92-b7c1-914a1deb91be" 00:32:16.031 ], 00:32:16.031 "product_name": "Raid Volume", 00:32:16.031 "block_size": 4096, 00:32:16.031 "num_blocks": 7936, 00:32:16.031 "uuid": "648d8630-83ab-4f92-b7c1-914a1deb91be", 00:32:16.031 "md_size": 32, 00:32:16.031 "md_interleave": false, 00:32:16.031 "dif_type": 0, 00:32:16.031 "assigned_rate_limits": { 00:32:16.031 "rw_ios_per_sec": 0, 00:32:16.031 "rw_mbytes_per_sec": 0, 00:32:16.031 "r_mbytes_per_sec": 0, 00:32:16.031 "w_mbytes_per_sec": 0 00:32:16.031 }, 00:32:16.031 "claimed": false, 00:32:16.031 "zoned": false, 00:32:16.031 "supported_io_types": { 00:32:16.031 "read": true, 00:32:16.031 "write": true, 00:32:16.031 "unmap": false, 00:32:16.031 "flush": false, 00:32:16.031 "reset": true, 00:32:16.031 "nvme_admin": false, 00:32:16.031 "nvme_io": false, 00:32:16.031 "nvme_io_md": false, 00:32:16.031 "write_zeroes": true, 00:32:16.031 "zcopy": false, 00:32:16.031 "get_zone_info": false, 00:32:16.031 "zone_management": false, 00:32:16.031 "zone_append": false, 00:32:16.031 "compare": false, 00:32:16.031 "compare_and_write": false, 00:32:16.031 "abort": false, 00:32:16.031 "seek_hole": false, 00:32:16.031 "seek_data": false, 00:32:16.031 "copy": false, 00:32:16.031 "nvme_iov_md": false 00:32:16.031 }, 00:32:16.031 "memory_domains": [ 00:32:16.031 { 00:32:16.031 "dma_device_id": "system", 00:32:16.031 "dma_device_type": 1 00:32:16.031 }, 00:32:16.031 { 00:32:16.031 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:16.031 "dma_device_type": 2 00:32:16.031 }, 00:32:16.031 { 00:32:16.031 "dma_device_id": "system", 00:32:16.031 "dma_device_type": 1 00:32:16.031 }, 00:32:16.031 { 00:32:16.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:16.031 "dma_device_type": 2 00:32:16.031 } 00:32:16.031 ], 00:32:16.031 "driver_specific": { 00:32:16.031 "raid": { 00:32:16.031 "uuid": "648d8630-83ab-4f92-b7c1-914a1deb91be", 00:32:16.031 "strip_size_kb": 0, 00:32:16.031 "state": "online", 00:32:16.031 "raid_level": "raid1", 00:32:16.031 "superblock": true, 00:32:16.031 "num_base_bdevs": 2, 00:32:16.031 "num_base_bdevs_discovered": 2, 00:32:16.031 "num_base_bdevs_operational": 2, 00:32:16.031 "base_bdevs_list": [ 00:32:16.031 { 00:32:16.031 "name": "BaseBdev1", 00:32:16.031 "uuid": "354ed864-d3d5-48d8-a706-1e67ed0eabd5", 00:32:16.031 "is_configured": true, 00:32:16.031 "data_offset": 256, 00:32:16.031 "data_size": 7936 00:32:16.031 }, 00:32:16.031 { 00:32:16.031 "name": "BaseBdev2", 00:32:16.031 "uuid": "2d12f08a-c438-45aa-9990-55c07f9ff318", 00:32:16.031 "is_configured": true, 00:32:16.031 "data_offset": 256, 00:32:16.031 "data_size": 7936 00:32:16.031 } 00:32:16.031 ] 00:32:16.031 } 00:32:16.031 } 00:32:16.031 }' 00:32:16.031 04:27:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:16.290 04:27:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:32:16.290 BaseBdev2' 00:32:16.290 04:27:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:16.290 04:27:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:32:16.290 04:27:24 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:16.859 04:27:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:16.859 "name": "BaseBdev1", 00:32:16.859 "aliases": [ 00:32:16.859 "354ed864-d3d5-48d8-a706-1e67ed0eabd5" 00:32:16.859 ], 00:32:16.859 "product_name": "Malloc disk", 00:32:16.859 "block_size": 4096, 00:32:16.859 "num_blocks": 8192, 00:32:16.859 "uuid": "354ed864-d3d5-48d8-a706-1e67ed0eabd5", 00:32:16.859 "md_size": 32, 00:32:16.859 "md_interleave": false, 00:32:16.859 "dif_type": 0, 00:32:16.859 "assigned_rate_limits": { 00:32:16.859 "rw_ios_per_sec": 0, 00:32:16.859 "rw_mbytes_per_sec": 0, 00:32:16.859 "r_mbytes_per_sec": 0, 00:32:16.859 "w_mbytes_per_sec": 0 00:32:16.859 }, 00:32:16.859 "claimed": true, 00:32:16.859 "claim_type": "exclusive_write", 00:32:16.859 "zoned": false, 00:32:16.859 "supported_io_types": { 00:32:16.859 "read": true, 00:32:16.859 "write": true, 00:32:16.859 "unmap": true, 00:32:16.859 "flush": true, 00:32:16.859 "reset": true, 00:32:16.859 "nvme_admin": false, 00:32:16.859 "nvme_io": false, 00:32:16.859 "nvme_io_md": false, 00:32:16.859 "write_zeroes": true, 00:32:16.859 "zcopy": true, 00:32:16.859 "get_zone_info": false, 00:32:16.859 "zone_management": false, 00:32:16.859 "zone_append": false, 00:32:16.859 "compare": false, 00:32:16.859 "compare_and_write": false, 00:32:16.859 "abort": true, 00:32:16.859 "seek_hole": false, 00:32:16.859 "seek_data": false, 00:32:16.859 "copy": true, 00:32:16.859 "nvme_iov_md": false 00:32:16.859 }, 00:32:16.859 "memory_domains": [ 00:32:16.859 { 00:32:16.859 "dma_device_id": "system", 00:32:16.859 "dma_device_type": 1 00:32:16.859 }, 00:32:16.859 { 00:32:16.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:16.859 "dma_device_type": 2 00:32:16.859 } 00:32:16.859 ], 00:32:16.859 "driver_specific": {} 00:32:16.859 }' 00:32:16.859 04:27:25 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:16.859 04:27:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:16.859 04:27:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:32:16.859 04:27:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:16.859 04:27:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:16.859 04:27:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:32:16.859 04:27:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:16.859 04:27:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:16.859 04:27:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:32:16.859 04:27:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:16.859 04:27:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:16.859 04:27:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:32:16.859 04:27:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:16.859 04:27:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:32:16.859 04:27:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:17.118 04:27:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:17.118 "name": "BaseBdev2", 00:32:17.118 "aliases": [ 00:32:17.118 
"2d12f08a-c438-45aa-9990-55c07f9ff318" 00:32:17.118 ], 00:32:17.118 "product_name": "Malloc disk", 00:32:17.118 "block_size": 4096, 00:32:17.118 "num_blocks": 8192, 00:32:17.118 "uuid": "2d12f08a-c438-45aa-9990-55c07f9ff318", 00:32:17.118 "md_size": 32, 00:32:17.118 "md_interleave": false, 00:32:17.118 "dif_type": 0, 00:32:17.118 "assigned_rate_limits": { 00:32:17.118 "rw_ios_per_sec": 0, 00:32:17.118 "rw_mbytes_per_sec": 0, 00:32:17.118 "r_mbytes_per_sec": 0, 00:32:17.118 "w_mbytes_per_sec": 0 00:32:17.118 }, 00:32:17.118 "claimed": true, 00:32:17.118 "claim_type": "exclusive_write", 00:32:17.118 "zoned": false, 00:32:17.118 "supported_io_types": { 00:32:17.118 "read": true, 00:32:17.118 "write": true, 00:32:17.118 "unmap": true, 00:32:17.118 "flush": true, 00:32:17.118 "reset": true, 00:32:17.118 "nvme_admin": false, 00:32:17.118 "nvme_io": false, 00:32:17.118 "nvme_io_md": false, 00:32:17.118 "write_zeroes": true, 00:32:17.118 "zcopy": true, 00:32:17.118 "get_zone_info": false, 00:32:17.118 "zone_management": false, 00:32:17.118 "zone_append": false, 00:32:17.118 "compare": false, 00:32:17.118 "compare_and_write": false, 00:32:17.118 "abort": true, 00:32:17.118 "seek_hole": false, 00:32:17.118 "seek_data": false, 00:32:17.118 "copy": true, 00:32:17.118 "nvme_iov_md": false 00:32:17.118 }, 00:32:17.118 "memory_domains": [ 00:32:17.118 { 00:32:17.118 "dma_device_id": "system", 00:32:17.118 "dma_device_type": 1 00:32:17.118 }, 00:32:17.118 { 00:32:17.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:17.118 "dma_device_type": 2 00:32:17.118 } 00:32:17.118 ], 00:32:17.118 "driver_specific": {} 00:32:17.118 }' 00:32:17.118 04:27:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:17.118 04:27:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:17.377 04:27:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 
00:32:17.377 04:27:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:17.377 04:27:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:17.377 04:27:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:32:17.377 04:27:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:17.377 04:27:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:17.377 04:27:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:32:17.377 04:27:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:17.377 04:27:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:17.377 04:27:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:32:17.377 04:27:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:32:17.946 [2024-07-23 04:27:26.589094] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:18.205 04:27:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@275 -- # local expected_state 00:32:18.205 04:27:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:32:18.205 04:27:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:32:18.205 04:27:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:32:18.205 04:27:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:32:18.205 04:27:26 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:32:18.205 04:27:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:18.205 04:27:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:18.205 04:27:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:18.205 04:27:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:18.205 04:27:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:18.205 04:27:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:18.205 04:27:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:18.205 04:27:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:18.205 04:27:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:18.205 04:27:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:18.205 04:27:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:18.463 04:27:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:18.463 "name": "Existed_Raid", 00:32:18.463 "uuid": "648d8630-83ab-4f92-b7c1-914a1deb91be", 00:32:18.463 "strip_size_kb": 0, 00:32:18.463 "state": "online", 00:32:18.463 "raid_level": "raid1", 00:32:18.463 "superblock": true, 
00:32:18.463 "num_base_bdevs": 2, 00:32:18.463 "num_base_bdevs_discovered": 1, 00:32:18.463 "num_base_bdevs_operational": 1, 00:32:18.463 "base_bdevs_list": [ 00:32:18.463 { 00:32:18.463 "name": null, 00:32:18.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:18.463 "is_configured": false, 00:32:18.463 "data_offset": 256, 00:32:18.463 "data_size": 7936 00:32:18.463 }, 00:32:18.463 { 00:32:18.463 "name": "BaseBdev2", 00:32:18.463 "uuid": "2d12f08a-c438-45aa-9990-55c07f9ff318", 00:32:18.463 "is_configured": true, 00:32:18.463 "data_offset": 256, 00:32:18.463 "data_size": 7936 00:32:18.463 } 00:32:18.463 ] 00:32:18.463 }' 00:32:18.463 04:27:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:18.463 04:27:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:18.722 04:27:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:32:18.722 04:27:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:32:18.722 04:27:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:18.722 04:27:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:32:18.981 04:27:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:32:18.981 04:27:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:18.981 04:27:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:32:19.240 [2024-07-23 04:27:27.846295] 
bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:19.240 [2024-07-23 04:27:27.846413] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:19.240 [2024-07-23 04:27:27.992569] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:19.240 [2024-07-23 04:27:27.992622] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:19.240 [2024-07-23 04:27:27.992640] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:32:19.240 04:27:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:32:19.240 04:27:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:32:19.240 04:27:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:19.240 04:27:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:32:19.499 04:27:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:32:19.499 04:27:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:32:19.499 04:27:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:32:19.499 04:27:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@341 -- # killprocess 2814210 00:32:19.499 04:27:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@948 -- # '[' -z 2814210 ']' 00:32:19.499 04:27:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # kill -0 2814210 00:32:19.499 04:27:28 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@953 -- # uname 00:32:19.499 04:27:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:19.499 04:27:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2814210 00:32:19.499 04:27:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:19.499 04:27:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:19.499 04:27:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2814210' 00:32:19.499 killing process with pid 2814210 00:32:19.499 04:27:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@967 -- # kill 2814210 00:32:19.499 [2024-07-23 04:27:28.228843] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:19.499 04:27:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # wait 2814210 00:32:19.499 [2024-07-23 04:27:28.252913] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:21.401 04:27:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@343 -- # return 0 00:32:21.401 00:32:21.401 real 0m12.144s 00:32:21.401 user 0m19.722s 00:32:21.401 sys 0m1.970s 00:32:21.401 04:27:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:21.401 04:27:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:21.401 ************************************ 00:32:21.401 END TEST raid_state_function_test_sb_md_separate 00:32:21.401 ************************************ 00:32:21.401 04:27:30 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:32:21.401 
04:27:30 bdev_raid -- bdev/bdev_raid.sh@906 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:32:21.401 04:27:30 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:32:21.401 04:27:30 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:21.401 04:27:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:21.401 ************************************ 00:32:21.401 START TEST raid_superblock_test_md_separate 00:32:21.401 ************************************ 00:32:21.401 04:27:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:32:21.401 04:27:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:32:21.401 04:27:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:32:21.401 04:27:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:32:21.401 04:27:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:32:21.401 04:27:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:32:21.401 04:27:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:32:21.401 04:27:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:32:21.401 04:27:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:32:21.401 04:27:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:32:21.401 04:27:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local strip_size 00:32:21.401 04:27:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:32:21.401 04:27:30 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:32:21.401 04:27:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:32:21.401 04:27:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:32:21.401 04:27:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:32:21.401 04:27:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:32:21.401 04:27:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # raid_pid=2816531 00:32:21.401 04:27:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # waitforlisten 2816531 /var/tmp/spdk-raid.sock 00:32:21.401 04:27:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@829 -- # '[' -z 2816531 ']' 00:32:21.401 04:27:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:32:21.401 04:27:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:21.401 04:27:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:32:21.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:32:21.401 04:27:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:21.401 04:27:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:21.401 [2024-07-23 04:27:30.135972] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:32:21.401 [2024-07-23 04:27:30.136059] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2816531 ] 00:32:21.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.661 EAL: Requested device 0000:3d:01.0 cannot be used 00:32:21.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.661 EAL: Requested device 0000:3d:01.1 cannot be used 00:32:21.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.661 EAL: Requested device 0000:3d:01.2 cannot be used 00:32:21.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.661 EAL: Requested device 0000:3d:01.3 cannot be used 00:32:21.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.661 EAL: Requested device 0000:3d:01.4 cannot be used 00:32:21.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.661 EAL: Requested device 0000:3d:01.5 cannot be used 00:32:21.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.661 EAL: Requested device 0000:3d:01.6 cannot be used 00:32:21.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.661 EAL: Requested device 0000:3d:01.7 cannot be used 00:32:21.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.661 EAL: Requested device 0000:3d:02.0 cannot be used 00:32:21.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.661 EAL: Requested device 0000:3d:02.1 cannot be used 00:32:21.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.661 EAL: Requested device 0000:3d:02.2 cannot be used 00:32:21.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.661 EAL: Requested device 0000:3d:02.3 cannot be used 
00:32:21.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.661 EAL: Requested device 0000:3d:02.4 cannot be used 00:32:21.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.661 EAL: Requested device 0000:3d:02.5 cannot be used 00:32:21.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.661 EAL: Requested device 0000:3d:02.6 cannot be used 00:32:21.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.661 EAL: Requested device 0000:3d:02.7 cannot be used 00:32:21.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.661 EAL: Requested device 0000:3f:01.0 cannot be used 00:32:21.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.661 EAL: Requested device 0000:3f:01.1 cannot be used 00:32:21.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.661 EAL: Requested device 0000:3f:01.2 cannot be used 00:32:21.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.661 EAL: Requested device 0000:3f:01.3 cannot be used 00:32:21.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.661 EAL: Requested device 0000:3f:01.4 cannot be used 00:32:21.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.661 EAL: Requested device 0000:3f:01.5 cannot be used 00:32:21.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.661 EAL: Requested device 0000:3f:01.6 cannot be used 00:32:21.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.661 EAL: Requested device 0000:3f:01.7 cannot be used 00:32:21.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.661 EAL: Requested device 0000:3f:02.0 cannot be used 00:32:21.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.661 EAL: Requested device 0000:3f:02.1 cannot be used 00:32:21.661 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.661 EAL: Requested device 0000:3f:02.2 cannot be used 00:32:21.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.661 EAL: Requested device 0000:3f:02.3 cannot be used 00:32:21.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.661 EAL: Requested device 0000:3f:02.4 cannot be used 00:32:21.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.661 EAL: Requested device 0000:3f:02.5 cannot be used 00:32:21.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.661 EAL: Requested device 0000:3f:02.6 cannot be used 00:32:21.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.661 EAL: Requested device 0000:3f:02.7 cannot be used 00:32:21.661 [2024-07-23 04:27:30.333989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:21.920 [2024-07-23 04:27:30.617989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:22.179 [2024-07-23 04:27:30.959618] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:22.179 [2024-07-23 04:27:30.959653] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:22.437 04:27:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:22.437 04:27:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@862 -- # return 0 00:32:22.437 04:27:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:32:22.437 04:27:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:32:22.437 04:27:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:32:22.437 04:27:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:32:22.437 04:27:31 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:32:22.437 04:27:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:22.437 04:27:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:32:22.437 04:27:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:22.437 04:27:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc1 00:32:22.697 malloc1 00:32:22.697 04:27:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:22.956 [2024-07-23 04:27:31.615480] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:22.956 [2024-07-23 04:27:31.615547] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:22.956 [2024-07-23 04:27:31.615578] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:32:22.956 [2024-07-23 04:27:31.615595] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:22.956 [2024-07-23 04:27:31.618073] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:22.956 [2024-07-23 04:27:31.618109] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:22.956 pt1 00:32:22.956 04:27:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:32:22.956 04:27:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 
00:32:22.956 04:27:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:32:22.956 04:27:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:32:22.956 04:27:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:32:22.956 04:27:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:22.956 04:27:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:32:22.956 04:27:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:22.956 04:27:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc2 00:32:23.215 malloc2 00:32:23.215 04:27:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:23.474 [2024-07-23 04:27:32.127087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:23.474 [2024-07-23 04:27:32.127160] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:23.474 [2024-07-23 04:27:32.127189] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:32:23.474 [2024-07-23 04:27:32.127204] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:23.474 [2024-07-23 04:27:32.129684] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:23.474 [2024-07-23 04:27:32.129716] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:32:23.474 pt2 00:32:23.474 04:27:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:32:23.474 04:27:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:32:23.474 04:27:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:32:23.744 [2024-07-23 04:27:32.355708] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:23.744 [2024-07-23 04:27:32.358023] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:23.744 [2024-07-23 04:27:32.358280] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000040880 00:32:23.744 [2024-07-23 04:27:32.358300] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:23.744 [2024-07-23 04:27:32.358414] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:32:23.744 [2024-07-23 04:27:32.358631] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000040880 00:32:23.744 [2024-07-23 04:27:32.358649] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000040880 00:32:23.744 [2024-07-23 04:27:32.358787] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:23.744 04:27:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:23.744 04:27:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:23.744 04:27:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:23.744 04:27:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid1 00:32:23.744 04:27:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:23.744 04:27:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:23.744 04:27:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:23.744 04:27:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:23.744 04:27:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:23.744 04:27:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:23.744 04:27:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:23.744 04:27:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:24.002 04:27:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:24.002 "name": "raid_bdev1", 00:32:24.002 "uuid": "00b7ea64-a8ac-4318-89eb-c8d6f8d35bfc", 00:32:24.002 "strip_size_kb": 0, 00:32:24.002 "state": "online", 00:32:24.002 "raid_level": "raid1", 00:32:24.002 "superblock": true, 00:32:24.002 "num_base_bdevs": 2, 00:32:24.002 "num_base_bdevs_discovered": 2, 00:32:24.002 "num_base_bdevs_operational": 2, 00:32:24.002 "base_bdevs_list": [ 00:32:24.002 { 00:32:24.002 "name": "pt1", 00:32:24.002 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:24.003 "is_configured": true, 00:32:24.003 "data_offset": 256, 00:32:24.003 "data_size": 7936 00:32:24.003 }, 00:32:24.003 { 00:32:24.003 "name": "pt2", 00:32:24.003 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:24.003 "is_configured": true, 00:32:24.003 "data_offset": 256, 00:32:24.003 "data_size": 7936 
00:32:24.003 } 00:32:24.003 ] 00:32:24.003 }' 00:32:24.003 04:27:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:24.003 04:27:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:24.569 04:27:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:32:24.569 04:27:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:32:24.569 04:27:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:32:24.569 04:27:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:32:24.569 04:27:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:32:24.569 04:27:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:32:24.569 04:27:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:24.569 04:27:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:32:24.569 [2024-07-23 04:27:33.302562] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:24.569 04:27:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:32:24.569 "name": "raid_bdev1", 00:32:24.569 "aliases": [ 00:32:24.569 "00b7ea64-a8ac-4318-89eb-c8d6f8d35bfc" 00:32:24.569 ], 00:32:24.569 "product_name": "Raid Volume", 00:32:24.569 "block_size": 4096, 00:32:24.569 "num_blocks": 7936, 00:32:24.569 "uuid": "00b7ea64-a8ac-4318-89eb-c8d6f8d35bfc", 00:32:24.569 "md_size": 32, 00:32:24.569 "md_interleave": false, 00:32:24.569 "dif_type": 0, 00:32:24.569 "assigned_rate_limits": { 00:32:24.569 "rw_ios_per_sec": 0, 
00:32:24.569 "rw_mbytes_per_sec": 0, 00:32:24.569 "r_mbytes_per_sec": 0, 00:32:24.569 "w_mbytes_per_sec": 0 00:32:24.569 }, 00:32:24.569 "claimed": false, 00:32:24.569 "zoned": false, 00:32:24.569 "supported_io_types": { 00:32:24.569 "read": true, 00:32:24.569 "write": true, 00:32:24.569 "unmap": false, 00:32:24.569 "flush": false, 00:32:24.569 "reset": true, 00:32:24.569 "nvme_admin": false, 00:32:24.569 "nvme_io": false, 00:32:24.569 "nvme_io_md": false, 00:32:24.569 "write_zeroes": true, 00:32:24.569 "zcopy": false, 00:32:24.569 "get_zone_info": false, 00:32:24.569 "zone_management": false, 00:32:24.569 "zone_append": false, 00:32:24.569 "compare": false, 00:32:24.569 "compare_and_write": false, 00:32:24.569 "abort": false, 00:32:24.569 "seek_hole": false, 00:32:24.569 "seek_data": false, 00:32:24.569 "copy": false, 00:32:24.569 "nvme_iov_md": false 00:32:24.569 }, 00:32:24.569 "memory_domains": [ 00:32:24.569 { 00:32:24.569 "dma_device_id": "system", 00:32:24.569 "dma_device_type": 1 00:32:24.569 }, 00:32:24.569 { 00:32:24.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:24.569 "dma_device_type": 2 00:32:24.569 }, 00:32:24.569 { 00:32:24.569 "dma_device_id": "system", 00:32:24.569 "dma_device_type": 1 00:32:24.569 }, 00:32:24.569 { 00:32:24.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:24.569 "dma_device_type": 2 00:32:24.569 } 00:32:24.569 ], 00:32:24.569 "driver_specific": { 00:32:24.569 "raid": { 00:32:24.569 "uuid": "00b7ea64-a8ac-4318-89eb-c8d6f8d35bfc", 00:32:24.569 "strip_size_kb": 0, 00:32:24.569 "state": "online", 00:32:24.569 "raid_level": "raid1", 00:32:24.569 "superblock": true, 00:32:24.569 "num_base_bdevs": 2, 00:32:24.569 "num_base_bdevs_discovered": 2, 00:32:24.569 "num_base_bdevs_operational": 2, 00:32:24.569 "base_bdevs_list": [ 00:32:24.569 { 00:32:24.569 "name": "pt1", 00:32:24.569 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:24.569 "is_configured": true, 00:32:24.569 "data_offset": 256, 00:32:24.569 "data_size": 7936 
00:32:24.569 }, 00:32:24.569 { 00:32:24.569 "name": "pt2", 00:32:24.569 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:24.569 "is_configured": true, 00:32:24.569 "data_offset": 256, 00:32:24.569 "data_size": 7936 00:32:24.569 } 00:32:24.569 ] 00:32:24.569 } 00:32:24.569 } 00:32:24.569 }' 00:32:24.569 04:27:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:24.828 04:27:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:32:24.828 pt2' 00:32:24.828 04:27:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:24.828 04:27:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:32:24.828 04:27:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:24.828 04:27:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:24.828 "name": "pt1", 00:32:24.828 "aliases": [ 00:32:24.828 "00000000-0000-0000-0000-000000000001" 00:32:24.828 ], 00:32:24.828 "product_name": "passthru", 00:32:24.828 "block_size": 4096, 00:32:24.828 "num_blocks": 8192, 00:32:24.828 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:24.828 "md_size": 32, 00:32:24.828 "md_interleave": false, 00:32:24.828 "dif_type": 0, 00:32:24.828 "assigned_rate_limits": { 00:32:24.828 "rw_ios_per_sec": 0, 00:32:24.828 "rw_mbytes_per_sec": 0, 00:32:24.828 "r_mbytes_per_sec": 0, 00:32:24.828 "w_mbytes_per_sec": 0 00:32:24.828 }, 00:32:24.828 "claimed": true, 00:32:24.828 "claim_type": "exclusive_write", 00:32:24.828 "zoned": false, 00:32:24.828 "supported_io_types": { 00:32:24.828 "read": true, 00:32:24.828 "write": true, 00:32:24.828 "unmap": true, 00:32:24.828 "flush": true, 
00:32:24.828 "reset": true, 00:32:24.828 "nvme_admin": false, 00:32:24.828 "nvme_io": false, 00:32:24.828 "nvme_io_md": false, 00:32:24.828 "write_zeroes": true, 00:32:24.828 "zcopy": true, 00:32:24.828 "get_zone_info": false, 00:32:24.828 "zone_management": false, 00:32:24.828 "zone_append": false, 00:32:24.828 "compare": false, 00:32:24.828 "compare_and_write": false, 00:32:24.828 "abort": true, 00:32:24.828 "seek_hole": false, 00:32:24.828 "seek_data": false, 00:32:24.828 "copy": true, 00:32:24.828 "nvme_iov_md": false 00:32:24.828 }, 00:32:24.828 "memory_domains": [ 00:32:24.828 { 00:32:24.828 "dma_device_id": "system", 00:32:24.828 "dma_device_type": 1 00:32:24.828 }, 00:32:24.828 { 00:32:24.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:24.828 "dma_device_type": 2 00:32:24.828 } 00:32:24.828 ], 00:32:24.828 "driver_specific": { 00:32:24.828 "passthru": { 00:32:24.828 "name": "pt1", 00:32:24.828 "base_bdev_name": "malloc1" 00:32:24.828 } 00:32:24.829 } 00:32:24.829 }' 00:32:24.829 04:27:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:24.829 04:27:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:24.829 04:27:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:32:24.829 04:27:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:25.087 04:27:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:25.087 04:27:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:32:25.087 04:27:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:25.087 04:27:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:25.087 04:27:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 
00:32:25.087 04:27:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:25.087 04:27:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:25.087 04:27:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:32:25.087 04:27:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:25.087 04:27:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:32:25.087 04:27:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:25.346 04:27:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:25.346 "name": "pt2", 00:32:25.346 "aliases": [ 00:32:25.346 "00000000-0000-0000-0000-000000000002" 00:32:25.346 ], 00:32:25.346 "product_name": "passthru", 00:32:25.346 "block_size": 4096, 00:32:25.346 "num_blocks": 8192, 00:32:25.346 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:25.346 "md_size": 32, 00:32:25.346 "md_interleave": false, 00:32:25.346 "dif_type": 0, 00:32:25.346 "assigned_rate_limits": { 00:32:25.346 "rw_ios_per_sec": 0, 00:32:25.346 "rw_mbytes_per_sec": 0, 00:32:25.346 "r_mbytes_per_sec": 0, 00:32:25.346 "w_mbytes_per_sec": 0 00:32:25.346 }, 00:32:25.346 "claimed": true, 00:32:25.346 "claim_type": "exclusive_write", 00:32:25.346 "zoned": false, 00:32:25.346 "supported_io_types": { 00:32:25.346 "read": true, 00:32:25.346 "write": true, 00:32:25.346 "unmap": true, 00:32:25.346 "flush": true, 00:32:25.346 "reset": true, 00:32:25.346 "nvme_admin": false, 00:32:25.346 "nvme_io": false, 00:32:25.346 "nvme_io_md": false, 00:32:25.346 "write_zeroes": true, 00:32:25.346 "zcopy": true, 00:32:25.346 "get_zone_info": false, 00:32:25.346 "zone_management": false, 00:32:25.346 "zone_append": 
false, 00:32:25.346 "compare": false, 00:32:25.346 "compare_and_write": false, 00:32:25.346 "abort": true, 00:32:25.346 "seek_hole": false, 00:32:25.346 "seek_data": false, 00:32:25.346 "copy": true, 00:32:25.346 "nvme_iov_md": false 00:32:25.346 }, 00:32:25.346 "memory_domains": [ 00:32:25.346 { 00:32:25.346 "dma_device_id": "system", 00:32:25.346 "dma_device_type": 1 00:32:25.346 }, 00:32:25.346 { 00:32:25.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:25.346 "dma_device_type": 2 00:32:25.346 } 00:32:25.346 ], 00:32:25.346 "driver_specific": { 00:32:25.346 "passthru": { 00:32:25.346 "name": "pt2", 00:32:25.346 "base_bdev_name": "malloc2" 00:32:25.346 } 00:32:25.346 } 00:32:25.346 }' 00:32:25.346 04:27:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:25.346 04:27:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:25.346 04:27:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:32:25.346 04:27:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:25.606 04:27:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:25.606 04:27:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:32:25.606 04:27:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:25.606 04:27:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:25.606 04:27:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:32:25.606 04:27:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:25.606 04:27:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:25.863 04:27:34 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:32:25.863 04:27:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:25.863 04:27:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:32:25.863 [2024-07-23 04:27:34.590065] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:25.863 04:27:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=00b7ea64-a8ac-4318-89eb-c8d6f8d35bfc 00:32:25.863 04:27:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # '[' -z 00b7ea64-a8ac-4318-89eb-c8d6f8d35bfc ']' 00:32:25.863 04:27:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:32:26.122 [2024-07-23 04:27:34.822355] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:26.122 [2024-07-23 04:27:34.822387] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:26.122 [2024-07-23 04:27:34.822478] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:26.122 [2024-07-23 04:27:34.822547] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:26.122 [2024-07-23 04:27:34.822572] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040880 name raid_bdev1, state offline 00:32:26.122 04:27:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:26.122 04:27:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:32:26.381 
04:27:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:32:26.381 04:27:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:32:26.381 04:27:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:32:26.381 04:27:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:32:26.640 04:27:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:32:26.640 04:27:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:32:26.898 04:27:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:32:26.898 04:27:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:32:27.157 04:27:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:32:27.157 04:27:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:32:27.157 04:27:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@648 -- # local es=0 00:32:27.157 04:27:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:32:27.157 04:27:35 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:32:27.157 04:27:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:27.157 04:27:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:32:27.158 04:27:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:27.158 04:27:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:32:27.158 04:27:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:27.158 04:27:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:32:27.158 04:27:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:32:27.158 04:27:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:32:27.158 [2024-07-23 04:27:35.929315] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:32:27.158 [2024-07-23 04:27:35.931627] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:32:27.158 [2024-07-23 04:27:35.931705] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:32:27.158 [2024-07-23 04:27:35.931763] 
bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:32:27.158 [2024-07-23 04:27:35.931786] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:27.158 [2024-07-23 04:27:35.931807] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040e80 name raid_bdev1, state configuring 00:32:27.158 request: 00:32:27.158 { 00:32:27.158 "name": "raid_bdev1", 00:32:27.158 "raid_level": "raid1", 00:32:27.158 "base_bdevs": [ 00:32:27.158 "malloc1", 00:32:27.158 "malloc2" 00:32:27.158 ], 00:32:27.158 "superblock": false, 00:32:27.158 "method": "bdev_raid_create", 00:32:27.158 "req_id": 1 00:32:27.158 } 00:32:27.158 Got JSON-RPC error response 00:32:27.158 response: 00:32:27.158 { 00:32:27.158 "code": -17, 00:32:27.158 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:32:27.158 } 00:32:27.417 04:27:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # es=1 00:32:27.417 04:27:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:27.417 04:27:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:27.417 04:27:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:27.417 04:27:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:27.417 04:27:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:32:27.417 04:27:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:32:27.417 04:27:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:32:27.417 04:27:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@464 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:27.676 [2024-07-23 04:27:36.310302] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:27.676 [2024-07-23 04:27:36.310368] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:27.676 [2024-07-23 04:27:36.310392] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480 00:32:27.676 [2024-07-23 04:27:36.310410] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:27.676 [2024-07-23 04:27:36.312906] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:27.676 [2024-07-23 04:27:36.312943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:27.676 [2024-07-23 04:27:36.313008] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:32:27.676 [2024-07-23 04:27:36.313072] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:27.676 pt1 00:32:27.676 04:27:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:32:27.676 04:27:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:27.676 04:27:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:27.676 04:27:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:27.676 04:27:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:27.676 04:27:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:27.676 04:27:36 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:27.676 04:27:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:27.676 04:27:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:27.676 04:27:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:27.676 04:27:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:27.676 04:27:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:27.935 04:27:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:27.935 "name": "raid_bdev1", 00:32:27.935 "uuid": "00b7ea64-a8ac-4318-89eb-c8d6f8d35bfc", 00:32:27.935 "strip_size_kb": 0, 00:32:27.935 "state": "configuring", 00:32:27.935 "raid_level": "raid1", 00:32:27.935 "superblock": true, 00:32:27.935 "num_base_bdevs": 2, 00:32:27.935 "num_base_bdevs_discovered": 1, 00:32:27.935 "num_base_bdevs_operational": 2, 00:32:27.935 "base_bdevs_list": [ 00:32:27.935 { 00:32:27.935 "name": "pt1", 00:32:27.935 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:27.935 "is_configured": true, 00:32:27.935 "data_offset": 256, 00:32:27.935 "data_size": 7936 00:32:27.935 }, 00:32:27.935 { 00:32:27.935 "name": null, 00:32:27.935 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:27.935 "is_configured": false, 00:32:27.935 "data_offset": 256, 00:32:27.935 "data_size": 7936 00:32:27.935 } 00:32:27.935 ] 00:32:27.935 }' 00:32:27.935 04:27:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:27.935 04:27:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:28.502 04:27:37 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:32:28.502 04:27:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:32:28.502 04:27:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:32:28.502 04:27:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:28.761 [2024-07-23 04:27:37.304982] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:28.761 [2024-07-23 04:27:37.305049] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:28.761 [2024-07-23 04:27:37.305074] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041d80 00:32:28.761 [2024-07-23 04:27:37.305099] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:28.761 [2024-07-23 04:27:37.305396] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:28.761 [2024-07-23 04:27:37.305420] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:28.761 [2024-07-23 04:27:37.305479] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:32:28.761 [2024-07-23 04:27:37.305508] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:28.761 [2024-07-23 04:27:37.305675] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041a80 00:32:28.761 [2024-07-23 04:27:37.305693] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:28.761 [2024-07-23 04:27:37.305774] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:32:28.761 [2024-07-23 04:27:37.305964] 
bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041a80 00:32:28.761 [2024-07-23 04:27:37.305977] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041a80 00:32:28.761 [2024-07-23 04:27:37.306118] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:28.761 pt2 00:32:28.761 04:27:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:32:28.761 04:27:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:32:28.761 04:27:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:28.761 04:27:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:28.761 04:27:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:28.761 04:27:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:28.761 04:27:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:28.761 04:27:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:28.761 04:27:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:28.761 04:27:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:28.761 04:27:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:28.761 04:27:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:28.761 04:27:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:28.761 04:27:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:29.021 04:27:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:29.021 "name": "raid_bdev1", 00:32:29.021 "uuid": "00b7ea64-a8ac-4318-89eb-c8d6f8d35bfc", 00:32:29.021 "strip_size_kb": 0, 00:32:29.021 "state": "online", 00:32:29.021 "raid_level": "raid1", 00:32:29.021 "superblock": true, 00:32:29.021 "num_base_bdevs": 2, 00:32:29.021 "num_base_bdevs_discovered": 2, 00:32:29.021 "num_base_bdevs_operational": 2, 00:32:29.021 "base_bdevs_list": [ 00:32:29.021 { 00:32:29.021 "name": "pt1", 00:32:29.021 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:29.021 "is_configured": true, 00:32:29.021 "data_offset": 256, 00:32:29.021 "data_size": 7936 00:32:29.021 }, 00:32:29.021 { 00:32:29.021 "name": "pt2", 00:32:29.021 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:29.021 "is_configured": true, 00:32:29.021 "data_offset": 256, 00:32:29.021 "data_size": 7936 00:32:29.021 } 00:32:29.021 ] 00:32:29.021 }' 00:32:29.021 04:27:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:29.021 04:27:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:29.589 04:27:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:32:29.589 04:27:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:32:29.589 04:27:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:32:29.589 04:27:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:32:29.590 04:27:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 
00:32:29.590 04:27:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:32:29.590 04:27:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:32:29.590 04:27:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:29.590 [2024-07-23 04:27:38.243844] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:29.590 04:27:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:32:29.590 "name": "raid_bdev1", 00:32:29.590 "aliases": [ 00:32:29.590 "00b7ea64-a8ac-4318-89eb-c8d6f8d35bfc" 00:32:29.590 ], 00:32:29.590 "product_name": "Raid Volume", 00:32:29.590 "block_size": 4096, 00:32:29.590 "num_blocks": 7936, 00:32:29.590 "uuid": "00b7ea64-a8ac-4318-89eb-c8d6f8d35bfc", 00:32:29.590 "md_size": 32, 00:32:29.590 "md_interleave": false, 00:32:29.590 "dif_type": 0, 00:32:29.590 "assigned_rate_limits": { 00:32:29.590 "rw_ios_per_sec": 0, 00:32:29.590 "rw_mbytes_per_sec": 0, 00:32:29.590 "r_mbytes_per_sec": 0, 00:32:29.590 "w_mbytes_per_sec": 0 00:32:29.590 }, 00:32:29.590 "claimed": false, 00:32:29.590 "zoned": false, 00:32:29.590 "supported_io_types": { 00:32:29.590 "read": true, 00:32:29.590 "write": true, 00:32:29.590 "unmap": false, 00:32:29.590 "flush": false, 00:32:29.590 "reset": true, 00:32:29.590 "nvme_admin": false, 00:32:29.590 "nvme_io": false, 00:32:29.590 "nvme_io_md": false, 00:32:29.590 "write_zeroes": true, 00:32:29.590 "zcopy": false, 00:32:29.590 "get_zone_info": false, 00:32:29.590 "zone_management": false, 00:32:29.590 "zone_append": false, 00:32:29.590 "compare": false, 00:32:29.590 "compare_and_write": false, 00:32:29.590 "abort": false, 00:32:29.590 "seek_hole": false, 00:32:29.590 "seek_data": false, 00:32:29.590 "copy": false, 00:32:29.590 "nvme_iov_md": false 
00:32:29.590 }, 00:32:29.590 "memory_domains": [ 00:32:29.590 { 00:32:29.590 "dma_device_id": "system", 00:32:29.590 "dma_device_type": 1 00:32:29.590 }, 00:32:29.590 { 00:32:29.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:29.590 "dma_device_type": 2 00:32:29.590 }, 00:32:29.590 { 00:32:29.590 "dma_device_id": "system", 00:32:29.590 "dma_device_type": 1 00:32:29.590 }, 00:32:29.590 { 00:32:29.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:29.590 "dma_device_type": 2 00:32:29.590 } 00:32:29.590 ], 00:32:29.590 "driver_specific": { 00:32:29.590 "raid": { 00:32:29.590 "uuid": "00b7ea64-a8ac-4318-89eb-c8d6f8d35bfc", 00:32:29.590 "strip_size_kb": 0, 00:32:29.590 "state": "online", 00:32:29.590 "raid_level": "raid1", 00:32:29.590 "superblock": true, 00:32:29.590 "num_base_bdevs": 2, 00:32:29.590 "num_base_bdevs_discovered": 2, 00:32:29.590 "num_base_bdevs_operational": 2, 00:32:29.590 "base_bdevs_list": [ 00:32:29.590 { 00:32:29.590 "name": "pt1", 00:32:29.590 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:29.590 "is_configured": true, 00:32:29.590 "data_offset": 256, 00:32:29.590 "data_size": 7936 00:32:29.590 }, 00:32:29.590 { 00:32:29.590 "name": "pt2", 00:32:29.590 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:29.590 "is_configured": true, 00:32:29.590 "data_offset": 256, 00:32:29.590 "data_size": 7936 00:32:29.590 } 00:32:29.590 ] 00:32:29.590 } 00:32:29.590 } 00:32:29.590 }' 00:32:29.590 04:27:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:29.590 04:27:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:32:29.590 pt2' 00:32:29.590 04:27:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:29.590 04:27:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:32:29.590 04:27:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:29.850 04:27:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:29.850 "name": "pt1", 00:32:29.850 "aliases": [ 00:32:29.850 "00000000-0000-0000-0000-000000000001" 00:32:29.850 ], 00:32:29.850 "product_name": "passthru", 00:32:29.850 "block_size": 4096, 00:32:29.850 "num_blocks": 8192, 00:32:29.850 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:29.850 "md_size": 32, 00:32:29.850 "md_interleave": false, 00:32:29.850 "dif_type": 0, 00:32:29.850 "assigned_rate_limits": { 00:32:29.850 "rw_ios_per_sec": 0, 00:32:29.850 "rw_mbytes_per_sec": 0, 00:32:29.850 "r_mbytes_per_sec": 0, 00:32:29.850 "w_mbytes_per_sec": 0 00:32:29.850 }, 00:32:29.850 "claimed": true, 00:32:29.850 "claim_type": "exclusive_write", 00:32:29.850 "zoned": false, 00:32:29.850 "supported_io_types": { 00:32:29.850 "read": true, 00:32:29.850 "write": true, 00:32:29.850 "unmap": true, 00:32:29.850 "flush": true, 00:32:29.850 "reset": true, 00:32:29.850 "nvme_admin": false, 00:32:29.850 "nvme_io": false, 00:32:29.850 "nvme_io_md": false, 00:32:29.850 "write_zeroes": true, 00:32:29.850 "zcopy": true, 00:32:29.850 "get_zone_info": false, 00:32:29.850 "zone_management": false, 00:32:29.850 "zone_append": false, 00:32:29.850 "compare": false, 00:32:29.850 "compare_and_write": false, 00:32:29.850 "abort": true, 00:32:29.850 "seek_hole": false, 00:32:29.850 "seek_data": false, 00:32:29.850 "copy": true, 00:32:29.850 "nvme_iov_md": false 00:32:29.850 }, 00:32:29.850 "memory_domains": [ 00:32:29.850 { 00:32:29.850 "dma_device_id": "system", 00:32:29.850 "dma_device_type": 1 00:32:29.850 }, 00:32:29.850 { 00:32:29.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:29.850 "dma_device_type": 2 00:32:29.850 } 00:32:29.850 ], 00:32:29.850 
"driver_specific": { 00:32:29.850 "passthru": { 00:32:29.850 "name": "pt1", 00:32:29.850 "base_bdev_name": "malloc1" 00:32:29.850 } 00:32:29.850 } 00:32:29.850 }' 00:32:29.850 04:27:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:29.850 04:27:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:29.850 04:27:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:32:30.111 04:27:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:30.111 04:27:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:30.111 04:27:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:32:30.111 04:27:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:30.111 04:27:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:30.111 04:27:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:32:30.111 04:27:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:30.111 04:27:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:30.111 04:27:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:32:30.111 04:27:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:30.111 04:27:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:32:30.111 04:27:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:30.410 04:27:39 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:30.410 "name": "pt2", 00:32:30.410 "aliases": [ 00:32:30.410 "00000000-0000-0000-0000-000000000002" 00:32:30.410 ], 00:32:30.410 "product_name": "passthru", 00:32:30.410 "block_size": 4096, 00:32:30.410 "num_blocks": 8192, 00:32:30.410 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:30.410 "md_size": 32, 00:32:30.410 "md_interleave": false, 00:32:30.410 "dif_type": 0, 00:32:30.410 "assigned_rate_limits": { 00:32:30.410 "rw_ios_per_sec": 0, 00:32:30.410 "rw_mbytes_per_sec": 0, 00:32:30.410 "r_mbytes_per_sec": 0, 00:32:30.410 "w_mbytes_per_sec": 0 00:32:30.410 }, 00:32:30.410 "claimed": true, 00:32:30.410 "claim_type": "exclusive_write", 00:32:30.410 "zoned": false, 00:32:30.410 "supported_io_types": { 00:32:30.410 "read": true, 00:32:30.410 "write": true, 00:32:30.410 "unmap": true, 00:32:30.410 "flush": true, 00:32:30.410 "reset": true, 00:32:30.410 "nvme_admin": false, 00:32:30.410 "nvme_io": false, 00:32:30.410 "nvme_io_md": false, 00:32:30.410 "write_zeroes": true, 00:32:30.410 "zcopy": true, 00:32:30.410 "get_zone_info": false, 00:32:30.410 "zone_management": false, 00:32:30.410 "zone_append": false, 00:32:30.410 "compare": false, 00:32:30.410 "compare_and_write": false, 00:32:30.410 "abort": true, 00:32:30.410 "seek_hole": false, 00:32:30.410 "seek_data": false, 00:32:30.410 "copy": true, 00:32:30.410 "nvme_iov_md": false 00:32:30.410 }, 00:32:30.410 "memory_domains": [ 00:32:30.410 { 00:32:30.410 "dma_device_id": "system", 00:32:30.410 "dma_device_type": 1 00:32:30.410 }, 00:32:30.410 { 00:32:30.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:30.410 "dma_device_type": 2 00:32:30.410 } 00:32:30.410 ], 00:32:30.410 "driver_specific": { 00:32:30.410 "passthru": { 00:32:30.410 "name": "pt2", 00:32:30.410 "base_bdev_name": "malloc2" 00:32:30.410 } 00:32:30.410 } 00:32:30.410 }' 00:32:30.410 04:27:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:30.411 
04:27:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:30.411 04:27:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:32:30.411 04:27:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:30.669 04:27:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:30.669 04:27:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:32:30.669 04:27:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:30.669 04:27:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:30.669 04:27:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:32:30.669 04:27:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:30.669 04:27:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:30.669 04:27:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:32:30.669 04:27:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:30.669 04:27:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:32:30.928 [2024-07-23 04:27:39.551589] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:30.928 04:27:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # '[' 00b7ea64-a8ac-4318-89eb-c8d6f8d35bfc '!=' 00b7ea64-a8ac-4318-89eb-c8d6f8d35bfc ']' 00:32:30.928 04:27:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:32:30.928 04:27:39 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:32:30.928 04:27:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:32:30.928 04:27:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:32:31.188 [2024-07-23 04:27:39.783871] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:32:31.188 04:27:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:31.188 04:27:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:31.188 04:27:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:31.188 04:27:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:31.188 04:27:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:31.188 04:27:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:31.188 04:27:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:31.188 04:27:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:31.188 04:27:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:31.188 04:27:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:31.188 04:27:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:31.188 04:27:39 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:31.447 04:27:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:31.447 "name": "raid_bdev1", 00:32:31.447 "uuid": "00b7ea64-a8ac-4318-89eb-c8d6f8d35bfc", 00:32:31.447 "strip_size_kb": 0, 00:32:31.447 "state": "online", 00:32:31.447 "raid_level": "raid1", 00:32:31.447 "superblock": true, 00:32:31.447 "num_base_bdevs": 2, 00:32:31.447 "num_base_bdevs_discovered": 1, 00:32:31.447 "num_base_bdevs_operational": 1, 00:32:31.447 "base_bdevs_list": [ 00:32:31.447 { 00:32:31.447 "name": null, 00:32:31.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:31.447 "is_configured": false, 00:32:31.447 "data_offset": 256, 00:32:31.447 "data_size": 7936 00:32:31.447 }, 00:32:31.447 { 00:32:31.447 "name": "pt2", 00:32:31.447 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:31.447 "is_configured": true, 00:32:31.447 "data_offset": 256, 00:32:31.447 "data_size": 7936 00:32:31.447 } 00:32:31.447 ] 00:32:31.447 }' 00:32:31.447 04:27:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:31.447 04:27:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:32.385 04:27:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:32:32.385 [2024-07-23 04:27:41.087367] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:32.385 [2024-07-23 04:27:41.087402] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:32.385 [2024-07-23 04:27:41.087490] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:32.385 [2024-07-23 04:27:41.087546] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:32:32.385 [2024-07-23 04:27:41.087565] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041a80 name raid_bdev1, state offline 00:32:32.385 04:27:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:32.385 04:27:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:32:32.644 04:27:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:32:32.644 04:27:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:32:32.644 04:27:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:32:32.644 04:27:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:32:32.644 04:27:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:32:32.904 04:27:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:32:32.904 04:27:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:32:32.904 04:27:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:32:32.904 04:27:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:32:32.904 04:27:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@518 -- # i=1 00:32:32.904 04:27:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:33.163 [2024-07-23 04:27:41.769204] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:33.163 [2024-07-23 04:27:41.769277] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:33.163 [2024-07-23 04:27:41.769300] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042080 00:32:33.163 [2024-07-23 04:27:41.769317] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:33.163 [2024-07-23 04:27:41.771807] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:33.163 [2024-07-23 04:27:41.771841] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:33.163 [2024-07-23 04:27:41.771898] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:32:33.163 [2024-07-23 04:27:41.771967] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:33.163 [2024-07-23 04:27:41.772111] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042680 00:32:33.163 [2024-07-23 04:27:41.772128] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:33.163 [2024-07-23 04:27:41.772217] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:32:33.163 [2024-07-23 04:27:41.772414] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042680 00:32:33.163 [2024-07-23 04:27:41.772427] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000042680 00:32:33.163 [2024-07-23 04:27:41.772566] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:33.163 pt2 00:32:33.163 04:27:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:33.163 04:27:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:33.163 04:27:41 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:33.163 04:27:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:33.163 04:27:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:33.163 04:27:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:33.163 04:27:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:33.163 04:27:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:33.163 04:27:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:33.163 04:27:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:33.163 04:27:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:33.163 04:27:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:33.422 04:27:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:33.422 "name": "raid_bdev1", 00:32:33.422 "uuid": "00b7ea64-a8ac-4318-89eb-c8d6f8d35bfc", 00:32:33.422 "strip_size_kb": 0, 00:32:33.423 "state": "online", 00:32:33.423 "raid_level": "raid1", 00:32:33.423 "superblock": true, 00:32:33.423 "num_base_bdevs": 2, 00:32:33.423 "num_base_bdevs_discovered": 1, 00:32:33.423 "num_base_bdevs_operational": 1, 00:32:33.423 "base_bdevs_list": [ 00:32:33.423 { 00:32:33.423 "name": null, 00:32:33.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:33.423 "is_configured": false, 00:32:33.423 "data_offset": 256, 00:32:33.423 "data_size": 7936 00:32:33.423 }, 
00:32:33.423 { 00:32:33.423 "name": "pt2", 00:32:33.423 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:33.423 "is_configured": true, 00:32:33.423 "data_offset": 256, 00:32:33.423 "data_size": 7936 00:32:33.423 } 00:32:33.423 ] 00:32:33.423 }' 00:32:33.423 04:27:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:33.423 04:27:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:33.991 04:27:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:32:34.250 [2024-07-23 04:27:42.787942] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:34.250 [2024-07-23 04:27:42.787976] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:34.250 [2024-07-23 04:27:42.788051] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:34.250 [2024-07-23 04:27:42.788111] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:34.250 [2024-07-23 04:27:42.788127] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042680 name raid_bdev1, state offline 00:32:34.250 04:27:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:34.250 04:27:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:32:34.819 04:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:32:34.819 04:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:32:34.819 04:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 
00:32:34.819 04:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:34.819 [2024-07-23 04:27:43.457682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:34.819 [2024-07-23 04:27:43.457741] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:34.819 [2024-07-23 04:27:43.457768] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042980 00:32:34.819 [2024-07-23 04:27:43.457783] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:34.819 [2024-07-23 04:27:43.460334] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:34.819 [2024-07-23 04:27:43.460369] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:34.819 [2024-07-23 04:27:43.460437] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:32:34.819 [2024-07-23 04:27:43.460498] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:34.819 [2024-07-23 04:27:43.460725] bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:32:34.819 [2024-07-23 04:27:43.460743] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:34.819 [2024-07-23 04:27:43.460768] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042f80 name raid_bdev1, state configuring 00:32:34.819 [2024-07-23 04:27:43.460859] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:34.819 [2024-07-23 04:27:43.460943] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000043280 00:32:34.819 [2024-07-23 04:27:43.460957] 
bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:34.819 [2024-07-23 04:27:43.461038] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:32:34.819 [2024-07-23 04:27:43.461234] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000043280 00:32:34.819 [2024-07-23 04:27:43.461252] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000043280 00:32:34.819 [2024-07-23 04:27:43.461383] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:34.819 pt1 00:32:34.819 04:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:32:34.819 04:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:34.819 04:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:34.819 04:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:34.819 04:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:34.819 04:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:34.819 04:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:34.819 04:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:34.819 04:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:34.819 04:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:34.819 04:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:34.819 04:27:43 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:34.819 04:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:35.079 04:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:35.079 "name": "raid_bdev1", 00:32:35.079 "uuid": "00b7ea64-a8ac-4318-89eb-c8d6f8d35bfc", 00:32:35.079 "strip_size_kb": 0, 00:32:35.079 "state": "online", 00:32:35.079 "raid_level": "raid1", 00:32:35.079 "superblock": true, 00:32:35.079 "num_base_bdevs": 2, 00:32:35.079 "num_base_bdevs_discovered": 1, 00:32:35.079 "num_base_bdevs_operational": 1, 00:32:35.079 "base_bdevs_list": [ 00:32:35.079 { 00:32:35.079 "name": null, 00:32:35.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:35.079 "is_configured": false, 00:32:35.079 "data_offset": 256, 00:32:35.079 "data_size": 7936 00:32:35.079 }, 00:32:35.079 { 00:32:35.079 "name": "pt2", 00:32:35.079 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:35.079 "is_configured": true, 00:32:35.079 "data_offset": 256, 00:32:35.079 "data_size": 7936 00:32:35.079 } 00:32:35.079 ] 00:32:35.079 }' 00:32:35.079 04:27:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:35.079 04:27:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:36.017 04:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:32:36.017 04:27:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:32:36.585 04:27:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 
00:32:36.585 04:27:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:36.585 04:27:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:32:36.585 [2024-07-23 04:27:45.287055] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:36.585 04:27:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # '[' 00b7ea64-a8ac-4318-89eb-c8d6f8d35bfc '!=' 00b7ea64-a8ac-4318-89eb-c8d6f8d35bfc ']' 00:32:36.585 04:27:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@562 -- # killprocess 2816531 00:32:36.585 04:27:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@948 -- # '[' -z 2816531 ']' 00:32:36.585 04:27:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # kill -0 2816531 00:32:36.585 04:27:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@953 -- # uname 00:32:36.585 04:27:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:36.585 04:27:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2816531 00:32:36.844 04:27:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:36.844 04:27:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:36.844 04:27:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2816531' 00:32:36.844 killing process with pid 2816531 00:32:36.844 04:27:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@967 -- # kill 2816531 00:32:36.845 04:27:45 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@972 -- # wait 2816531 00:32:36.845 [2024-07-23 04:27:45.387926] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:36.845 [2024-07-23 04:27:45.388022] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:36.845 [2024-07-23 04:27:45.388078] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:36.845 [2024-07-23 04:27:45.388097] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000043280 name raid_bdev1, state offline 00:32:37.104 [2024-07-23 04:27:45.674699] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:39.009 04:27:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@564 -- # return 0 00:32:39.009 00:32:39.009 real 0m17.213s 00:32:39.009 user 0m29.577s 00:32:39.009 sys 0m2.850s 00:32:39.009 04:27:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:39.009 04:27:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:39.009 ************************************ 00:32:39.010 END TEST raid_superblock_test_md_separate 00:32:39.010 ************************************ 00:32:39.010 04:27:47 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:32:39.010 04:27:47 bdev_raid -- bdev/bdev_raid.sh@907 -- # '[' true = true ']' 00:32:39.010 04:27:47 bdev_raid -- bdev/bdev_raid.sh@908 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:32:39.010 04:27:47 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:32:39.010 04:27:47 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:39.010 04:27:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:39.010 ************************************ 00:32:39.010 START TEST raid_rebuild_test_sb_md_separate 00:32:39.010 ************************************ 00:32:39.010 04:27:47 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false true 00:32:39.010 04:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:32:39.010 04:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:32:39.010 04:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:32:39.010 04:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:32:39.010 04:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local verify=true 00:32:39.010 04:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:32:39.010 04:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:32:39.010 04:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:32:39.010 04:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:32:39.010 04:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:32:39.010 04:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:32:39.010 04:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:32:39.010 04:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:32:39.010 04:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:32:39.010 04:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:32:39.010 04:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:32:39.010 
04:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local strip_size 00:32:39.010 04:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local create_arg 00:32:39.010 04:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:32:39.010 04:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local data_offset 00:32:39.010 04:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:32:39.010 04:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:32:39.010 04:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:32:39.010 04:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:32:39.010 04:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # raid_pid=2819681 00:32:39.010 04:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # waitforlisten 2819681 /var/tmp/spdk-raid.sock 00:32:39.010 04:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@829 -- # '[' -z 2819681 ']' 00:32:39.010 04:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:32:39.010 04:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:32:39.010 04:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:39.010 04:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:32:39.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:32:39.010 04:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:39.010 04:27:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:39.010 [2024-07-23 04:27:47.457917] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:32:39.010 [2024-07-23 04:27:47.458039] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2819681 ] 00:32:39.010 I/O size of 3145728 is greater than zero copy threshold (65536). 00:32:39.010 Zero copy mechanism will not be used. 00:32:39.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.010 EAL: Requested device 0000:3d:01.0 cannot be used 00:32:39.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.010 EAL: Requested device 0000:3d:01.1 cannot be used 00:32:39.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.010 EAL: Requested device 0000:3d:01.2 cannot be used 00:32:39.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.010 EAL: Requested device 0000:3d:01.3 cannot be used 00:32:39.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.010 EAL: Requested device 0000:3d:01.4 cannot be used 00:32:39.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.010 EAL: Requested device 0000:3d:01.5 cannot be used 00:32:39.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.010 EAL: Requested device 0000:3d:01.6 cannot be used 00:32:39.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.010 EAL: Requested device 0000:3d:01.7 cannot 
be used 00:32:39.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.010 EAL: Requested device 0000:3d:02.0 cannot be used 00:32:39.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.010 EAL: Requested device 0000:3d:02.1 cannot be used 00:32:39.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.010 EAL: Requested device 0000:3d:02.2 cannot be used 00:32:39.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.010 EAL: Requested device 0000:3d:02.3 cannot be used 00:32:39.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.010 EAL: Requested device 0000:3d:02.4 cannot be used 00:32:39.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.010 EAL: Requested device 0000:3d:02.5 cannot be used 00:32:39.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.010 EAL: Requested device 0000:3d:02.6 cannot be used 00:32:39.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.010 EAL: Requested device 0000:3d:02.7 cannot be used 00:32:39.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.010 EAL: Requested device 0000:3f:01.0 cannot be used 00:32:39.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.010 EAL: Requested device 0000:3f:01.1 cannot be used 00:32:39.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.010 EAL: Requested device 0000:3f:01.2 cannot be used 00:32:39.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.010 EAL: Requested device 0000:3f:01.3 cannot be used 00:32:39.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.010 EAL: Requested device 0000:3f:01.4 cannot be used 00:32:39.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.010 EAL: Requested device 0000:3f:01.5 cannot be used 00:32:39.010 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.010 EAL: Requested device 0000:3f:01.6 cannot be used 00:32:39.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.010 EAL: Requested device 0000:3f:01.7 cannot be used 00:32:39.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.010 EAL: Requested device 0000:3f:02.0 cannot be used 00:32:39.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.010 EAL: Requested device 0000:3f:02.1 cannot be used 00:32:39.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.010 EAL: Requested device 0000:3f:02.2 cannot be used 00:32:39.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.010 EAL: Requested device 0000:3f:02.3 cannot be used 00:32:39.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.010 EAL: Requested device 0000:3f:02.4 cannot be used 00:32:39.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.010 EAL: Requested device 0000:3f:02.5 cannot be used 00:32:39.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.010 EAL: Requested device 0000:3f:02.6 cannot be used 00:32:39.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.010 EAL: Requested device 0000:3f:02.7 cannot be used 00:32:39.010 [2024-07-23 04:27:47.681820] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:39.269 [2024-07-23 04:27:47.948921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:39.529 [2024-07-23 04:27:48.273962] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:39.529 [2024-07-23 04:27:48.273996] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:40.097 04:27:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:40.097 04:27:48 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@862 -- # return 0 00:32:40.097 04:27:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:32:40.097 04:27:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:32:40.097 BaseBdev1_malloc 00:32:40.097 04:27:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:40.666 [2024-07-23 04:27:49.349681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:32:40.666 [2024-07-23 04:27:49.349752] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:40.666 [2024-07-23 04:27:49.349785] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:32:40.666 [2024-07-23 04:27:49.349804] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:40.666 [2024-07-23 04:27:49.352314] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:40.666 [2024-07-23 04:27:49.352351] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:40.666 BaseBdev1 00:32:40.666 04:27:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:32:40.666 04:27:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:32:40.925 BaseBdev2_malloc 00:32:40.925 04:27:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:32:41.493 [2024-07-23 04:27:50.127122] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:32:41.493 [2024-07-23 04:27:50.127201] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:41.493 [2024-07-23 04:27:50.127229] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:32:41.493 [2024-07-23 04:27:50.127251] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:41.493 [2024-07-23 04:27:50.129781] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:41.493 [2024-07-23 04:27:50.129820] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:32:41.493 BaseBdev2 00:32:41.493 04:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:32:41.752 spare_malloc 00:32:41.752 04:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:32:42.320 spare_delay 00:32:42.320 04:27:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:32:42.579 [2024-07-23 04:27:51.142301] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:42.579 [2024-07-23 04:27:51.142361] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:42.579 [2024-07-23 04:27:51.142394] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480 00:32:42.579 [2024-07-23 04:27:51.142413] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:42.579 [2024-07-23 04:27:51.144989] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:42.579 [2024-07-23 04:27:51.145026] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:42.579 spare 00:32:42.579 04:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:32:42.579 [2024-07-23 04:27:51.358923] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:42.579 [2024-07-23 04:27:51.361273] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:42.579 [2024-07-23 04:27:51.361523] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041a80 00:32:42.579 [2024-07-23 04:27:51.361547] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:42.579 [2024-07-23 04:27:51.361650] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:32:42.579 [2024-07-23 04:27:51.361865] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041a80 00:32:42.579 [2024-07-23 04:27:51.361880] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041a80 00:32:42.579 [2024-07-23 04:27:51.362022] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:42.839 04:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:42.839 04:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=raid_bdev1 00:32:42.839 04:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:42.839 04:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:42.839 04:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:42.839 04:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:42.839 04:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:42.839 04:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:42.839 04:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:42.839 04:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:42.839 04:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:42.839 04:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:42.839 04:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:42.839 "name": "raid_bdev1", 00:32:42.839 "uuid": "906250d1-331c-484d-8b4c-05e1eab8581e", 00:32:42.839 "strip_size_kb": 0, 00:32:42.839 "state": "online", 00:32:42.839 "raid_level": "raid1", 00:32:42.839 "superblock": true, 00:32:42.839 "num_base_bdevs": 2, 00:32:42.839 "num_base_bdevs_discovered": 2, 00:32:42.839 "num_base_bdevs_operational": 2, 00:32:42.839 "base_bdevs_list": [ 00:32:42.839 { 00:32:42.839 "name": "BaseBdev1", 00:32:42.839 "uuid": "b1602933-322e-55f0-913d-c39bc40973eb", 00:32:42.839 "is_configured": true, 00:32:42.839 "data_offset": 
256, 00:32:42.839 "data_size": 7936 00:32:42.839 }, 00:32:42.839 { 00:32:42.839 "name": "BaseBdev2", 00:32:42.839 "uuid": "46c1a120-58a3-5913-9de3-156d5b82020f", 00:32:42.839 "is_configured": true, 00:32:42.839 "data_offset": 256, 00:32:42.839 "data_size": 7936 00:32:42.839 } 00:32:42.839 ] 00:32:42.839 }' 00:32:42.839 04:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:42.839 04:27:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:43.408 04:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:43.408 04:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:32:43.667 [2024-07-23 04:27:52.369951] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:43.667 04:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:32:43.667 04:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:43.667 04:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:32:43.926 04:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:32:43.926 04:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:32:43.926 04:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:32:43.926 04:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:32:43.926 04:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@627 -- # 
nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:32:43.926 04:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:43.926 04:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:32:43.926 04:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:43.926 04:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:32:43.926 04:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:43.927 04:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:32:43.927 04:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:43.927 04:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:43.927 04:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:32:44.186 [2024-07-23 04:27:52.814860] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:32:44.186 /dev/nbd0 00:32:44.186 04:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:44.186 04:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:44.186 04:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:32:44.186 04:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # local i 00:32:44.186 04:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:32:44.186 04:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@869 -- # (( i <= 20 )) 00:32:44.186 04:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:32:44.186 04:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # break 00:32:44.186 04:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:32:44.186 04:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:32:44.186 04:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:44.186 1+0 records in 00:32:44.186 1+0 records out 00:32:44.186 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000157517 s, 26.0 MB/s 00:32:44.186 04:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:44.186 04:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # size=4096 00:32:44.186 04:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:44.186 04:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:32:44.186 04:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # return 0 00:32:44.186 04:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:44.186 04:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:44.186 04:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:32:44.186 04:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@632 
-- # write_unit_size=1 00:32:44.186 04:27:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:32:45.127 7936+0 records in 00:32:45.127 7936+0 records out 00:32:45.127 32505856 bytes (33 MB, 31 MiB) copied, 0.797364 s, 40.8 MB/s 00:32:45.127 04:27:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:32:45.127 04:27:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:45.127 04:27:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:32:45.127 04:27:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:45.127 04:27:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:32:45.127 04:27:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:45.127 04:27:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:32:45.127 04:27:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:45.127 04:27:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:45.127 04:27:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:45.127 04:27:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:45.127 04:27:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:45.127 04:27:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:45.127 [2024-07-23 04:27:53.895270] 
bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:45.127 04:27:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:32:45.127 04:27:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:32:45.127 04:27:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:32:45.385 [2024-07-23 04:27:54.107925] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:45.385 04:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:45.385 04:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:45.385 04:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:45.385 04:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:45.385 04:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:45.385 04:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:45.385 04:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:45.385 04:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:45.385 04:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:45.385 04:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:45.385 04:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:45.385 04:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:45.656 04:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:45.656 "name": "raid_bdev1", 00:32:45.656 "uuid": "906250d1-331c-484d-8b4c-05e1eab8581e", 00:32:45.656 "strip_size_kb": 0, 00:32:45.656 "state": "online", 00:32:45.656 "raid_level": "raid1", 00:32:45.656 "superblock": true, 00:32:45.656 "num_base_bdevs": 2, 00:32:45.656 "num_base_bdevs_discovered": 1, 00:32:45.656 "num_base_bdevs_operational": 1, 00:32:45.656 "base_bdevs_list": [ 00:32:45.656 { 00:32:45.656 "name": null, 00:32:45.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:45.656 "is_configured": false, 00:32:45.656 "data_offset": 256, 00:32:45.656 "data_size": 7936 00:32:45.656 }, 00:32:45.656 { 00:32:45.656 "name": "BaseBdev2", 00:32:45.656 "uuid": "46c1a120-58a3-5913-9de3-156d5b82020f", 00:32:45.656 "is_configured": true, 00:32:45.656 "data_offset": 256, 00:32:45.656 "data_size": 7936 00:32:45.656 } 00:32:45.656 ] 00:32:45.656 }' 00:32:45.656 04:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:45.656 04:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:46.271 04:27:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:32:46.530 [2024-07-23 04:27:55.090579] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:46.530 [2024-07-23 04:27:55.112629] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001a4410 00:32:46.530 [2024-07-23 04:27:55.114935] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 
00:32:46.530 04:27:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # sleep 1 00:32:47.467 04:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:47.467 04:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:47.467 04:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:47.467 04:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:47.467 04:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:47.467 04:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:47.467 04:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:47.727 04:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:47.727 "name": "raid_bdev1", 00:32:47.727 "uuid": "906250d1-331c-484d-8b4c-05e1eab8581e", 00:32:47.727 "strip_size_kb": 0, 00:32:47.727 "state": "online", 00:32:47.727 "raid_level": "raid1", 00:32:47.727 "superblock": true, 00:32:47.727 "num_base_bdevs": 2, 00:32:47.727 "num_base_bdevs_discovered": 2, 00:32:47.727 "num_base_bdevs_operational": 2, 00:32:47.727 "process": { 00:32:47.727 "type": "rebuild", 00:32:47.727 "target": "spare", 00:32:47.727 "progress": { 00:32:47.727 "blocks": 3072, 00:32:47.727 "percent": 38 00:32:47.727 } 00:32:47.727 }, 00:32:47.727 "base_bdevs_list": [ 00:32:47.727 { 00:32:47.727 "name": "spare", 00:32:47.727 "uuid": "2a2a6ef3-a571-5707-a050-2f991862e6c9", 00:32:47.727 "is_configured": true, 00:32:47.727 "data_offset": 256, 00:32:47.727 "data_size": 7936 00:32:47.727 
}, 00:32:47.727 { 00:32:47.727 "name": "BaseBdev2", 00:32:47.727 "uuid": "46c1a120-58a3-5913-9de3-156d5b82020f", 00:32:47.727 "is_configured": true, 00:32:47.727 "data_offset": 256, 00:32:47.727 "data_size": 7936 00:32:47.727 } 00:32:47.727 ] 00:32:47.727 }' 00:32:47.727 04:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:47.727 04:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:47.727 04:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:47.727 04:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:47.727 04:27:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:32:48.295 [2024-07-23 04:27:56.970342] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:48.295 [2024-07-23 04:27:57.030470] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:48.295 [2024-07-23 04:27:57.030530] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:48.295 [2024-07-23 04:27:57.030551] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:48.295 [2024-07-23 04:27:57.030575] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:48.555 04:27:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:48.555 04:27:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:48.555 04:27:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local 
expected_state=online 00:32:48.555 04:27:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:48.555 04:27:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:48.555 04:27:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:48.555 04:27:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:48.555 04:27:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:48.555 04:27:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:48.555 04:27:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:48.555 04:27:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:48.555 04:27:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:48.555 04:27:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:48.555 "name": "raid_bdev1", 00:32:48.555 "uuid": "906250d1-331c-484d-8b4c-05e1eab8581e", 00:32:48.555 "strip_size_kb": 0, 00:32:48.555 "state": "online", 00:32:48.555 "raid_level": "raid1", 00:32:48.555 "superblock": true, 00:32:48.555 "num_base_bdevs": 2, 00:32:48.555 "num_base_bdevs_discovered": 1, 00:32:48.555 "num_base_bdevs_operational": 1, 00:32:48.555 "base_bdevs_list": [ 00:32:48.555 { 00:32:48.555 "name": null, 00:32:48.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:48.555 "is_configured": false, 00:32:48.555 "data_offset": 256, 00:32:48.555 "data_size": 7936 00:32:48.555 }, 00:32:48.555 { 00:32:48.555 "name": "BaseBdev2", 00:32:48.555 "uuid": 
"46c1a120-58a3-5913-9de3-156d5b82020f", 00:32:48.555 "is_configured": true, 00:32:48.555 "data_offset": 256, 00:32:48.555 "data_size": 7936 00:32:48.555 } 00:32:48.555 ] 00:32:48.555 }' 00:32:48.555 04:27:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:48.555 04:27:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:49.125 04:27:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:49.125 04:27:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:49.125 04:27:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:49.125 04:27:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:49.125 04:27:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:49.125 04:27:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:49.125 04:27:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:49.384 04:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:49.384 "name": "raid_bdev1", 00:32:49.384 "uuid": "906250d1-331c-484d-8b4c-05e1eab8581e", 00:32:49.384 "strip_size_kb": 0, 00:32:49.384 "state": "online", 00:32:49.384 "raid_level": "raid1", 00:32:49.384 "superblock": true, 00:32:49.384 "num_base_bdevs": 2, 00:32:49.384 "num_base_bdevs_discovered": 1, 00:32:49.384 "num_base_bdevs_operational": 1, 00:32:49.384 "base_bdevs_list": [ 00:32:49.384 { 00:32:49.384 "name": null, 00:32:49.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:49.384 
"is_configured": false, 00:32:49.384 "data_offset": 256, 00:32:49.384 "data_size": 7936 00:32:49.384 }, 00:32:49.384 { 00:32:49.384 "name": "BaseBdev2", 00:32:49.384 "uuid": "46c1a120-58a3-5913-9de3-156d5b82020f", 00:32:49.384 "is_configured": true, 00:32:49.384 "data_offset": 256, 00:32:49.384 "data_size": 7936 00:32:49.384 } 00:32:49.384 ] 00:32:49.384 }' 00:32:49.384 04:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:49.384 04:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:49.385 04:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:49.644 04:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:49.644 04:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:32:49.903 [2024-07-23 04:27:58.678610] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:50.162 [2024-07-23 04:27:58.702796] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001a44e0 00:32:50.163 [2024-07-23 04:27:58.705149] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:50.163 04:27:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # sleep 1 00:32:51.100 04:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:51.100 04:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:51.101 04:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:51.101 04:27:59 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:51.101 04:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:51.101 04:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:51.101 04:27:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:51.670 04:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:51.670 "name": "raid_bdev1", 00:32:51.670 "uuid": "906250d1-331c-484d-8b4c-05e1eab8581e", 00:32:51.670 "strip_size_kb": 0, 00:32:51.670 "state": "online", 00:32:51.670 "raid_level": "raid1", 00:32:51.670 "superblock": true, 00:32:51.670 "num_base_bdevs": 2, 00:32:51.670 "num_base_bdevs_discovered": 2, 00:32:51.670 "num_base_bdevs_operational": 2, 00:32:51.670 "process": { 00:32:51.670 "type": "rebuild", 00:32:51.670 "target": "spare", 00:32:51.670 "progress": { 00:32:51.670 "blocks": 3840, 00:32:51.670 "percent": 48 00:32:51.670 } 00:32:51.670 }, 00:32:51.670 "base_bdevs_list": [ 00:32:51.670 { 00:32:51.670 "name": "spare", 00:32:51.670 "uuid": "2a2a6ef3-a571-5707-a050-2f991862e6c9", 00:32:51.670 "is_configured": true, 00:32:51.670 "data_offset": 256, 00:32:51.670 "data_size": 7936 00:32:51.670 }, 00:32:51.670 { 00:32:51.670 "name": "BaseBdev2", 00:32:51.670 "uuid": "46c1a120-58a3-5913-9de3-156d5b82020f", 00:32:51.670 "is_configured": true, 00:32:51.670 "data_offset": 256, 00:32:51.670 "data_size": 7936 00:32:51.670 } 00:32:51.670 ] 00:32:51.670 }' 00:32:51.670 04:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:51.670 04:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:32:51.670 04:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:51.670 04:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:51.670 04:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:32:51.670 04:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:32:51.670 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:32:51.670 04:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:32:51.670 04:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:32:51.670 04:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:32:51.670 04:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@705 -- # local timeout=1166 00:32:51.670 04:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:32:51.670 04:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:51.670 04:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:51.670 04:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:51.670 04:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:51.670 04:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:51.670 04:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:51.670 04:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:51.929 04:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:51.929 "name": "raid_bdev1", 00:32:51.929 "uuid": "906250d1-331c-484d-8b4c-05e1eab8581e", 00:32:51.929 "strip_size_kb": 0, 00:32:51.929 "state": "online", 00:32:51.929 "raid_level": "raid1", 00:32:51.929 "superblock": true, 00:32:51.929 "num_base_bdevs": 2, 00:32:51.929 "num_base_bdevs_discovered": 2, 00:32:51.929 "num_base_bdevs_operational": 2, 00:32:51.929 "process": { 00:32:51.929 "type": "rebuild", 00:32:51.929 "target": "spare", 00:32:51.929 "progress": { 00:32:51.929 "blocks": 4608, 00:32:51.929 "percent": 58 00:32:51.929 } 00:32:51.929 }, 00:32:51.929 "base_bdevs_list": [ 00:32:51.929 { 00:32:51.929 "name": "spare", 00:32:51.929 "uuid": "2a2a6ef3-a571-5707-a050-2f991862e6c9", 00:32:51.929 "is_configured": true, 00:32:51.929 "data_offset": 256, 00:32:51.929 "data_size": 7936 00:32:51.929 }, 00:32:51.929 { 00:32:51.929 "name": "BaseBdev2", 00:32:51.929 "uuid": "46c1a120-58a3-5913-9de3-156d5b82020f", 00:32:51.929 "is_configured": true, 00:32:51.929 "data_offset": 256, 00:32:51.929 "data_size": 7936 00:32:51.929 } 00:32:51.929 ] 00:32:51.929 }' 00:32:51.929 04:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:51.929 04:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:51.929 04:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:51.929 04:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:51.929 04:28:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@710 -- # sleep 1 00:32:53.307 04:28:01 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:32:53.307 04:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:53.307 04:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:53.307 04:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:53.307 04:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:53.307 04:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:53.307 04:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:53.307 04:28:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:53.307 [2024-07-23 04:28:01.830387] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:32:53.307 [2024-07-23 04:28:01.830473] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:32:53.307 [2024-07-23 04:28:01.830576] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:53.566 04:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:53.566 "name": "raid_bdev1", 00:32:53.566 "uuid": "906250d1-331c-484d-8b4c-05e1eab8581e", 00:32:53.566 "strip_size_kb": 0, 00:32:53.566 "state": "online", 00:32:53.566 "raid_level": "raid1", 00:32:53.566 "superblock": true, 00:32:53.566 "num_base_bdevs": 2, 00:32:53.566 "num_base_bdevs_discovered": 2, 00:32:53.566 "num_base_bdevs_operational": 2, 00:32:53.566 "base_bdevs_list": [ 00:32:53.566 { 00:32:53.566 "name": "spare", 
00:32:53.566 "uuid": "2a2a6ef3-a571-5707-a050-2f991862e6c9", 00:32:53.566 "is_configured": true, 00:32:53.566 "data_offset": 256, 00:32:53.566 "data_size": 7936 00:32:53.566 }, 00:32:53.567 { 00:32:53.567 "name": "BaseBdev2", 00:32:53.567 "uuid": "46c1a120-58a3-5913-9de3-156d5b82020f", 00:32:53.567 "is_configured": true, 00:32:53.567 "data_offset": 256, 00:32:53.567 "data_size": 7936 00:32:53.567 } 00:32:53.567 ] 00:32:53.567 }' 00:32:53.567 04:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:53.567 04:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:32:53.567 04:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:53.567 04:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:32:53.567 04:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # break 00:32:53.567 04:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:53.567 04:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:53.567 04:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:53.567 04:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:53.567 04:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:53.567 04:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:53.567 04:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:32:53.826 04:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:53.826 "name": "raid_bdev1", 00:32:53.826 "uuid": "906250d1-331c-484d-8b4c-05e1eab8581e", 00:32:53.826 "strip_size_kb": 0, 00:32:53.826 "state": "online", 00:32:53.826 "raid_level": "raid1", 00:32:53.826 "superblock": true, 00:32:53.826 "num_base_bdevs": 2, 00:32:53.826 "num_base_bdevs_discovered": 2, 00:32:53.826 "num_base_bdevs_operational": 2, 00:32:53.826 "base_bdevs_list": [ 00:32:53.826 { 00:32:53.826 "name": "spare", 00:32:53.826 "uuid": "2a2a6ef3-a571-5707-a050-2f991862e6c9", 00:32:53.826 "is_configured": true, 00:32:53.826 "data_offset": 256, 00:32:53.826 "data_size": 7936 00:32:53.826 }, 00:32:53.826 { 00:32:53.826 "name": "BaseBdev2", 00:32:53.826 "uuid": "46c1a120-58a3-5913-9de3-156d5b82020f", 00:32:53.826 "is_configured": true, 00:32:53.826 "data_offset": 256, 00:32:53.826 "data_size": 7936 00:32:53.826 } 00:32:53.826 ] 00:32:53.826 }' 00:32:53.826 04:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:53.826 04:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:53.826 04:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:54.086 04:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:54.086 04:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:54.086 04:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:54.086 04:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:54.086 04:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:54.086 
04:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:54.086 04:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:54.086 04:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:54.086 04:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:54.086 04:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:54.086 04:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:54.086 04:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:54.086 04:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:54.086 04:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:54.086 "name": "raid_bdev1", 00:32:54.086 "uuid": "906250d1-331c-484d-8b4c-05e1eab8581e", 00:32:54.086 "strip_size_kb": 0, 00:32:54.086 "state": "online", 00:32:54.086 "raid_level": "raid1", 00:32:54.086 "superblock": true, 00:32:54.086 "num_base_bdevs": 2, 00:32:54.086 "num_base_bdevs_discovered": 2, 00:32:54.086 "num_base_bdevs_operational": 2, 00:32:54.086 "base_bdevs_list": [ 00:32:54.086 { 00:32:54.086 "name": "spare", 00:32:54.086 "uuid": "2a2a6ef3-a571-5707-a050-2f991862e6c9", 00:32:54.086 "is_configured": true, 00:32:54.086 "data_offset": 256, 00:32:54.086 "data_size": 7936 00:32:54.086 }, 00:32:54.086 { 00:32:54.086 "name": "BaseBdev2", 00:32:54.086 "uuid": "46c1a120-58a3-5913-9de3-156d5b82020f", 00:32:54.086 "is_configured": true, 00:32:54.086 "data_offset": 256, 00:32:54.086 "data_size": 7936 00:32:54.086 } 
00:32:54.086 ] 00:32:54.086 }' 00:32:54.086 04:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:54.086 04:28:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:54.654 04:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:32:54.913 [2024-07-23 04:28:03.616080] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:54.913 [2024-07-23 04:28:03.616118] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:54.913 [2024-07-23 04:28:03.616207] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:54.913 [2024-07-23 04:28:03.616290] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:54.913 [2024-07-23 04:28:03.616308] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041a80 name raid_bdev1, state offline 00:32:54.913 04:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:54.913 04:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # jq length 00:32:55.172 04:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:32:55.172 04:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:32:55.172 04:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:32:55.172 04:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:32:55.172 04:28:03 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:55.172 04:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:32:55.172 04:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:55.172 04:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:55.172 04:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:55.172 04:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:32:55.172 04:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:55.172 04:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:55.172 04:28:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:32:55.431 /dev/nbd0 00:32:55.431 04:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:55.431 04:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:55.431 04:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:32:55.431 04:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # local i 00:32:55.431 04:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:32:55.431 04:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:32:55.431 04:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:32:55.431 04:28:04 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # break 00:32:55.431 04:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:32:55.431 04:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:32:55.431 04:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:55.431 1+0 records in 00:32:55.431 1+0 records out 00:32:55.431 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264632 s, 15.5 MB/s 00:32:55.431 04:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:55.431 04:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # size=4096 00:32:55.431 04:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:55.431 04:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:32:55.431 04:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # return 0 00:32:55.431 04:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:55.431 04:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:55.431 04:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:32:55.691 /dev/nbd1 00:32:55.691 04:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:32:55.691 04:28:04 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:32:55.691 04:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:32:55.691 04:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # local i 00:32:55.691 04:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:32:55.691 04:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:32:55.691 04:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:32:55.691 04:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # break 00:32:55.691 04:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:32:55.691 04:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:32:55.691 04:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:55.691 1+0 records in 00:32:55.691 1+0 records out 00:32:55.691 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000323411 s, 12.7 MB/s 00:32:55.691 04:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:55.691 04:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # size=4096 00:32:55.691 04:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:55.691 04:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:32:55.691 04:28:04 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # return 0 00:32:55.691 04:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:55.691 04:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:55.691 04:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:32:55.951 04:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:32:55.951 04:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:55.951 04:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:55.951 04:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:55.951 04:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:32:55.951 04:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:55.951 04:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:32:56.210 04:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:56.210 04:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:56.210 04:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:56.210 04:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:56.210 04:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:56.210 04:28:04 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:56.210 04:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:32:56.210 04:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:32:56.210 04:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:56.210 04:28:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:32:56.469 04:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:32:56.469 04:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:32:56.469 04:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:32:56.469 04:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:56.469 04:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:56.469 04:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:32:56.469 04:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:32:56.469 04:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:32:56.469 04:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:32:56.469 04:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:32:56.728 04:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:32:56.987 [2024-07-23 04:28:05.573278] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:56.987 [2024-07-23 04:28:05.573332] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:56.987 [2024-07-23 04:28:05.573362] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043280 00:32:56.987 [2024-07-23 04:28:05.573377] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:56.987 [2024-07-23 04:28:05.575940] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:56.987 [2024-07-23 04:28:05.575973] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:56.987 [2024-07-23 04:28:05.576049] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:32:56.987 [2024-07-23 04:28:05.576129] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:56.987 [2024-07-23 04:28:05.576335] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:56.987 spare 00:32:56.987 04:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:56.987 04:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:56.987 04:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:56.987 04:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:56.987 04:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:56.987 04:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:32:56.987 04:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:56.987 04:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:56.987 04:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:56.987 04:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:56.987 04:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:56.987 04:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:56.987 [2024-07-23 04:28:05.676701] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000043880 00:32:56.987 [2024-07-23 04:28:05.676734] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:56.987 [2024-07-23 04:28:05.676835] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c9390 00:32:56.987 [2024-07-23 04:28:05.677052] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000043880 00:32:56.987 [2024-07-23 04:28:05.677067] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000043880 00:32:56.987 [2024-07-23 04:28:05.677227] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:57.247 04:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:57.247 "name": "raid_bdev1", 00:32:57.247 "uuid": "906250d1-331c-484d-8b4c-05e1eab8581e", 00:32:57.247 "strip_size_kb": 0, 00:32:57.247 "state": "online", 00:32:57.247 "raid_level": "raid1", 00:32:57.247 "superblock": true, 00:32:57.247 "num_base_bdevs": 2, 
00:32:57.247 "num_base_bdevs_discovered": 2, 00:32:57.247 "num_base_bdevs_operational": 2, 00:32:57.247 "base_bdevs_list": [ 00:32:57.247 { 00:32:57.247 "name": "spare", 00:32:57.247 "uuid": "2a2a6ef3-a571-5707-a050-2f991862e6c9", 00:32:57.247 "is_configured": true, 00:32:57.247 "data_offset": 256, 00:32:57.247 "data_size": 7936 00:32:57.247 }, 00:32:57.247 { 00:32:57.247 "name": "BaseBdev2", 00:32:57.247 "uuid": "46c1a120-58a3-5913-9de3-156d5b82020f", 00:32:57.247 "is_configured": true, 00:32:57.247 "data_offset": 256, 00:32:57.247 "data_size": 7936 00:32:57.247 } 00:32:57.247 ] 00:32:57.247 }' 00:32:57.247 04:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:57.247 04:28:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:57.814 04:28:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:57.814 04:28:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:57.814 04:28:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:57.814 04:28:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:57.814 04:28:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:57.814 04:28:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:57.814 04:28:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:58.073 04:28:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:58.073 "name": "raid_bdev1", 00:32:58.073 "uuid": "906250d1-331c-484d-8b4c-05e1eab8581e", 
00:32:58.073 "strip_size_kb": 0, 00:32:58.073 "state": "online", 00:32:58.073 "raid_level": "raid1", 00:32:58.073 "superblock": true, 00:32:58.073 "num_base_bdevs": 2, 00:32:58.073 "num_base_bdevs_discovered": 2, 00:32:58.073 "num_base_bdevs_operational": 2, 00:32:58.073 "base_bdevs_list": [ 00:32:58.073 { 00:32:58.073 "name": "spare", 00:32:58.073 "uuid": "2a2a6ef3-a571-5707-a050-2f991862e6c9", 00:32:58.073 "is_configured": true, 00:32:58.073 "data_offset": 256, 00:32:58.073 "data_size": 7936 00:32:58.073 }, 00:32:58.073 { 00:32:58.073 "name": "BaseBdev2", 00:32:58.073 "uuid": "46c1a120-58a3-5913-9de3-156d5b82020f", 00:32:58.073 "is_configured": true, 00:32:58.073 "data_offset": 256, 00:32:58.073 "data_size": 7936 00:32:58.073 } 00:32:58.073 ] 00:32:58.073 }' 00:32:58.073 04:28:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:58.073 04:28:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:58.073 04:28:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:58.073 04:28:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:58.073 04:28:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:58.073 04:28:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:32:58.331 04:28:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:32:58.331 04:28:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:32:58.590 [2024-07-23 04:28:07.133613] 
bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:58.590 04:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:58.590 04:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:58.590 04:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:58.590 04:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:58.590 04:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:58.590 04:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:58.590 04:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:58.590 04:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:58.590 04:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:58.590 04:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:58.590 04:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:58.590 04:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:58.849 04:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:58.849 "name": "raid_bdev1", 00:32:58.849 "uuid": "906250d1-331c-484d-8b4c-05e1eab8581e", 00:32:58.849 "strip_size_kb": 0, 00:32:58.849 "state": "online", 00:32:58.849 "raid_level": "raid1", 00:32:58.849 "superblock": true, 00:32:58.849 
"num_base_bdevs": 2, 00:32:58.849 "num_base_bdevs_discovered": 1, 00:32:58.849 "num_base_bdevs_operational": 1, 00:32:58.849 "base_bdevs_list": [ 00:32:58.849 { 00:32:58.849 "name": null, 00:32:58.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:58.849 "is_configured": false, 00:32:58.849 "data_offset": 256, 00:32:58.849 "data_size": 7936 00:32:58.849 }, 00:32:58.849 { 00:32:58.849 "name": "BaseBdev2", 00:32:58.849 "uuid": "46c1a120-58a3-5913-9de3-156d5b82020f", 00:32:58.849 "is_configured": true, 00:32:58.849 "data_offset": 256, 00:32:58.849 "data_size": 7936 00:32:58.849 } 00:32:58.849 ] 00:32:58.849 }' 00:32:58.849 04:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:58.849 04:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:59.417 04:28:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:32:59.417 [2024-07-23 04:28:08.160411] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:59.417 [2024-07-23 04:28:08.160617] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:32:59.417 [2024-07-23 04:28:08.160642] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:32:59.417 [2024-07-23 04:28:08.160678] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:59.417 [2024-07-23 04:28:08.184695] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c9460 00:32:59.417 [2024-07-23 04:28:08.187016] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:59.715 04:28:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # sleep 1 00:33:00.660 04:28:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:00.660 04:28:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:00.660 04:28:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:00.660 04:28:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:00.660 04:28:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:00.660 04:28:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:00.660 04:28:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:00.660 04:28:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:00.660 "name": "raid_bdev1", 00:33:00.660 "uuid": "906250d1-331c-484d-8b4c-05e1eab8581e", 00:33:00.660 "strip_size_kb": 0, 00:33:00.660 "state": "online", 00:33:00.660 "raid_level": "raid1", 00:33:00.660 "superblock": true, 00:33:00.660 "num_base_bdevs": 2, 00:33:00.660 "num_base_bdevs_discovered": 2, 00:33:00.660 "num_base_bdevs_operational": 2, 00:33:00.660 "process": { 00:33:00.660 "type": "rebuild", 00:33:00.660 
"target": "spare", 00:33:00.660 "progress": { 00:33:00.660 "blocks": 3072, 00:33:00.660 "percent": 38 00:33:00.660 } 00:33:00.660 }, 00:33:00.660 "base_bdevs_list": [ 00:33:00.660 { 00:33:00.660 "name": "spare", 00:33:00.660 "uuid": "2a2a6ef3-a571-5707-a050-2f991862e6c9", 00:33:00.660 "is_configured": true, 00:33:00.660 "data_offset": 256, 00:33:00.660 "data_size": 7936 00:33:00.660 }, 00:33:00.660 { 00:33:00.660 "name": "BaseBdev2", 00:33:00.660 "uuid": "46c1a120-58a3-5913-9de3-156d5b82020f", 00:33:00.660 "is_configured": true, 00:33:00.660 "data_offset": 256, 00:33:00.660 "data_size": 7936 00:33:00.660 } 00:33:00.660 ] 00:33:00.660 }' 00:33:00.974 04:28:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:00.974 04:28:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:00.974 04:28:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:00.974 04:28:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:00.974 04:28:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:33:00.974 [2024-07-23 04:28:09.732053] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:01.234 [2024-07-23 04:28:09.800150] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:01.234 [2024-07-23 04:28:09.800218] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:01.234 [2024-07-23 04:28:09.800239] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:01.234 [2024-07-23 04:28:09.800254] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 
00:33:01.234 04:28:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:01.234 04:28:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:01.234 04:28:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:01.234 04:28:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:01.234 04:28:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:01.234 04:28:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:01.234 04:28:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:01.234 04:28:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:01.234 04:28:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:01.234 04:28:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:01.234 04:28:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:01.234 04:28:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:01.493 04:28:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:01.493 "name": "raid_bdev1", 00:33:01.493 "uuid": "906250d1-331c-484d-8b4c-05e1eab8581e", 00:33:01.493 "strip_size_kb": 0, 00:33:01.493 "state": "online", 00:33:01.493 "raid_level": "raid1", 00:33:01.493 "superblock": true, 00:33:01.493 "num_base_bdevs": 2, 00:33:01.493 "num_base_bdevs_discovered": 1, 
00:33:01.493 "num_base_bdevs_operational": 1, 00:33:01.493 "base_bdevs_list": [ 00:33:01.493 { 00:33:01.493 "name": null, 00:33:01.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:01.493 "is_configured": false, 00:33:01.493 "data_offset": 256, 00:33:01.493 "data_size": 7936 00:33:01.493 }, 00:33:01.493 { 00:33:01.493 "name": "BaseBdev2", 00:33:01.493 "uuid": "46c1a120-58a3-5913-9de3-156d5b82020f", 00:33:01.493 "is_configured": true, 00:33:01.493 "data_offset": 256, 00:33:01.493 "data_size": 7936 00:33:01.493 } 00:33:01.493 ] 00:33:01.493 }' 00:33:01.493 04:28:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:01.493 04:28:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:02.060 04:28:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:33:02.320 [2024-07-23 04:28:10.863236] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:02.320 [2024-07-23 04:28:10.863307] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:02.320 [2024-07-23 04:28:10.863333] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043e80 00:33:02.320 [2024-07-23 04:28:10.863350] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:02.320 [2024-07-23 04:28:10.863674] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:02.320 [2024-07-23 04:28:10.863698] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:02.320 [2024-07-23 04:28:10.863771] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:33:02.320 [2024-07-23 04:28:10.863792] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) 
smaller than existing raid bdev raid_bdev1 (5) 00:33:02.320 [2024-07-23 04:28:10.863813] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:33:02.320 [2024-07-23 04:28:10.863854] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:02.320 [2024-07-23 04:28:10.886545] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c9530 00:33:02.320 spare 00:33:02.320 [2024-07-23 04:28:10.888883] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:02.320 04:28:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # sleep 1 00:33:03.257 04:28:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:03.257 04:28:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:03.257 04:28:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:03.257 04:28:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:03.257 04:28:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:03.257 04:28:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:03.257 04:28:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:03.516 04:28:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:03.516 "name": "raid_bdev1", 00:33:03.516 "uuid": "906250d1-331c-484d-8b4c-05e1eab8581e", 00:33:03.516 "strip_size_kb": 0, 00:33:03.516 "state": "online", 00:33:03.516 "raid_level": "raid1", 00:33:03.516 
"superblock": true, 00:33:03.516 "num_base_bdevs": 2, 00:33:03.516 "num_base_bdevs_discovered": 2, 00:33:03.516 "num_base_bdevs_operational": 2, 00:33:03.516 "process": { 00:33:03.516 "type": "rebuild", 00:33:03.516 "target": "spare", 00:33:03.516 "progress": { 00:33:03.516 "blocks": 3072, 00:33:03.516 "percent": 38 00:33:03.516 } 00:33:03.516 }, 00:33:03.516 "base_bdevs_list": [ 00:33:03.516 { 00:33:03.516 "name": "spare", 00:33:03.516 "uuid": "2a2a6ef3-a571-5707-a050-2f991862e6c9", 00:33:03.516 "is_configured": true, 00:33:03.516 "data_offset": 256, 00:33:03.516 "data_size": 7936 00:33:03.516 }, 00:33:03.516 { 00:33:03.516 "name": "BaseBdev2", 00:33:03.516 "uuid": "46c1a120-58a3-5913-9de3-156d5b82020f", 00:33:03.516 "is_configured": true, 00:33:03.516 "data_offset": 256, 00:33:03.516 "data_size": 7936 00:33:03.516 } 00:33:03.516 ] 00:33:03.516 }' 00:33:03.516 04:28:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:03.516 04:28:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:03.516 04:28:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:03.516 04:28:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:03.516 04:28:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:33:03.776 [2024-07-23 04:28:12.442956] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:03.776 [2024-07-23 04:28:12.502053] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:03.776 [2024-07-23 04:28:12.502114] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:03.776 [2024-07-23 04:28:12.502147] 
bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:03.776 [2024-07-23 04:28:12.502161] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:03.776 04:28:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:03.776 04:28:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:03.776 04:28:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:03.776 04:28:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:03.776 04:28:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:03.776 04:28:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:03.776 04:28:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:03.776 04:28:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:03.776 04:28:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:03.776 04:28:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:03.776 04:28:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:03.776 04:28:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:04.035 04:28:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:04.035 "name": "raid_bdev1", 00:33:04.035 "uuid": "906250d1-331c-484d-8b4c-05e1eab8581e", 
00:33:04.035 "strip_size_kb": 0, 00:33:04.035 "state": "online", 00:33:04.035 "raid_level": "raid1", 00:33:04.035 "superblock": true, 00:33:04.035 "num_base_bdevs": 2, 00:33:04.035 "num_base_bdevs_discovered": 1, 00:33:04.035 "num_base_bdevs_operational": 1, 00:33:04.035 "base_bdevs_list": [ 00:33:04.035 { 00:33:04.035 "name": null, 00:33:04.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:04.035 "is_configured": false, 00:33:04.035 "data_offset": 256, 00:33:04.035 "data_size": 7936 00:33:04.035 }, 00:33:04.035 { 00:33:04.035 "name": "BaseBdev2", 00:33:04.035 "uuid": "46c1a120-58a3-5913-9de3-156d5b82020f", 00:33:04.035 "is_configured": true, 00:33:04.035 "data_offset": 256, 00:33:04.035 "data_size": 7936 00:33:04.035 } 00:33:04.035 ] 00:33:04.035 }' 00:33:04.035 04:28:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:04.035 04:28:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:04.609 04:28:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:04.609 04:28:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:04.609 04:28:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:04.609 04:28:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:04.609 04:28:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:04.609 04:28:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:04.609 04:28:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:04.868 04:28:13 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:04.868 "name": "raid_bdev1", 00:33:04.868 "uuid": "906250d1-331c-484d-8b4c-05e1eab8581e", 00:33:04.868 "strip_size_kb": 0, 00:33:04.868 "state": "online", 00:33:04.868 "raid_level": "raid1", 00:33:04.868 "superblock": true, 00:33:04.868 "num_base_bdevs": 2, 00:33:04.868 "num_base_bdevs_discovered": 1, 00:33:04.868 "num_base_bdevs_operational": 1, 00:33:04.868 "base_bdevs_list": [ 00:33:04.868 { 00:33:04.868 "name": null, 00:33:04.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:04.868 "is_configured": false, 00:33:04.868 "data_offset": 256, 00:33:04.868 "data_size": 7936 00:33:04.868 }, 00:33:04.868 { 00:33:04.868 "name": "BaseBdev2", 00:33:04.868 "uuid": "46c1a120-58a3-5913-9de3-156d5b82020f", 00:33:04.868 "is_configured": true, 00:33:04.868 "data_offset": 256, 00:33:04.869 "data_size": 7936 00:33:04.869 } 00:33:04.869 ] 00:33:04.869 }' 00:33:04.869 04:28:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:04.869 04:28:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:04.869 04:28:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:05.127 04:28:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:05.127 04:28:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:33:05.387 04:28:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:33:05.387 [2024-07-23 04:28:14.121686] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev1_malloc 00:33:05.387 [2024-07-23 04:28:14.121746] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:05.387 [2024-07-23 04:28:14.121780] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000044480 00:33:05.387 [2024-07-23 04:28:14.121795] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:05.387 [2024-07-23 04:28:14.122107] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:05.387 [2024-07-23 04:28:14.122127] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:05.387 [2024-07-23 04:28:14.122203] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:33:05.387 [2024-07-23 04:28:14.122225] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:33:05.387 [2024-07-23 04:28:14.122242] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:33:05.387 BaseBdev1 00:33:05.387 04:28:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # sleep 1 00:33:06.766 04:28:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:06.766 04:28:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:06.766 04:28:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:06.766 04:28:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:06.766 04:28:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:06.766 04:28:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 
00:33:06.766 04:28:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:06.766 04:28:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:06.766 04:28:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:06.766 04:28:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:06.766 04:28:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:06.766 04:28:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:06.766 04:28:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:06.766 "name": "raid_bdev1", 00:33:06.766 "uuid": "906250d1-331c-484d-8b4c-05e1eab8581e", 00:33:06.766 "strip_size_kb": 0, 00:33:06.766 "state": "online", 00:33:06.766 "raid_level": "raid1", 00:33:06.766 "superblock": true, 00:33:06.766 "num_base_bdevs": 2, 00:33:06.766 "num_base_bdevs_discovered": 1, 00:33:06.766 "num_base_bdevs_operational": 1, 00:33:06.766 "base_bdevs_list": [ 00:33:06.766 { 00:33:06.766 "name": null, 00:33:06.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:06.766 "is_configured": false, 00:33:06.766 "data_offset": 256, 00:33:06.766 "data_size": 7936 00:33:06.766 }, 00:33:06.766 { 00:33:06.766 "name": "BaseBdev2", 00:33:06.766 "uuid": "46c1a120-58a3-5913-9de3-156d5b82020f", 00:33:06.766 "is_configured": true, 00:33:06.766 "data_offset": 256, 00:33:06.766 "data_size": 7936 00:33:06.766 } 00:33:06.766 ] 00:33:06.766 }' 00:33:06.766 04:28:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:06.766 04:28:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- 
# set +x 00:33:07.334 04:28:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:07.334 04:28:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:07.334 04:28:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:07.334 04:28:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:07.334 04:28:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:07.334 04:28:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:07.334 04:28:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:07.594 04:28:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:07.594 "name": "raid_bdev1", 00:33:07.594 "uuid": "906250d1-331c-484d-8b4c-05e1eab8581e", 00:33:07.594 "strip_size_kb": 0, 00:33:07.594 "state": "online", 00:33:07.594 "raid_level": "raid1", 00:33:07.594 "superblock": true, 00:33:07.594 "num_base_bdevs": 2, 00:33:07.594 "num_base_bdevs_discovered": 1, 00:33:07.594 "num_base_bdevs_operational": 1, 00:33:07.594 "base_bdevs_list": [ 00:33:07.594 { 00:33:07.594 "name": null, 00:33:07.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:07.594 "is_configured": false, 00:33:07.594 "data_offset": 256, 00:33:07.594 "data_size": 7936 00:33:07.594 }, 00:33:07.594 { 00:33:07.594 "name": "BaseBdev2", 00:33:07.594 "uuid": "46c1a120-58a3-5913-9de3-156d5b82020f", 00:33:07.594 "is_configured": true, 00:33:07.594 "data_offset": 256, 00:33:07.594 "data_size": 7936 00:33:07.594 } 00:33:07.594 ] 00:33:07.594 }' 00:33:07.594 04:28:16 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:07.594 04:28:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:07.594 04:28:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:07.594 04:28:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:07.594 04:28:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:07.594 04:28:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@648 -- # local es=0 00:33:07.594 04:28:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:07.594 04:28:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:33:07.594 04:28:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:07.594 04:28:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:33:07.594 04:28:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:07.594 04:28:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:33:07.594 04:28:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:33:07.594 04:28:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:33:07.594 04:28:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]]
00:33:07.594 04:28:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:33:07.853 [2024-07-23 04:28:16.488132] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:33:07.853 [2024-07-23 04:28:16.488318] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:33:07.853 [2024-07-23 04:28:16.488338] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:33:07.854 request:
00:33:07.854 {
00:33:07.854   "base_bdev": "BaseBdev1",
00:33:07.854   "raid_bdev": "raid_bdev1",
00:33:07.854   "method": "bdev_raid_add_base_bdev",
00:33:07.854   "req_id": 1
00:33:07.854 }
00:33:07.854 Got JSON-RPC error response
00:33:07.854 response:
00:33:07.854 {
00:33:07.854   "code": -22,
00:33:07.854   "message": "Failed to add base bdev to RAID bdev: Invalid argument"
00:33:07.854 }
00:33:07.854 04:28:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@651 -- # es=1
00:33:07.854 04:28:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:33:07.854 04:28:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:33:07.854 04:28:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:33:07.854 04:28:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- #
sleep 1 00:33:08.791 04:28:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:08.791 04:28:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:08.791 04:28:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:08.791 04:28:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:08.791 04:28:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:08.791 04:28:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:08.791 04:28:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:08.791 04:28:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:08.791 04:28:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:08.791 04:28:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:08.791 04:28:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:08.791 04:28:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:09.050 04:28:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:09.050 "name": "raid_bdev1", 00:33:09.050 "uuid": "906250d1-331c-484d-8b4c-05e1eab8581e", 00:33:09.050 "strip_size_kb": 0, 00:33:09.050 "state": "online", 00:33:09.050 "raid_level": "raid1", 00:33:09.050 "superblock": true, 00:33:09.050 "num_base_bdevs": 2, 00:33:09.050 "num_base_bdevs_discovered": 1, 
00:33:09.050 "num_base_bdevs_operational": 1, 00:33:09.050 "base_bdevs_list": [ 00:33:09.050 { 00:33:09.050 "name": null, 00:33:09.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:09.050 "is_configured": false, 00:33:09.050 "data_offset": 256, 00:33:09.050 "data_size": 7936 00:33:09.050 }, 00:33:09.050 { 00:33:09.050 "name": "BaseBdev2", 00:33:09.050 "uuid": "46c1a120-58a3-5913-9de3-156d5b82020f", 00:33:09.050 "is_configured": true, 00:33:09.050 "data_offset": 256, 00:33:09.050 "data_size": 7936 00:33:09.050 } 00:33:09.050 ] 00:33:09.050 }' 00:33:09.050 04:28:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:09.050 04:28:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:09.618 04:28:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:09.618 04:28:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:09.618 04:28:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:09.618 04:28:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:09.618 04:28:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:09.618 04:28:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:09.618 04:28:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:09.878 04:28:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:09.878 "name": "raid_bdev1", 00:33:09.878 "uuid": "906250d1-331c-484d-8b4c-05e1eab8581e", 00:33:09.878 "strip_size_kb": 0, 00:33:09.878 
"state": "online", 00:33:09.878 "raid_level": "raid1", 00:33:09.878 "superblock": true, 00:33:09.878 "num_base_bdevs": 2, 00:33:09.878 "num_base_bdevs_discovered": 1, 00:33:09.878 "num_base_bdevs_operational": 1, 00:33:09.878 "base_bdevs_list": [ 00:33:09.878 { 00:33:09.878 "name": null, 00:33:09.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:09.878 "is_configured": false, 00:33:09.878 "data_offset": 256, 00:33:09.878 "data_size": 7936 00:33:09.878 }, 00:33:09.878 { 00:33:09.878 "name": "BaseBdev2", 00:33:09.878 "uuid": "46c1a120-58a3-5913-9de3-156d5b82020f", 00:33:09.878 "is_configured": true, 00:33:09.878 "data_offset": 256, 00:33:09.878 "data_size": 7936 00:33:09.878 } 00:33:09.878 ] 00:33:09.878 }' 00:33:09.878 04:28:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:09.878 04:28:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:09.878 04:28:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:09.878 04:28:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:09.878 04:28:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@782 -- # killprocess 2819681 00:33:09.878 04:28:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@948 -- # '[' -z 2819681 ']' 00:33:09.878 04:28:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@952 -- # kill -0 2819681 00:33:09.878 04:28:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@953 -- # uname 00:33:10.137 04:28:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:10.137 04:28:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2819681 00:33:10.137 04:28:18 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:33:10.137 04:28:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:33:10.137 04:28:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2819681'
00:33:10.137 killing process with pid 2819681
00:33:10.137 04:28:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@967 -- # kill 2819681
00:33:10.138 Received shutdown signal, test time was about 60.000000 seconds
00:33:10.138
00:33:10.138                                                            Latency(us)
00:33:10.138 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average        min        max
00:33:10.138 ===================================================================================================================
00:33:10.138 Total                       :       0.00       0.00       0.00       0.00       0.00 18446744073709551616.00       0.00
00:33:10.138 [2024-07-23 04:28:18.714281] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:33:10.138 04:28:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # wait 2819681
00:33:10.138 [2024-07-23 04:28:18.714423] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:33:10.138 [2024-07-23 04:28:18.714482] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:33:10.138 [2024-07-23 04:28:18.714498] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000043880 name raid_bdev1, state offline
00:33:10.397 [2024-07-23 04:28:19.176606] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:33:12.304 04:28:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # return 0
00:33:12.304
00:33:12.304 real	0m33.579s
00:33:12.304 user	0m51.583s
00:33:12.304 sys	0m5.266s
00:33:12.304 04:28:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1124 -- #
xtrace_disable 00:33:12.304 04:28:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:12.304 ************************************ 00:33:12.304 END TEST raid_rebuild_test_sb_md_separate 00:33:12.304 ************************************ 00:33:12.304 04:28:20 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:33:12.304 04:28:20 bdev_raid -- bdev/bdev_raid.sh@911 -- # base_malloc_params='-m 32 -i' 00:33:12.304 04:28:20 bdev_raid -- bdev/bdev_raid.sh@912 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:33:12.304 04:28:20 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:33:12.304 04:28:20 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:12.304 04:28:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:12.304 ************************************ 00:33:12.304 START TEST raid_state_function_test_sb_md_interleaved 00:33:12.304 ************************************ 00:33:12.304 04:28:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:33:12.304 04:28:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:33:12.304 04:28:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:33:12.304 04:28:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:33:12.304 04:28:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:33:12.304 04:28:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:33:12.304 04:28:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:33:12.304 04:28:21 bdev_raid.raid_state_function_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:33:12.304 04:28:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:33:12.304 04:28:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:33:12.304 04:28:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:33:12.304 04:28:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:33:12.304 04:28:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:33:12.304 04:28:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:33:12.304 04:28:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:33:12.304 04:28:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:33:12.304 04:28:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # local strip_size 00:33:12.304 04:28:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:33:12.304 04:28:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:33:12.304 04:28:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:33:12.304 04:28:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:33:12.304 04:28:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:33:12.304 04:28:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:33:12.304 04:28:21 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # raid_pid=2825678 00:33:12.304 04:28:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2825678' 00:33:12.304 Process raid pid: 2825678 00:33:12.304 04:28:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:33:12.304 04:28:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@246 -- # waitforlisten 2825678 /var/tmp/spdk-raid.sock 00:33:12.304 04:28:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 2825678 ']' 00:33:12.304 04:28:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:33:12.305 04:28:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:12.305 04:28:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:33:12.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:33:12.305 04:28:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:12.305 04:28:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:12.563 [2024-07-23 04:28:21.142915] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:33:12.563 [2024-07-23 04:28:21.143032] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:12.563 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.563 EAL: Requested device 0000:3d:01.0 cannot be used 00:33:12.822 [2024-07-23 04:28:21.370236] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:13.080 [2024-07-23 04:28:21.659873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:13.339 [2024-07-23 04:28:22.019507] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:13.339 [2024-07-23 04:28:22.019552] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:13.597 04:28:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:13.597 04:28:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:33:13.597 04:28:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:33:13.856 [2024-07-23 04:28:22.408419] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:13.856 [2024-07-23 04:28:22.408478] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:13.856 [2024-07-23 04:28:22.408494] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:13.856 [2024-07-23 04:28:22.408510] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:13.856 04:28:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:33:13.856 04:28:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:13.856 04:28:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:13.856 04:28:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:13.856 04:28:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:13.856 04:28:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:13.856 04:28:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:13.856 04:28:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:13.856 04:28:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:13.856 04:28:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:13.856 04:28:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:13.856 04:28:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq 
-r '.[] | select(.name == "Existed_Raid")' 00:33:14.115 04:28:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:14.115 "name": "Existed_Raid", 00:33:14.115 "uuid": "8c4424bf-ee3a-440f-86e8-42f55725464e", 00:33:14.115 "strip_size_kb": 0, 00:33:14.115 "state": "configuring", 00:33:14.115 "raid_level": "raid1", 00:33:14.115 "superblock": true, 00:33:14.115 "num_base_bdevs": 2, 00:33:14.115 "num_base_bdevs_discovered": 0, 00:33:14.115 "num_base_bdevs_operational": 2, 00:33:14.115 "base_bdevs_list": [ 00:33:14.115 { 00:33:14.115 "name": "BaseBdev1", 00:33:14.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:14.115 "is_configured": false, 00:33:14.115 "data_offset": 0, 00:33:14.115 "data_size": 0 00:33:14.115 }, 00:33:14.115 { 00:33:14.115 "name": "BaseBdev2", 00:33:14.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:14.115 "is_configured": false, 00:33:14.115 "data_offset": 0, 00:33:14.115 "data_size": 0 00:33:14.115 } 00:33:14.115 ] 00:33:14.115 }' 00:33:14.115 04:28:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:14.115 04:28:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:14.684 04:28:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:33:14.684 [2024-07-23 04:28:23.427091] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:14.684 [2024-07-23 04:28:23.427131] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:33:14.684 04:28:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 
-b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:33:14.944 [2024-07-23 04:28:23.639707] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:14.944 [2024-07-23 04:28:23.639750] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:14.944 [2024-07-23 04:28:23.639764] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:14.944 [2024-07-23 04:28:23.639781] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:14.944 04:28:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:33:15.254 [2024-07-23 04:28:23.921771] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:15.254 BaseBdev1 00:33:15.254 04:28:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:33:15.254 04:28:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:33:15.254 04:28:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:33:15.254 04:28:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local i 00:33:15.254 04:28:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:33:15.254 04:28:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:33:15.254 04:28:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:15.513 04:28:24 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:33:15.772 [ 00:33:15.772 { 00:33:15.772 "name": "BaseBdev1", 00:33:15.772 "aliases": [ 00:33:15.772 "5ecf89e1-5a26-4d1c-84c1-908c78476d11" 00:33:15.772 ], 00:33:15.772 "product_name": "Malloc disk", 00:33:15.772 "block_size": 4128, 00:33:15.772 "num_blocks": 8192, 00:33:15.772 "uuid": "5ecf89e1-5a26-4d1c-84c1-908c78476d11", 00:33:15.772 "md_size": 32, 00:33:15.772 "md_interleave": true, 00:33:15.772 "dif_type": 0, 00:33:15.772 "assigned_rate_limits": { 00:33:15.772 "rw_ios_per_sec": 0, 00:33:15.772 "rw_mbytes_per_sec": 0, 00:33:15.772 "r_mbytes_per_sec": 0, 00:33:15.772 "w_mbytes_per_sec": 0 00:33:15.772 }, 00:33:15.772 "claimed": true, 00:33:15.772 "claim_type": "exclusive_write", 00:33:15.772 "zoned": false, 00:33:15.772 "supported_io_types": { 00:33:15.772 "read": true, 00:33:15.772 "write": true, 00:33:15.772 "unmap": true, 00:33:15.772 "flush": true, 00:33:15.772 "reset": true, 00:33:15.772 "nvme_admin": false, 00:33:15.772 "nvme_io": false, 00:33:15.772 "nvme_io_md": false, 00:33:15.772 "write_zeroes": true, 00:33:15.772 "zcopy": true, 00:33:15.772 "get_zone_info": false, 00:33:15.772 "zone_management": false, 00:33:15.772 "zone_append": false, 00:33:15.772 "compare": false, 00:33:15.772 "compare_and_write": false, 00:33:15.772 "abort": true, 00:33:15.772 "seek_hole": false, 00:33:15.772 "seek_data": false, 00:33:15.772 "copy": true, 00:33:15.772 "nvme_iov_md": false 00:33:15.772 }, 00:33:15.772 "memory_domains": [ 00:33:15.772 { 00:33:15.772 "dma_device_id": "system", 00:33:15.772 "dma_device_type": 1 00:33:15.772 }, 00:33:15.772 { 00:33:15.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:15.772 "dma_device_type": 2 00:33:15.772 } 00:33:15.772 ], 00:33:15.772 "driver_specific": {} 00:33:15.772 } 00:33:15.772 ] 00:33:15.772 04:28:24 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # return 0 00:33:15.772 04:28:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:33:15.772 04:28:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:15.772 04:28:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:15.772 04:28:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:15.772 04:28:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:15.772 04:28:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:15.772 04:28:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:15.772 04:28:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:15.772 04:28:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:15.772 04:28:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:15.772 04:28:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:15.772 04:28:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:16.052 04:28:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:16.052 "name": "Existed_Raid", 00:33:16.052 "uuid": 
"cad9e9ee-5108-4eef-bd73-880824dff7b4", 00:33:16.052 "strip_size_kb": 0, 00:33:16.052 "state": "configuring", 00:33:16.052 "raid_level": "raid1", 00:33:16.052 "superblock": true, 00:33:16.052 "num_base_bdevs": 2, 00:33:16.052 "num_base_bdevs_discovered": 1, 00:33:16.052 "num_base_bdevs_operational": 2, 00:33:16.052 "base_bdevs_list": [ 00:33:16.052 { 00:33:16.052 "name": "BaseBdev1", 00:33:16.052 "uuid": "5ecf89e1-5a26-4d1c-84c1-908c78476d11", 00:33:16.052 "is_configured": true, 00:33:16.052 "data_offset": 256, 00:33:16.052 "data_size": 7936 00:33:16.052 }, 00:33:16.052 { 00:33:16.052 "name": "BaseBdev2", 00:33:16.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:16.052 "is_configured": false, 00:33:16.052 "data_offset": 0, 00:33:16.052 "data_size": 0 00:33:16.052 } 00:33:16.052 ] 00:33:16.052 }' 00:33:16.052 04:28:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:16.052 04:28:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:16.666 04:28:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:33:16.666 [2024-07-23 04:28:25.390208] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:16.666 [2024-07-23 04:28:25.390262] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:33:16.666 04:28:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:33:16.926 [2024-07-23 04:28:25.614873] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:16.926 [2024-07-23 04:28:25.617177] 
bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:16.926 [2024-07-23 04:28:25.617220] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:16.926 04:28:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:33:16.926 04:28:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:33:16.926 04:28:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:33:16.926 04:28:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:16.926 04:28:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:16.926 04:28:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:16.926 04:28:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:16.926 04:28:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:16.926 04:28:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:16.926 04:28:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:16.926 04:28:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:16.926 04:28:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:16.926 04:28:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:33:16.926 04:28:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:17.185 04:28:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:17.185 "name": "Existed_Raid", 00:33:17.185 "uuid": "324c530c-0ffb-4152-a869-8ec94a6feff4", 00:33:17.185 "strip_size_kb": 0, 00:33:17.185 "state": "configuring", 00:33:17.185 "raid_level": "raid1", 00:33:17.185 "superblock": true, 00:33:17.185 "num_base_bdevs": 2, 00:33:17.185 "num_base_bdevs_discovered": 1, 00:33:17.185 "num_base_bdevs_operational": 2, 00:33:17.185 "base_bdevs_list": [ 00:33:17.185 { 00:33:17.185 "name": "BaseBdev1", 00:33:17.185 "uuid": "5ecf89e1-5a26-4d1c-84c1-908c78476d11", 00:33:17.185 "is_configured": true, 00:33:17.185 "data_offset": 256, 00:33:17.185 "data_size": 7936 00:33:17.185 }, 00:33:17.185 { 00:33:17.185 "name": "BaseBdev2", 00:33:17.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:17.185 "is_configured": false, 00:33:17.185 "data_offset": 0, 00:33:17.185 "data_size": 0 00:33:17.185 } 00:33:17.185 ] 00:33:17.185 }' 00:33:17.185 04:28:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:17.185 04:28:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:17.753 04:28:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:33:18.012 [2024-07-23 04:28:26.620609] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:18.012 [2024-07-23 04:28:26.620841] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:33:18.012 [2024-07-23 04:28:26.620860] bdev_raid.c:1721:raid_bdev_configure_cont: 
*DEBUG*: blockcnt 7936, blocklen 4128 00:33:18.012 [2024-07-23 04:28:26.620956] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:33:18.012 [2024-07-23 04:28:26.621093] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:33:18.012 [2024-07-23 04:28:26.621111] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:33:18.012 [2024-07-23 04:28:26.621216] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:18.012 BaseBdev2 00:33:18.012 04:28:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:33:18.012 04:28:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:33:18.012 04:28:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:33:18.012 04:28:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local i 00:33:18.012 04:28:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:33:18.012 04:28:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:33:18.012 04:28:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:18.271 04:28:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:33:18.531 [ 00:33:18.531 { 00:33:18.531 "name": "BaseBdev2", 00:33:18.531 "aliases": [ 00:33:18.531 "2cebd398-3ae8-4c49-83bf-b88ca9ceecc6" 00:33:18.531 ], 00:33:18.531 
"product_name": "Malloc disk", 00:33:18.531 "block_size": 4128, 00:33:18.531 "num_blocks": 8192, 00:33:18.531 "uuid": "2cebd398-3ae8-4c49-83bf-b88ca9ceecc6", 00:33:18.531 "md_size": 32, 00:33:18.531 "md_interleave": true, 00:33:18.531 "dif_type": 0, 00:33:18.531 "assigned_rate_limits": { 00:33:18.531 "rw_ios_per_sec": 0, 00:33:18.531 "rw_mbytes_per_sec": 0, 00:33:18.531 "r_mbytes_per_sec": 0, 00:33:18.531 "w_mbytes_per_sec": 0 00:33:18.531 }, 00:33:18.531 "claimed": true, 00:33:18.531 "claim_type": "exclusive_write", 00:33:18.531 "zoned": false, 00:33:18.531 "supported_io_types": { 00:33:18.531 "read": true, 00:33:18.531 "write": true, 00:33:18.531 "unmap": true, 00:33:18.531 "flush": true, 00:33:18.531 "reset": true, 00:33:18.531 "nvme_admin": false, 00:33:18.531 "nvme_io": false, 00:33:18.531 "nvme_io_md": false, 00:33:18.531 "write_zeroes": true, 00:33:18.531 "zcopy": true, 00:33:18.531 "get_zone_info": false, 00:33:18.531 "zone_management": false, 00:33:18.531 "zone_append": false, 00:33:18.531 "compare": false, 00:33:18.531 "compare_and_write": false, 00:33:18.531 "abort": true, 00:33:18.531 "seek_hole": false, 00:33:18.531 "seek_data": false, 00:33:18.531 "copy": true, 00:33:18.531 "nvme_iov_md": false 00:33:18.531 }, 00:33:18.531 "memory_domains": [ 00:33:18.531 { 00:33:18.531 "dma_device_id": "system", 00:33:18.531 "dma_device_type": 1 00:33:18.531 }, 00:33:18.531 { 00:33:18.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:18.531 "dma_device_type": 2 00:33:18.531 } 00:33:18.531 ], 00:33:18.531 "driver_specific": {} 00:33:18.531 } 00:33:18.531 ] 00:33:18.531 04:28:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # return 0 00:33:18.531 04:28:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:33:18.531 04:28:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:33:18.531 04:28:27 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:33:18.531 04:28:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:18.531 04:28:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:18.531 04:28:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:18.531 04:28:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:18.531 04:28:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:18.531 04:28:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:18.531 04:28:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:18.531 04:28:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:18.531 04:28:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:18.531 04:28:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:18.531 04:28:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:18.791 04:28:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:18.791 "name": "Existed_Raid", 00:33:18.791 "uuid": "324c530c-0ffb-4152-a869-8ec94a6feff4", 00:33:18.791 "strip_size_kb": 0, 00:33:18.791 "state": "online", 00:33:18.791 "raid_level": "raid1", 
00:33:18.791 "superblock": true, 00:33:18.791 "num_base_bdevs": 2, 00:33:18.791 "num_base_bdevs_discovered": 2, 00:33:18.791 "num_base_bdevs_operational": 2, 00:33:18.791 "base_bdevs_list": [ 00:33:18.791 { 00:33:18.791 "name": "BaseBdev1", 00:33:18.791 "uuid": "5ecf89e1-5a26-4d1c-84c1-908c78476d11", 00:33:18.791 "is_configured": true, 00:33:18.791 "data_offset": 256, 00:33:18.791 "data_size": 7936 00:33:18.791 }, 00:33:18.791 { 00:33:18.791 "name": "BaseBdev2", 00:33:18.791 "uuid": "2cebd398-3ae8-4c49-83bf-b88ca9ceecc6", 00:33:18.791 "is_configured": true, 00:33:18.791 "data_offset": 256, 00:33:18.791 "data_size": 7936 00:33:18.791 } 00:33:18.791 ] 00:33:18.791 }' 00:33:18.791 04:28:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:18.791 04:28:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:19.359 04:28:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:33:19.359 04:28:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:33:19.359 04:28:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:33:19.359 04:28:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:33:19.359 04:28:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:33:19.359 04:28:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:33:19.359 04:28:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:33:19.359 04:28:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@200 -- # jq '.[]' 00:33:19.359 [2024-07-23 04:28:28.036867] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:19.359 04:28:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:33:19.359 "name": "Existed_Raid", 00:33:19.359 "aliases": [ 00:33:19.359 "324c530c-0ffb-4152-a869-8ec94a6feff4" 00:33:19.359 ], 00:33:19.359 "product_name": "Raid Volume", 00:33:19.359 "block_size": 4128, 00:33:19.359 "num_blocks": 7936, 00:33:19.359 "uuid": "324c530c-0ffb-4152-a869-8ec94a6feff4", 00:33:19.359 "md_size": 32, 00:33:19.359 "md_interleave": true, 00:33:19.359 "dif_type": 0, 00:33:19.359 "assigned_rate_limits": { 00:33:19.359 "rw_ios_per_sec": 0, 00:33:19.359 "rw_mbytes_per_sec": 0, 00:33:19.359 "r_mbytes_per_sec": 0, 00:33:19.359 "w_mbytes_per_sec": 0 00:33:19.359 }, 00:33:19.359 "claimed": false, 00:33:19.359 "zoned": false, 00:33:19.359 "supported_io_types": { 00:33:19.359 "read": true, 00:33:19.359 "write": true, 00:33:19.359 "unmap": false, 00:33:19.359 "flush": false, 00:33:19.359 "reset": true, 00:33:19.359 "nvme_admin": false, 00:33:19.359 "nvme_io": false, 00:33:19.359 "nvme_io_md": false, 00:33:19.359 "write_zeroes": true, 00:33:19.359 "zcopy": false, 00:33:19.359 "get_zone_info": false, 00:33:19.359 "zone_management": false, 00:33:19.359 "zone_append": false, 00:33:19.359 "compare": false, 00:33:19.359 "compare_and_write": false, 00:33:19.359 "abort": false, 00:33:19.359 "seek_hole": false, 00:33:19.359 "seek_data": false, 00:33:19.359 "copy": false, 00:33:19.359 "nvme_iov_md": false 00:33:19.359 }, 00:33:19.359 "memory_domains": [ 00:33:19.359 { 00:33:19.359 "dma_device_id": "system", 00:33:19.359 "dma_device_type": 1 00:33:19.359 }, 00:33:19.359 { 00:33:19.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:19.359 "dma_device_type": 2 00:33:19.359 }, 00:33:19.359 { 00:33:19.359 "dma_device_id": "system", 00:33:19.359 "dma_device_type": 1 00:33:19.359 }, 
00:33:19.359 { 00:33:19.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:19.359 "dma_device_type": 2 00:33:19.359 } 00:33:19.359 ], 00:33:19.359 "driver_specific": { 00:33:19.359 "raid": { 00:33:19.359 "uuid": "324c530c-0ffb-4152-a869-8ec94a6feff4", 00:33:19.359 "strip_size_kb": 0, 00:33:19.359 "state": "online", 00:33:19.359 "raid_level": "raid1", 00:33:19.359 "superblock": true, 00:33:19.359 "num_base_bdevs": 2, 00:33:19.359 "num_base_bdevs_discovered": 2, 00:33:19.359 "num_base_bdevs_operational": 2, 00:33:19.359 "base_bdevs_list": [ 00:33:19.359 { 00:33:19.359 "name": "BaseBdev1", 00:33:19.359 "uuid": "5ecf89e1-5a26-4d1c-84c1-908c78476d11", 00:33:19.359 "is_configured": true, 00:33:19.359 "data_offset": 256, 00:33:19.359 "data_size": 7936 00:33:19.359 }, 00:33:19.359 { 00:33:19.359 "name": "BaseBdev2", 00:33:19.359 "uuid": "2cebd398-3ae8-4c49-83bf-b88ca9ceecc6", 00:33:19.359 "is_configured": true, 00:33:19.359 "data_offset": 256, 00:33:19.359 "data_size": 7936 00:33:19.359 } 00:33:19.359 ] 00:33:19.359 } 00:33:19.359 } 00:33:19.359 }' 00:33:19.359 04:28:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:19.359 04:28:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:33:19.359 BaseBdev2' 00:33:19.359 04:28:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:19.359 04:28:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:33:19.359 04:28:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:19.619 04:28:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 
00:33:19.619 "name": "BaseBdev1", 00:33:19.619 "aliases": [ 00:33:19.619 "5ecf89e1-5a26-4d1c-84c1-908c78476d11" 00:33:19.619 ], 00:33:19.619 "product_name": "Malloc disk", 00:33:19.619 "block_size": 4128, 00:33:19.619 "num_blocks": 8192, 00:33:19.619 "uuid": "5ecf89e1-5a26-4d1c-84c1-908c78476d11", 00:33:19.619 "md_size": 32, 00:33:19.619 "md_interleave": true, 00:33:19.619 "dif_type": 0, 00:33:19.619 "assigned_rate_limits": { 00:33:19.619 "rw_ios_per_sec": 0, 00:33:19.619 "rw_mbytes_per_sec": 0, 00:33:19.619 "r_mbytes_per_sec": 0, 00:33:19.619 "w_mbytes_per_sec": 0 00:33:19.619 }, 00:33:19.619 "claimed": true, 00:33:19.619 "claim_type": "exclusive_write", 00:33:19.619 "zoned": false, 00:33:19.619 "supported_io_types": { 00:33:19.619 "read": true, 00:33:19.619 "write": true, 00:33:19.619 "unmap": true, 00:33:19.619 "flush": true, 00:33:19.619 "reset": true, 00:33:19.619 "nvme_admin": false, 00:33:19.619 "nvme_io": false, 00:33:19.619 "nvme_io_md": false, 00:33:19.619 "write_zeroes": true, 00:33:19.619 "zcopy": true, 00:33:19.619 "get_zone_info": false, 00:33:19.619 "zone_management": false, 00:33:19.619 "zone_append": false, 00:33:19.619 "compare": false, 00:33:19.619 "compare_and_write": false, 00:33:19.619 "abort": true, 00:33:19.619 "seek_hole": false, 00:33:19.619 "seek_data": false, 00:33:19.619 "copy": true, 00:33:19.619 "nvme_iov_md": false 00:33:19.619 }, 00:33:19.619 "memory_domains": [ 00:33:19.619 { 00:33:19.619 "dma_device_id": "system", 00:33:19.619 "dma_device_type": 1 00:33:19.619 }, 00:33:19.619 { 00:33:19.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:19.619 "dma_device_type": 2 00:33:19.619 } 00:33:19.619 ], 00:33:19.619 "driver_specific": {} 00:33:19.619 }' 00:33:19.619 04:28:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:19.619 04:28:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:19.877 04:28:28 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:33:19.877 04:28:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:19.877 04:28:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:19.877 04:28:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:33:19.877 04:28:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:19.877 04:28:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:19.877 04:28:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:33:19.877 04:28:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:19.877 04:28:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:20.134 04:28:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:33:20.134 04:28:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:20.134 04:28:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:20.134 04:28:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:33:20.134 04:28:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:20.134 "name": "BaseBdev2", 00:33:20.134 "aliases": [ 00:33:20.134 "2cebd398-3ae8-4c49-83bf-b88ca9ceecc6" 00:33:20.134 ], 00:33:20.134 "product_name": "Malloc disk", 00:33:20.134 "block_size": 4128, 00:33:20.134 
"num_blocks": 8192, 00:33:20.134 "uuid": "2cebd398-3ae8-4c49-83bf-b88ca9ceecc6", 00:33:20.134 "md_size": 32, 00:33:20.134 "md_interleave": true, 00:33:20.134 "dif_type": 0, 00:33:20.134 "assigned_rate_limits": { 00:33:20.134 "rw_ios_per_sec": 0, 00:33:20.134 "rw_mbytes_per_sec": 0, 00:33:20.134 "r_mbytes_per_sec": 0, 00:33:20.134 "w_mbytes_per_sec": 0 00:33:20.134 }, 00:33:20.134 "claimed": true, 00:33:20.134 "claim_type": "exclusive_write", 00:33:20.134 "zoned": false, 00:33:20.134 "supported_io_types": { 00:33:20.134 "read": true, 00:33:20.134 "write": true, 00:33:20.134 "unmap": true, 00:33:20.134 "flush": true, 00:33:20.134 "reset": true, 00:33:20.134 "nvme_admin": false, 00:33:20.134 "nvme_io": false, 00:33:20.134 "nvme_io_md": false, 00:33:20.134 "write_zeroes": true, 00:33:20.134 "zcopy": true, 00:33:20.134 "get_zone_info": false, 00:33:20.134 "zone_management": false, 00:33:20.134 "zone_append": false, 00:33:20.135 "compare": false, 00:33:20.135 "compare_and_write": false, 00:33:20.135 "abort": true, 00:33:20.135 "seek_hole": false, 00:33:20.135 "seek_data": false, 00:33:20.135 "copy": true, 00:33:20.135 "nvme_iov_md": false 00:33:20.135 }, 00:33:20.135 "memory_domains": [ 00:33:20.135 { 00:33:20.135 "dma_device_id": "system", 00:33:20.135 "dma_device_type": 1 00:33:20.135 }, 00:33:20.135 { 00:33:20.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:20.135 "dma_device_type": 2 00:33:20.135 } 00:33:20.135 ], 00:33:20.135 "driver_specific": {} 00:33:20.135 }' 00:33:20.135 04:28:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:20.393 04:28:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:20.393 04:28:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:33:20.393 04:28:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:20.393 04:28:29 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:20.393 04:28:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:33:20.393 04:28:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:20.393 04:28:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:20.393 04:28:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:33:20.393 04:28:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:20.651 04:28:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:20.651 04:28:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:33:20.651 04:28:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:33:20.910 [2024-07-23 04:28:29.448447] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:20.910 04:28:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@275 -- # local expected_state 00:33:20.910 04:28:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:33:20.910 04:28:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:33:20.910 04:28:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:33:20.910 04:28:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:33:20.910 04:28:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@281 -- # 
verify_raid_bdev_state Existed_Raid online raid1 0 1 00:33:20.910 04:28:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:20.910 04:28:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:20.910 04:28:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:20.910 04:28:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:20.910 04:28:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:20.910 04:28:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:20.910 04:28:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:20.910 04:28:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:20.910 04:28:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:20.910 04:28:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:20.910 04:28:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:21.169 04:28:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:21.169 "name": "Existed_Raid", 00:33:21.169 "uuid": "324c530c-0ffb-4152-a869-8ec94a6feff4", 00:33:21.169 "strip_size_kb": 0, 00:33:21.169 "state": "online", 00:33:21.169 "raid_level": "raid1", 00:33:21.169 "superblock": true, 00:33:21.169 "num_base_bdevs": 2, 00:33:21.169 
"num_base_bdevs_discovered": 1, 00:33:21.169 "num_base_bdevs_operational": 1, 00:33:21.169 "base_bdevs_list": [ 00:33:21.169 { 00:33:21.169 "name": null, 00:33:21.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:21.169 "is_configured": false, 00:33:21.169 "data_offset": 256, 00:33:21.169 "data_size": 7936 00:33:21.169 }, 00:33:21.169 { 00:33:21.169 "name": "BaseBdev2", 00:33:21.169 "uuid": "2cebd398-3ae8-4c49-83bf-b88ca9ceecc6", 00:33:21.169 "is_configured": true, 00:33:21.169 "data_offset": 256, 00:33:21.169 "data_size": 7936 00:33:21.169 } 00:33:21.169 ] 00:33:21.169 }' 00:33:21.169 04:28:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:21.169 04:28:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:21.736 04:28:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:33:21.736 04:28:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:33:21.736 04:28:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:21.736 04:28:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:33:21.994 04:28:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:33:21.994 04:28:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:21.994 04:28:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:33:21.994 [2024-07-23 04:28:30.752496] 
bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:21.994 [2024-07-23 04:28:30.752621] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:22.253 [2024-07-23 04:28:30.883481] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:22.253 [2024-07-23 04:28:30.883533] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:22.253 [2024-07-23 04:28:30.883552] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:33:22.253 04:28:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:33:22.253 04:28:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:33:22.253 04:28:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:22.253 04:28:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:33:22.512 04:28:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:33:22.512 04:28:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:33:22.512 04:28:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:33:22.512 04:28:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@341 -- # killprocess 2825678 00:33:22.512 04:28:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 2825678 ']' 00:33:22.512 04:28:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 2825678 00:33:22.512 04:28:31 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:33:22.512 04:28:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:22.512 04:28:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2825678 00:33:22.512 04:28:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:22.512 04:28:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:22.512 04:28:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2825678' 00:33:22.512 killing process with pid 2825678 00:33:22.512 04:28:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@967 -- # kill 2825678 00:33:22.512 [2024-07-23 04:28:31.194792] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:22.512 04:28:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # wait 2825678 00:33:22.512 [2024-07-23 04:28:31.218534] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:24.416 04:28:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@343 -- # return 0 00:33:24.416 00:33:24.416 real 0m11.880s 00:33:24.416 user 0m19.342s 00:33:24.416 sys 0m2.070s 00:33:24.416 04:28:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:24.416 04:28:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:24.416 ************************************ 00:33:24.416 END TEST raid_state_function_test_sb_md_interleaved 00:33:24.417 ************************************ 00:33:24.417 04:28:32 bdev_raid -- 
common/autotest_common.sh@1142 -- # return 0 00:33:24.417 04:28:32 bdev_raid -- bdev/bdev_raid.sh@913 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:33:24.417 04:28:32 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:33:24.417 04:28:32 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:24.417 04:28:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:24.417 ************************************ 00:33:24.417 START TEST raid_superblock_test_md_interleaved 00:33:24.417 ************************************ 00:33:24.417 04:28:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:33:24.417 04:28:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:33:24.417 04:28:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:33:24.417 04:28:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:33:24.417 04:28:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:33:24.417 04:28:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:33:24.417 04:28:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:33:24.417 04:28:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:33:24.417 04:28:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:33:24.417 04:28:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:33:24.417 04:28:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local strip_size 00:33:24.417 04:28:33 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:33:24.417 04:28:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:33:24.417 04:28:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:33:24.417 04:28:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:33:24.417 04:28:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:33:24.417 04:28:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # raid_pid=2827810 00:33:24.417 04:28:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # waitforlisten 2827810 /var/tmp/spdk-raid.sock 00:33:24.417 04:28:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:33:24.417 04:28:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 2827810 ']' 00:33:24.417 04:28:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:33:24.417 04:28:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:24.417 04:28:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:33:24.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:33:24.417 04:28:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:24.417 04:28:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:24.417 [2024-07-23 04:28:33.114661] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:33:24.417 [2024-07-23 04:28:33.114777] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2827810 ] 00:33:24.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.676 EAL: Requested device 0000:3d:01.0 cannot be used 00:33:24.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.676 EAL: Requested device 0000:3d:01.1 cannot be used 00:33:24.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.676 EAL: Requested device 0000:3d:01.2 cannot be used 00:33:24.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.676 EAL: Requested device 0000:3d:01.3 cannot be used 00:33:24.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.676 EAL: Requested device 0000:3d:01.4 cannot be used 00:33:24.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.676 EAL: Requested device 0000:3d:01.5 cannot be used 00:33:24.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.676 EAL: Requested device 0000:3d:01.6 cannot be used 00:33:24.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.676 EAL: Requested device 0000:3d:01.7 cannot be used 00:33:24.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.676 EAL: Requested device 0000:3d:02.0 cannot be used 00:33:24.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:33:24.676 EAL: Requested device 0000:3d:02.1 cannot be used 00:33:24.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.676 EAL: Requested device 0000:3d:02.2 cannot be used 00:33:24.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.676 EAL: Requested device 0000:3d:02.3 cannot be used 00:33:24.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.676 EAL: Requested device 0000:3d:02.4 cannot be used 00:33:24.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.676 EAL: Requested device 0000:3d:02.5 cannot be used 00:33:24.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.676 EAL: Requested device 0000:3d:02.6 cannot be used 00:33:24.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.676 EAL: Requested device 0000:3d:02.7 cannot be used 00:33:24.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.676 EAL: Requested device 0000:3f:01.0 cannot be used 00:33:24.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.676 EAL: Requested device 0000:3f:01.1 cannot be used 00:33:24.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.676 EAL: Requested device 0000:3f:01.2 cannot be used 00:33:24.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.676 EAL: Requested device 0000:3f:01.3 cannot be used 00:33:24.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.676 EAL: Requested device 0000:3f:01.4 cannot be used 00:33:24.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.676 EAL: Requested device 0000:3f:01.5 cannot be used 00:33:24.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.676 EAL: Requested device 0000:3f:01.6 cannot be used 00:33:24.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.676 EAL: 
Requested device 0000:3f:01.7 cannot be used 00:33:24.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.676 EAL: Requested device 0000:3f:02.0 cannot be used 00:33:24.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.676 EAL: Requested device 0000:3f:02.1 cannot be used 00:33:24.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.676 EAL: Requested device 0000:3f:02.2 cannot be used 00:33:24.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.676 EAL: Requested device 0000:3f:02.3 cannot be used 00:33:24.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.676 EAL: Requested device 0000:3f:02.4 cannot be used 00:33:24.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.676 EAL: Requested device 0000:3f:02.5 cannot be used 00:33:24.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.676 EAL: Requested device 0000:3f:02.6 cannot be used 00:33:24.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.676 EAL: Requested device 0000:3f:02.7 cannot be used 00:33:24.676 [2024-07-23 04:28:33.341054] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:24.935 [2024-07-23 04:28:33.634549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:25.504 [2024-07-23 04:28:33.991181] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:25.504 [2024-07-23 04:28:33.991213] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:25.504 04:28:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:25.504 04:28:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:33:25.504 04:28:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:33:25.504 04:28:34 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:33:25.504 04:28:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:33:25.504 04:28:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:33:25.504 04:28:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:33:25.504 04:28:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:25.504 04:28:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:33:25.504 04:28:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:25.504 04:28:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:33:25.763 malloc1 00:33:25.764 04:28:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:26.023 [2024-07-23 04:28:34.660019] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:26.023 [2024-07-23 04:28:34.660076] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:26.023 [2024-07-23 04:28:34.660106] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:33:26.023 [2024-07-23 04:28:34.660123] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:26.023 [2024-07-23 04:28:34.662516] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:33:26.023 [2024-07-23 04:28:34.662550] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:26.023 pt1 00:33:26.023 04:28:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:33:26.023 04:28:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:33:26.023 04:28:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:33:26.023 04:28:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:33:26.023 04:28:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:33:26.023 04:28:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:26.023 04:28:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:33:26.023 04:28:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:26.023 04:28:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:33:26.282 malloc2 00:33:26.282 04:28:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:26.541 [2024-07-23 04:28:35.110885] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:26.541 [2024-07-23 04:28:35.110936] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:26.541 [2024-07-23 04:28:35.110962] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:33:26.541 [2024-07-23 04:28:35.110978] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:26.541 [2024-07-23 04:28:35.113360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:26.541 [2024-07-23 04:28:35.113399] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:26.541 pt2 00:33:26.541 04:28:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:33:26.541 04:28:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:33:26.541 04:28:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:33:26.801 [2024-07-23 04:28:35.339508] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:26.801 [2024-07-23 04:28:35.341806] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:26.801 [2024-07-23 04:28:35.342055] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000040880 00:33:26.801 [2024-07-23 04:28:35.342075] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:33:26.801 [2024-07-23 04:28:35.342183] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:33:26.801 [2024-07-23 04:28:35.342310] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000040880 00:33:26.801 [2024-07-23 04:28:35.342327] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000040880 00:33:26.801 [2024-07-23 04:28:35.342417] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:26.801 04:28:35 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:26.801 04:28:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:26.801 04:28:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:26.801 04:28:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:26.801 04:28:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:26.801 04:28:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:26.801 04:28:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:26.801 04:28:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:26.801 04:28:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:26.801 04:28:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:26.801 04:28:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:26.801 04:28:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:27.061 04:28:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:27.061 "name": "raid_bdev1", 00:33:27.061 "uuid": "68c59678-0068-4e36-a845-a3a3b2415f43", 00:33:27.061 "strip_size_kb": 0, 00:33:27.061 "state": "online", 00:33:27.061 "raid_level": "raid1", 00:33:27.061 "superblock": true, 00:33:27.061 "num_base_bdevs": 2, 00:33:27.061 
"num_base_bdevs_discovered": 2, 00:33:27.061 "num_base_bdevs_operational": 2, 00:33:27.061 "base_bdevs_list": [ 00:33:27.061 { 00:33:27.061 "name": "pt1", 00:33:27.061 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:27.061 "is_configured": true, 00:33:27.061 "data_offset": 256, 00:33:27.061 "data_size": 7936 00:33:27.061 }, 00:33:27.061 { 00:33:27.061 "name": "pt2", 00:33:27.061 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:27.061 "is_configured": true, 00:33:27.061 "data_offset": 256, 00:33:27.061 "data_size": 7936 00:33:27.061 } 00:33:27.061 ] 00:33:27.061 }' 00:33:27.061 04:28:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:27.061 04:28:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:27.629 04:28:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:33:27.629 04:28:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:33:27.629 04:28:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:33:27.629 04:28:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:33:27.630 04:28:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:33:27.630 04:28:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:33:27.630 04:28:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:33:27.630 04:28:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:27.630 [2024-07-23 04:28:36.306467] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:27.630 
04:28:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:33:27.630 "name": "raid_bdev1", 00:33:27.630 "aliases": [ 00:33:27.630 "68c59678-0068-4e36-a845-a3a3b2415f43" 00:33:27.630 ], 00:33:27.630 "product_name": "Raid Volume", 00:33:27.630 "block_size": 4128, 00:33:27.630 "num_blocks": 7936, 00:33:27.630 "uuid": "68c59678-0068-4e36-a845-a3a3b2415f43", 00:33:27.630 "md_size": 32, 00:33:27.630 "md_interleave": true, 00:33:27.630 "dif_type": 0, 00:33:27.630 "assigned_rate_limits": { 00:33:27.630 "rw_ios_per_sec": 0, 00:33:27.630 "rw_mbytes_per_sec": 0, 00:33:27.630 "r_mbytes_per_sec": 0, 00:33:27.630 "w_mbytes_per_sec": 0 00:33:27.630 }, 00:33:27.630 "claimed": false, 00:33:27.630 "zoned": false, 00:33:27.630 "supported_io_types": { 00:33:27.630 "read": true, 00:33:27.630 "write": true, 00:33:27.630 "unmap": false, 00:33:27.630 "flush": false, 00:33:27.630 "reset": true, 00:33:27.630 "nvme_admin": false, 00:33:27.630 "nvme_io": false, 00:33:27.630 "nvme_io_md": false, 00:33:27.630 "write_zeroes": true, 00:33:27.630 "zcopy": false, 00:33:27.630 "get_zone_info": false, 00:33:27.630 "zone_management": false, 00:33:27.630 "zone_append": false, 00:33:27.630 "compare": false, 00:33:27.630 "compare_and_write": false, 00:33:27.630 "abort": false, 00:33:27.630 "seek_hole": false, 00:33:27.630 "seek_data": false, 00:33:27.630 "copy": false, 00:33:27.630 "nvme_iov_md": false 00:33:27.630 }, 00:33:27.630 "memory_domains": [ 00:33:27.630 { 00:33:27.630 "dma_device_id": "system", 00:33:27.630 "dma_device_type": 1 00:33:27.630 }, 00:33:27.630 { 00:33:27.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:27.630 "dma_device_type": 2 00:33:27.630 }, 00:33:27.630 { 00:33:27.630 "dma_device_id": "system", 00:33:27.630 "dma_device_type": 1 00:33:27.630 }, 00:33:27.630 { 00:33:27.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:27.630 "dma_device_type": 2 00:33:27.630 } 00:33:27.630 ], 00:33:27.630 "driver_specific": { 00:33:27.630 
"raid": { 00:33:27.630 "uuid": "68c59678-0068-4e36-a845-a3a3b2415f43", 00:33:27.630 "strip_size_kb": 0, 00:33:27.630 "state": "online", 00:33:27.630 "raid_level": "raid1", 00:33:27.630 "superblock": true, 00:33:27.630 "num_base_bdevs": 2, 00:33:27.630 "num_base_bdevs_discovered": 2, 00:33:27.630 "num_base_bdevs_operational": 2, 00:33:27.630 "base_bdevs_list": [ 00:33:27.630 { 00:33:27.630 "name": "pt1", 00:33:27.630 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:27.630 "is_configured": true, 00:33:27.630 "data_offset": 256, 00:33:27.630 "data_size": 7936 00:33:27.630 }, 00:33:27.630 { 00:33:27.630 "name": "pt2", 00:33:27.630 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:27.630 "is_configured": true, 00:33:27.630 "data_offset": 256, 00:33:27.630 "data_size": 7936 00:33:27.630 } 00:33:27.630 ] 00:33:27.630 } 00:33:27.630 } 00:33:27.630 }' 00:33:27.630 04:28:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:27.630 04:28:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:33:27.630 pt2' 00:33:27.630 04:28:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:27.630 04:28:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:33:27.630 04:28:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:27.890 04:28:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:27.890 "name": "pt1", 00:33:27.890 "aliases": [ 00:33:27.890 "00000000-0000-0000-0000-000000000001" 00:33:27.890 ], 00:33:27.890 "product_name": "passthru", 00:33:27.890 "block_size": 4128, 00:33:27.890 "num_blocks": 8192, 00:33:27.890 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:33:27.890 "md_size": 32, 00:33:27.890 "md_interleave": true, 00:33:27.890 "dif_type": 0, 00:33:27.890 "assigned_rate_limits": { 00:33:27.890 "rw_ios_per_sec": 0, 00:33:27.890 "rw_mbytes_per_sec": 0, 00:33:27.890 "r_mbytes_per_sec": 0, 00:33:27.890 "w_mbytes_per_sec": 0 00:33:27.890 }, 00:33:27.890 "claimed": true, 00:33:27.890 "claim_type": "exclusive_write", 00:33:27.890 "zoned": false, 00:33:27.890 "supported_io_types": { 00:33:27.890 "read": true, 00:33:27.890 "write": true, 00:33:27.890 "unmap": true, 00:33:27.890 "flush": true, 00:33:27.890 "reset": true, 00:33:27.890 "nvme_admin": false, 00:33:27.890 "nvme_io": false, 00:33:27.890 "nvme_io_md": false, 00:33:27.890 "write_zeroes": true, 00:33:27.890 "zcopy": true, 00:33:27.890 "get_zone_info": false, 00:33:27.890 "zone_management": false, 00:33:27.890 "zone_append": false, 00:33:27.890 "compare": false, 00:33:27.890 "compare_and_write": false, 00:33:27.890 "abort": true, 00:33:27.890 "seek_hole": false, 00:33:27.890 "seek_data": false, 00:33:27.890 "copy": true, 00:33:27.890 "nvme_iov_md": false 00:33:27.890 }, 00:33:27.890 "memory_domains": [ 00:33:27.890 { 00:33:27.890 "dma_device_id": "system", 00:33:27.890 "dma_device_type": 1 00:33:27.890 }, 00:33:27.890 { 00:33:27.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:27.890 "dma_device_type": 2 00:33:27.890 } 00:33:27.890 ], 00:33:27.890 "driver_specific": { 00:33:27.890 "passthru": { 00:33:27.890 "name": "pt1", 00:33:27.890 "base_bdev_name": "malloc1" 00:33:27.890 } 00:33:27.890 } 00:33:27.890 }' 00:33:27.890 04:28:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:27.890 04:28:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:27.890 04:28:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:33:27.890 04:28:36 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:27.890 04:28:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:28.149 04:28:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:33:28.149 04:28:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:28.149 04:28:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:28.149 04:28:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:33:28.149 04:28:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:28.149 04:28:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:28.149 04:28:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:33:28.149 04:28:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:28.149 04:28:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:33:28.149 04:28:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:28.408 04:28:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:28.408 "name": "pt2", 00:33:28.408 "aliases": [ 00:33:28.408 "00000000-0000-0000-0000-000000000002" 00:33:28.408 ], 00:33:28.408 "product_name": "passthru", 00:33:28.408 "block_size": 4128, 00:33:28.408 "num_blocks": 8192, 00:33:28.408 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:28.408 "md_size": 32, 00:33:28.408 "md_interleave": true, 00:33:28.408 "dif_type": 0, 00:33:28.408 "assigned_rate_limits": { 00:33:28.409 "rw_ios_per_sec": 0, 00:33:28.409 "rw_mbytes_per_sec": 0, 
00:33:28.409 "r_mbytes_per_sec": 0, 00:33:28.409 "w_mbytes_per_sec": 0 00:33:28.409 }, 00:33:28.409 "claimed": true, 00:33:28.409 "claim_type": "exclusive_write", 00:33:28.409 "zoned": false, 00:33:28.409 "supported_io_types": { 00:33:28.409 "read": true, 00:33:28.409 "write": true, 00:33:28.409 "unmap": true, 00:33:28.409 "flush": true, 00:33:28.409 "reset": true, 00:33:28.409 "nvme_admin": false, 00:33:28.409 "nvme_io": false, 00:33:28.409 "nvme_io_md": false, 00:33:28.409 "write_zeroes": true, 00:33:28.409 "zcopy": true, 00:33:28.409 "get_zone_info": false, 00:33:28.409 "zone_management": false, 00:33:28.409 "zone_append": false, 00:33:28.409 "compare": false, 00:33:28.409 "compare_and_write": false, 00:33:28.409 "abort": true, 00:33:28.409 "seek_hole": false, 00:33:28.409 "seek_data": false, 00:33:28.409 "copy": true, 00:33:28.409 "nvme_iov_md": false 00:33:28.409 }, 00:33:28.409 "memory_domains": [ 00:33:28.409 { 00:33:28.409 "dma_device_id": "system", 00:33:28.409 "dma_device_type": 1 00:33:28.409 }, 00:33:28.409 { 00:33:28.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:28.409 "dma_device_type": 2 00:33:28.409 } 00:33:28.409 ], 00:33:28.409 "driver_specific": { 00:33:28.409 "passthru": { 00:33:28.409 "name": "pt2", 00:33:28.409 "base_bdev_name": "malloc2" 00:33:28.409 } 00:33:28.409 } 00:33:28.409 }' 00:33:28.409 04:28:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:28.409 04:28:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:28.409 04:28:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:33:28.409 04:28:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:28.668 04:28:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:28.668 04:28:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 
== 32 ]] 00:33:28.668 04:28:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:28.668 04:28:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:28.668 04:28:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:33:28.668 04:28:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:28.668 04:28:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:28.927 04:28:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:33:28.928 04:28:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:33:28.928 04:28:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:28.928 [2024-07-23 04:28:37.605969] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:28.928 04:28:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=68c59678-0068-4e36-a845-a3a3b2415f43 00:33:28.928 04:28:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # '[' -z 68c59678-0068-4e36-a845-a3a3b2415f43 ']' 00:33:28.928 04:28:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:33:29.187 [2024-07-23 04:28:37.774070] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:29.187 [2024-07-23 04:28:37.774103] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:29.187 [2024-07-23 04:28:37.774198] bdev_raid.c: 486:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:33:29.187 [2024-07-23 04:28:37.774271] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:29.187 [2024-07-23 04:28:37.774294] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040880 name raid_bdev1, state offline 00:33:29.187 04:28:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:29.187 04:28:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:33:29.446 04:28:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:33:29.446 04:28:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:33:29.446 04:28:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:33:29.446 04:28:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:33:29.706 04:28:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:33:29.706 04:28:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:33:29.966 04:28:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:33:29.966 04:28:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:33:29.966 04:28:38 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:33:29.966 04:28:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:33:29.966 04:28:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:33:29.966 04:28:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:33:29.966 04:28:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:33:29.966 04:28:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:29.966 04:28:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:33:29.966 04:28:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:29.966 04:28:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:33:29.966 04:28:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:29.966 04:28:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:33:29.966 04:28:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:33:29.966 04:28:38 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:33:30.225 [2024-07-23 04:28:38.933351] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:33:30.225 [2024-07-23 04:28:38.935715] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:33:30.225 [2024-07-23 04:28:38.935798] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:33:30.225 [2024-07-23 04:28:38.935854] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:33:30.226 [2024-07-23 04:28:38.935878] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:30.226 [2024-07-23 04:28:38.935896] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040e80 name raid_bdev1, state configuring 00:33:30.226 request: 00:33:30.226 { 00:33:30.226 "name": "raid_bdev1", 00:33:30.226 "raid_level": "raid1", 00:33:30.226 "base_bdevs": [ 00:33:30.226 "malloc1", 00:33:30.226 "malloc2" 00:33:30.226 ], 00:33:30.226 "superblock": false, 00:33:30.226 "method": "bdev_raid_create", 00:33:30.226 "req_id": 1 00:33:30.226 } 00:33:30.226 Got JSON-RPC error response 00:33:30.226 response: 00:33:30.226 { 00:33:30.226 "code": -17, 00:33:30.226 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:33:30.226 } 00:33:30.226 04:28:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:33:30.226 04:28:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:30.226 04:28:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:30.226 04:28:38 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:30.226 04:28:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:30.226 04:28:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:33:30.485 04:28:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:33:30.485 04:28:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:33:30.485 04:28:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:30.752 [2024-07-23 04:28:39.382498] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:30.752 [2024-07-23 04:28:39.382570] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:30.752 [2024-07-23 04:28:39.382596] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480 00:33:30.752 [2024-07-23 04:28:39.382615] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:30.752 [2024-07-23 04:28:39.385090] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:30.752 [2024-07-23 04:28:39.385148] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:30.752 [2024-07-23 04:28:39.385219] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:33:30.752 [2024-07-23 04:28:39.385300] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:30.752 pt1 00:33:30.752 04:28:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@467 -- # 
verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:33:30.752 04:28:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:30.752 04:28:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:30.752 04:28:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:30.752 04:28:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:30.752 04:28:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:30.752 04:28:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:30.752 04:28:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:30.752 04:28:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:30.752 04:28:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:30.752 04:28:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:30.752 04:28:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:31.028 04:28:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:31.028 "name": "raid_bdev1", 00:33:31.028 "uuid": "68c59678-0068-4e36-a845-a3a3b2415f43", 00:33:31.028 "strip_size_kb": 0, 00:33:31.028 "state": "configuring", 00:33:31.028 "raid_level": "raid1", 00:33:31.028 "superblock": true, 00:33:31.028 "num_base_bdevs": 2, 00:33:31.028 "num_base_bdevs_discovered": 1, 00:33:31.028 "num_base_bdevs_operational": 2, 
00:33:31.028 "base_bdevs_list": [ 00:33:31.028 { 00:33:31.028 "name": "pt1", 00:33:31.028 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:31.028 "is_configured": true, 00:33:31.028 "data_offset": 256, 00:33:31.028 "data_size": 7936 00:33:31.028 }, 00:33:31.028 { 00:33:31.028 "name": null, 00:33:31.028 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:31.028 "is_configured": false, 00:33:31.028 "data_offset": 256, 00:33:31.028 "data_size": 7936 00:33:31.028 } 00:33:31.028 ] 00:33:31.028 }' 00:33:31.028 04:28:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:31.028 04:28:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:31.596 04:28:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:33:31.596 04:28:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:33:31.596 04:28:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:33:31.596 04:28:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:31.856 [2024-07-23 04:28:40.425322] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:31.856 [2024-07-23 04:28:40.425395] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:31.856 [2024-07-23 04:28:40.425427] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041d80 00:33:31.856 [2024-07-23 04:28:40.425446] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:31.856 [2024-07-23 04:28:40.425696] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:31.856 [2024-07-23 04:28:40.425720] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:31.856 [2024-07-23 04:28:40.425792] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:33:31.856 [2024-07-23 04:28:40.425831] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:31.856 [2024-07-23 04:28:40.425967] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041a80 00:33:31.856 [2024-07-23 04:28:40.425984] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:33:31.856 [2024-07-23 04:28:40.426073] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:33:31.856 [2024-07-23 04:28:40.426203] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041a80 00:33:31.856 [2024-07-23 04:28:40.426217] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041a80 00:33:31.856 [2024-07-23 04:28:40.426315] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:31.856 pt2 00:33:31.856 04:28:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:33:31.856 04:28:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:33:31.856 04:28:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:31.856 04:28:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:31.856 04:28:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:31.856 04:28:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:31.856 04:28:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 
00:33:31.856 04:28:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:31.856 04:28:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:31.856 04:28:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:31.856 04:28:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:31.856 04:28:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:31.856 04:28:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:31.856 04:28:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:32.115 04:28:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:32.115 "name": "raid_bdev1", 00:33:32.115 "uuid": "68c59678-0068-4e36-a845-a3a3b2415f43", 00:33:32.115 "strip_size_kb": 0, 00:33:32.115 "state": "online", 00:33:32.115 "raid_level": "raid1", 00:33:32.115 "superblock": true, 00:33:32.115 "num_base_bdevs": 2, 00:33:32.115 "num_base_bdevs_discovered": 2, 00:33:32.115 "num_base_bdevs_operational": 2, 00:33:32.115 "base_bdevs_list": [ 00:33:32.115 { 00:33:32.115 "name": "pt1", 00:33:32.115 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:32.115 "is_configured": true, 00:33:32.115 "data_offset": 256, 00:33:32.115 "data_size": 7936 00:33:32.115 }, 00:33:32.115 { 00:33:32.115 "name": "pt2", 00:33:32.115 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:32.115 "is_configured": true, 00:33:32.115 "data_offset": 256, 00:33:32.115 "data_size": 7936 00:33:32.115 } 00:33:32.115 ] 00:33:32.115 }' 00:33:32.115 04:28:40 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:32.115 04:28:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:32.683 04:28:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:33:32.683 04:28:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:33:32.683 04:28:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:33:32.683 04:28:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:33:32.683 04:28:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:33:32.683 04:28:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:33:32.683 04:28:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:32.683 04:28:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:33:32.683 [2024-07-23 04:28:41.460437] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:32.942 04:28:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:33:32.942 "name": "raid_bdev1", 00:33:32.942 "aliases": [ 00:33:32.942 "68c59678-0068-4e36-a845-a3a3b2415f43" 00:33:32.942 ], 00:33:32.942 "product_name": "Raid Volume", 00:33:32.942 "block_size": 4128, 00:33:32.942 "num_blocks": 7936, 00:33:32.943 "uuid": "68c59678-0068-4e36-a845-a3a3b2415f43", 00:33:32.943 "md_size": 32, 00:33:32.943 "md_interleave": true, 00:33:32.943 "dif_type": 0, 00:33:32.943 "assigned_rate_limits": { 00:33:32.943 "rw_ios_per_sec": 0, 00:33:32.943 "rw_mbytes_per_sec": 0, 
00:33:32.943 "r_mbytes_per_sec": 0, 00:33:32.943 "w_mbytes_per_sec": 0 00:33:32.943 }, 00:33:32.943 "claimed": false, 00:33:32.943 "zoned": false, 00:33:32.943 "supported_io_types": { 00:33:32.943 "read": true, 00:33:32.943 "write": true, 00:33:32.943 "unmap": false, 00:33:32.943 "flush": false, 00:33:32.943 "reset": true, 00:33:32.943 "nvme_admin": false, 00:33:32.943 "nvme_io": false, 00:33:32.943 "nvme_io_md": false, 00:33:32.943 "write_zeroes": true, 00:33:32.943 "zcopy": false, 00:33:32.943 "get_zone_info": false, 00:33:32.943 "zone_management": false, 00:33:32.943 "zone_append": false, 00:33:32.943 "compare": false, 00:33:32.943 "compare_and_write": false, 00:33:32.943 "abort": false, 00:33:32.943 "seek_hole": false, 00:33:32.943 "seek_data": false, 00:33:32.943 "copy": false, 00:33:32.943 "nvme_iov_md": false 00:33:32.943 }, 00:33:32.943 "memory_domains": [ 00:33:32.943 { 00:33:32.943 "dma_device_id": "system", 00:33:32.943 "dma_device_type": 1 00:33:32.943 }, 00:33:32.943 { 00:33:32.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:32.943 "dma_device_type": 2 00:33:32.943 }, 00:33:32.943 { 00:33:32.943 "dma_device_id": "system", 00:33:32.943 "dma_device_type": 1 00:33:32.943 }, 00:33:32.943 { 00:33:32.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:32.943 "dma_device_type": 2 00:33:32.943 } 00:33:32.943 ], 00:33:32.943 "driver_specific": { 00:33:32.943 "raid": { 00:33:32.943 "uuid": "68c59678-0068-4e36-a845-a3a3b2415f43", 00:33:32.943 "strip_size_kb": 0, 00:33:32.943 "state": "online", 00:33:32.943 "raid_level": "raid1", 00:33:32.943 "superblock": true, 00:33:32.943 "num_base_bdevs": 2, 00:33:32.943 "num_base_bdevs_discovered": 2, 00:33:32.943 "num_base_bdevs_operational": 2, 00:33:32.943 "base_bdevs_list": [ 00:33:32.943 { 00:33:32.943 "name": "pt1", 00:33:32.943 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:32.943 "is_configured": true, 00:33:32.943 "data_offset": 256, 00:33:32.943 "data_size": 7936 00:33:32.943 }, 00:33:32.943 { 
00:33:32.943 "name": "pt2", 00:33:32.943 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:32.943 "is_configured": true, 00:33:32.943 "data_offset": 256, 00:33:32.943 "data_size": 7936 00:33:32.943 } 00:33:32.943 ] 00:33:32.943 } 00:33:32.943 } 00:33:32.943 }' 00:33:32.943 04:28:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:32.943 04:28:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:33:32.943 pt2' 00:33:32.943 04:28:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:32.943 04:28:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:33:32.943 04:28:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:33.202 04:28:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:33.202 "name": "pt1", 00:33:33.202 "aliases": [ 00:33:33.202 "00000000-0000-0000-0000-000000000001" 00:33:33.202 ], 00:33:33.202 "product_name": "passthru", 00:33:33.202 "block_size": 4128, 00:33:33.202 "num_blocks": 8192, 00:33:33.202 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:33.202 "md_size": 32, 00:33:33.202 "md_interleave": true, 00:33:33.202 "dif_type": 0, 00:33:33.202 "assigned_rate_limits": { 00:33:33.202 "rw_ios_per_sec": 0, 00:33:33.202 "rw_mbytes_per_sec": 0, 00:33:33.202 "r_mbytes_per_sec": 0, 00:33:33.202 "w_mbytes_per_sec": 0 00:33:33.202 }, 00:33:33.202 "claimed": true, 00:33:33.202 "claim_type": "exclusive_write", 00:33:33.202 "zoned": false, 00:33:33.202 "supported_io_types": { 00:33:33.202 "read": true, 00:33:33.202 "write": true, 00:33:33.202 "unmap": true, 00:33:33.202 "flush": true, 00:33:33.202 "reset": 
true, 00:33:33.202 "nvme_admin": false, 00:33:33.202 "nvme_io": false, 00:33:33.202 "nvme_io_md": false, 00:33:33.202 "write_zeroes": true, 00:33:33.202 "zcopy": true, 00:33:33.202 "get_zone_info": false, 00:33:33.202 "zone_management": false, 00:33:33.202 "zone_append": false, 00:33:33.202 "compare": false, 00:33:33.202 "compare_and_write": false, 00:33:33.202 "abort": true, 00:33:33.202 "seek_hole": false, 00:33:33.202 "seek_data": false, 00:33:33.202 "copy": true, 00:33:33.202 "nvme_iov_md": false 00:33:33.202 }, 00:33:33.203 "memory_domains": [ 00:33:33.203 { 00:33:33.203 "dma_device_id": "system", 00:33:33.203 "dma_device_type": 1 00:33:33.203 }, 00:33:33.203 { 00:33:33.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:33.203 "dma_device_type": 2 00:33:33.203 } 00:33:33.203 ], 00:33:33.203 "driver_specific": { 00:33:33.203 "passthru": { 00:33:33.203 "name": "pt1", 00:33:33.203 "base_bdev_name": "malloc1" 00:33:33.203 } 00:33:33.203 } 00:33:33.203 }' 00:33:33.203 04:28:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:33.203 04:28:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:33.203 04:28:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:33:33.203 04:28:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:33.203 04:28:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:33.203 04:28:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:33:33.203 04:28:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:33.462 04:28:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:33.462 04:28:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 
00:33:33.462 04:28:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:33.462 04:28:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:33.462 04:28:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:33:33.462 04:28:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:33.462 04:28:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:33:33.462 04:28:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:33.721 04:28:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:33.721 "name": "pt2", 00:33:33.721 "aliases": [ 00:33:33.721 "00000000-0000-0000-0000-000000000002" 00:33:33.721 ], 00:33:33.721 "product_name": "passthru", 00:33:33.721 "block_size": 4128, 00:33:33.721 "num_blocks": 8192, 00:33:33.721 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:33.721 "md_size": 32, 00:33:33.721 "md_interleave": true, 00:33:33.721 "dif_type": 0, 00:33:33.721 "assigned_rate_limits": { 00:33:33.721 "rw_ios_per_sec": 0, 00:33:33.721 "rw_mbytes_per_sec": 0, 00:33:33.721 "r_mbytes_per_sec": 0, 00:33:33.721 "w_mbytes_per_sec": 0 00:33:33.721 }, 00:33:33.721 "claimed": true, 00:33:33.721 "claim_type": "exclusive_write", 00:33:33.721 "zoned": false, 00:33:33.721 "supported_io_types": { 00:33:33.721 "read": true, 00:33:33.721 "write": true, 00:33:33.721 "unmap": true, 00:33:33.721 "flush": true, 00:33:33.721 "reset": true, 00:33:33.721 "nvme_admin": false, 00:33:33.721 "nvme_io": false, 00:33:33.721 "nvme_io_md": false, 00:33:33.721 "write_zeroes": true, 00:33:33.721 "zcopy": true, 00:33:33.721 "get_zone_info": false, 00:33:33.721 "zone_management": false, 
00:33:33.721 "zone_append": false, 00:33:33.721 "compare": false, 00:33:33.721 "compare_and_write": false, 00:33:33.721 "abort": true, 00:33:33.721 "seek_hole": false, 00:33:33.721 "seek_data": false, 00:33:33.721 "copy": true, 00:33:33.721 "nvme_iov_md": false 00:33:33.721 }, 00:33:33.721 "memory_domains": [ 00:33:33.721 { 00:33:33.721 "dma_device_id": "system", 00:33:33.721 "dma_device_type": 1 00:33:33.721 }, 00:33:33.721 { 00:33:33.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:33.721 "dma_device_type": 2 00:33:33.721 } 00:33:33.721 ], 00:33:33.721 "driver_specific": { 00:33:33.721 "passthru": { 00:33:33.721 "name": "pt2", 00:33:33.721 "base_bdev_name": "malloc2" 00:33:33.721 } 00:33:33.721 } 00:33:33.721 }' 00:33:33.721 04:28:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:33.721 04:28:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:33.721 04:28:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:33:33.721 04:28:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:33.721 04:28:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:33.980 04:28:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:33:33.980 04:28:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:33.980 04:28:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:33.980 04:28:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:33:33.980 04:28:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:33.980 04:28:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:33.981 04:28:42 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:33:33.981 04:28:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:33.981 04:28:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:33:34.240 [2024-07-23 04:28:42.908387] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:34.240 04:28:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # '[' 68c59678-0068-4e36-a845-a3a3b2415f43 '!=' 68c59678-0068-4e36-a845-a3a3b2415f43 ']' 00:33:34.240 04:28:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:33:34.240 04:28:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:33:34.240 04:28:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:33:34.240 04:28:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:33:34.499 [2024-07-23 04:28:43.136653] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:33:34.499 04:28:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:34.499 04:28:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:34.499 04:28:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:34.499 04:28:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:34.499 04:28:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 
-- # local strip_size=0 00:33:34.499 04:28:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:34.499 04:28:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:34.499 04:28:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:34.499 04:28:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:34.499 04:28:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:34.499 04:28:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:34.499 04:28:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:34.758 04:28:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:34.758 "name": "raid_bdev1", 00:33:34.758 "uuid": "68c59678-0068-4e36-a845-a3a3b2415f43", 00:33:34.758 "strip_size_kb": 0, 00:33:34.758 "state": "online", 00:33:34.758 "raid_level": "raid1", 00:33:34.758 "superblock": true, 00:33:34.758 "num_base_bdevs": 2, 00:33:34.758 "num_base_bdevs_discovered": 1, 00:33:34.758 "num_base_bdevs_operational": 1, 00:33:34.758 "base_bdevs_list": [ 00:33:34.758 { 00:33:34.758 "name": null, 00:33:34.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:34.758 "is_configured": false, 00:33:34.758 "data_offset": 256, 00:33:34.758 "data_size": 7936 00:33:34.758 }, 00:33:34.758 { 00:33:34.758 "name": "pt2", 00:33:34.758 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:34.758 "is_configured": true, 00:33:34.758 "data_offset": 256, 00:33:34.758 "data_size": 7936 00:33:34.758 } 00:33:34.758 ] 00:33:34.758 }' 00:33:34.758 04:28:43 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:34.758 04:28:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:35.325 04:28:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:33:35.584 [2024-07-23 04:28:44.187447] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:35.585 [2024-07-23 04:28:44.187483] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:35.585 [2024-07-23 04:28:44.187573] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:35.585 [2024-07-23 04:28:44.187636] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:35.585 [2024-07-23 04:28:44.187656] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041a80 name raid_bdev1, state offline 00:33:35.585 04:28:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:35.585 04:28:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:33:35.843 04:28:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:33:35.843 04:28:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:33:35.843 04:28:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:33:35.843 04:28:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:33:35.843 04:28:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:33:36.103 04:28:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:33:36.103 04:28:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:33:36.103 04:28:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:33:36.103 04:28:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:33:36.103 04:28:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@518 -- # i=1 00:33:36.103 04:28:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:36.103 [2024-07-23 04:28:44.873275] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:36.103 [2024-07-23 04:28:44.873364] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:36.103 [2024-07-23 04:28:44.873395] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042080 00:33:36.103 [2024-07-23 04:28:44.873413] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:36.103 [2024-07-23 04:28:44.875878] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:36.103 [2024-07-23 04:28:44.875914] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:36.103 [2024-07-23 04:28:44.875975] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:33:36.103 [2024-07-23 04:28:44.876058] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:36.103 [2024-07-23 04:28:44.876179] 
bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042680 00:33:36.103 [2024-07-23 04:28:44.876201] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:33:36.103 [2024-07-23 04:28:44.876289] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:33:36.103 [2024-07-23 04:28:44.876419] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042680 00:33:36.103 [2024-07-23 04:28:44.876432] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000042680 00:33:36.103 [2024-07-23 04:28:44.876519] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:36.103 pt2 00:33:36.362 04:28:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:36.362 04:28:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:36.362 04:28:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:36.362 04:28:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:36.362 04:28:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:36.362 04:28:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:36.363 04:28:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:36.363 04:28:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:36.363 04:28:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:36.363 04:28:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local 
tmp 00:33:36.363 04:28:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:36.363 04:28:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:36.363 04:28:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:36.363 "name": "raid_bdev1", 00:33:36.363 "uuid": "68c59678-0068-4e36-a845-a3a3b2415f43", 00:33:36.363 "strip_size_kb": 0, 00:33:36.363 "state": "online", 00:33:36.363 "raid_level": "raid1", 00:33:36.363 "superblock": true, 00:33:36.363 "num_base_bdevs": 2, 00:33:36.363 "num_base_bdevs_discovered": 1, 00:33:36.363 "num_base_bdevs_operational": 1, 00:33:36.363 "base_bdevs_list": [ 00:33:36.363 { 00:33:36.363 "name": null, 00:33:36.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:36.363 "is_configured": false, 00:33:36.363 "data_offset": 256, 00:33:36.363 "data_size": 7936 00:33:36.363 }, 00:33:36.363 { 00:33:36.363 "name": "pt2", 00:33:36.363 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:36.363 "is_configured": true, 00:33:36.363 "data_offset": 256, 00:33:36.363 "data_size": 7936 00:33:36.363 } 00:33:36.363 ] 00:33:36.363 }' 00:33:36.363 04:28:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:36.363 04:28:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:36.930 04:28:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:33:37.189 [2024-07-23 04:28:45.831866] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:37.189 [2024-07-23 04:28:45.831905] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state 
changing from online to offline 00:33:37.189 [2024-07-23 04:28:45.831995] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:37.189 [2024-07-23 04:28:45.832058] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:37.189 [2024-07-23 04:28:45.832075] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042680 name raid_bdev1, state offline 00:33:37.189 04:28:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:37.189 04:28:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:33:37.449 04:28:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:33:37.449 04:28:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:33:37.449 04:28:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:33:37.449 04:28:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:37.709 [2024-07-23 04:28:46.272999] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:37.709 [2024-07-23 04:28:46.273067] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:37.709 [2024-07-23 04:28:46.273096] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042980 00:33:37.709 [2024-07-23 04:28:46.273112] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:37.709 [2024-07-23 04:28:46.275608] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:37.709 
[2024-07-23 04:28:46.275644] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:37.709 [2024-07-23 04:28:46.275714] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:33:37.709 [2024-07-23 04:28:46.275813] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:37.709 [2024-07-23 04:28:46.275978] bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:33:37.709 [2024-07-23 04:28:46.275996] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:37.709 [2024-07-23 04:28:46.276022] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042f80 name raid_bdev1, state configuring 00:33:37.709 [2024-07-23 04:28:46.276120] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:37.709 [2024-07-23 04:28:46.276223] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000043280 00:33:37.709 [2024-07-23 04:28:46.276238] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:33:37.709 [2024-07-23 04:28:46.276324] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:33:37.709 [2024-07-23 04:28:46.276444] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000043280 00:33:37.709 [2024-07-23 04:28:46.276461] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000043280 00:33:37.709 [2024-07-23 04:28:46.276562] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:37.709 pt1 00:33:37.709 04:28:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:33:37.709 04:28:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:37.709 
04:28:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:37.709 04:28:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:37.709 04:28:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:37.709 04:28:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:37.709 04:28:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:37.709 04:28:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:37.709 04:28:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:37.709 04:28:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:37.709 04:28:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:37.709 04:28:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:37.709 04:28:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:37.969 04:28:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:37.969 "name": "raid_bdev1", 00:33:37.969 "uuid": "68c59678-0068-4e36-a845-a3a3b2415f43", 00:33:37.969 "strip_size_kb": 0, 00:33:37.969 "state": "online", 00:33:37.969 "raid_level": "raid1", 00:33:37.969 "superblock": true, 00:33:37.969 "num_base_bdevs": 2, 00:33:37.969 "num_base_bdevs_discovered": 1, 00:33:37.969 "num_base_bdevs_operational": 1, 00:33:37.969 "base_bdevs_list": [ 00:33:37.969 { 00:33:37.969 "name": null, 
00:33:37.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:37.969 "is_configured": false, 00:33:37.969 "data_offset": 256, 00:33:37.969 "data_size": 7936 00:33:37.969 }, 00:33:37.969 { 00:33:37.969 "name": "pt2", 00:33:37.969 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:37.969 "is_configured": true, 00:33:37.969 "data_offset": 256, 00:33:37.969 "data_size": 7936 00:33:37.969 } 00:33:37.969 ] 00:33:37.969 }' 00:33:37.969 04:28:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:37.969 04:28:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:38.906 04:28:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:33:38.906 04:28:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:33:38.906 04:28:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:33:38.906 04:28:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:38.906 04:28:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:33:39.165 [2024-07-23 04:28:47.813688] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:39.165 04:28:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # '[' 68c59678-0068-4e36-a845-a3a3b2415f43 '!=' 68c59678-0068-4e36-a845-a3a3b2415f43 ']' 00:33:39.165 04:28:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@562 -- # killprocess 2827810 00:33:39.165 04:28:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@948 -- # 
'[' -z 2827810 ']' 00:33:39.165 04:28:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 2827810 00:33:39.165 04:28:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:33:39.165 04:28:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:39.165 04:28:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2827810 00:33:39.165 04:28:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:39.165 04:28:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:39.165 04:28:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2827810' 00:33:39.165 killing process with pid 2827810 00:33:39.165 04:28:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@967 -- # kill 2827810 00:33:39.165 [2024-07-23 04:28:47.869801] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:39.165 [2024-07-23 04:28:47.869911] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:39.165 04:28:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # wait 2827810 00:33:39.165 [2024-07-23 04:28:47.869970] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:39.165 [2024-07-23 04:28:47.869989] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000043280 name raid_bdev1, state offline 00:33:39.425 [2024-07-23 04:28:48.062201] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:41.330 04:28:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@564 -- # return 0 00:33:41.330 00:33:41.330 real 0m16.748s 
00:33:41.330 user 0m28.505s 00:33:41.330 sys 0m2.935s 00:33:41.330 04:28:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:41.330 04:28:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:41.330 ************************************ 00:33:41.330 END TEST raid_superblock_test_md_interleaved 00:33:41.330 ************************************ 00:33:41.330 04:28:49 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:33:41.330 04:28:49 bdev_raid -- bdev/bdev_raid.sh@914 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:33:41.330 04:28:49 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:33:41.330 04:28:49 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:41.330 04:28:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:41.330 ************************************ 00:33:41.331 START TEST raid_rebuild_test_sb_md_interleaved 00:33:41.331 ************************************ 00:33:41.331 04:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false false 00:33:41.331 04:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:33:41.331 04:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:33:41.331 04:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:33:41.331 04:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:33:41.331 04:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local verify=false 00:33:41.331 04:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:33:41.331 04:28:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:33:41.331 04:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:33:41.331 04:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:33:41.331 04:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:33:41.331 04:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:33:41.331 04:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:33:41.331 04:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:33:41.331 04:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:33:41.331 04:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:33:41.331 04:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:33:41.331 04:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local strip_size 00:33:41.331 04:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local create_arg 00:33:41.331 04:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:33:41.331 04:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local data_offset 00:33:41.331 04:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:33:41.331 04:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:33:41.331 04:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 
00:33:41.331 04:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:33:41.331 04:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # raid_pid=2830808 00:33:41.331 04:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # waitforlisten 2830808 /var/tmp/spdk-raid.sock 00:33:41.331 04:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:33:41.331 04:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 2830808 ']' 00:33:41.331 04:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:33:41.331 04:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:41.331 04:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:33:41.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:33:41.331 04:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:41.331 04:28:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:41.331 [2024-07-23 04:28:50.049187] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:33:41.331 [2024-07-23 04:28:50.049463] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2830808 ] 00:33:41.331 I/O size of 3145728 is greater than zero copy threshold (65536). 00:33:41.331 Zero copy mechanism will not be used. 00:33:41.590 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.590 EAL: Requested device 0000:3d:01.0 cannot be used 00:33:41.590 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.590 EAL: Requested device 0000:3d:01.1 cannot be used 00:33:41.590 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.590 EAL: Requested device 0000:3d:01.2 cannot be used 00:33:41.590 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.590 EAL: Requested device 0000:3d:01.3 cannot be used 00:33:41.590 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.590 EAL: Requested device 0000:3d:01.4 cannot be used 00:33:41.590 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.590 EAL: Requested device 0000:3d:01.5 cannot be used 00:33:41.590 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.590 EAL: Requested device 0000:3d:01.6 cannot be used 00:33:41.590 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.590 EAL: Requested device 0000:3d:01.7 cannot be used 00:33:41.590 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.590 EAL: Requested device 0000:3d:02.0 cannot be used 00:33:41.590 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.590 EAL: Requested device 0000:3d:02.1 cannot be used 00:33:41.590 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.590 EAL: Requested device 0000:3d:02.2 cannot be used 00:33:41.590 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.590 EAL: Requested device 0000:3d:02.3 cannot be used 00:33:41.590 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.590 EAL: Requested device 0000:3d:02.4 cannot be used 00:33:41.590 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.590 EAL: Requested device 0000:3d:02.5 cannot be used 00:33:41.590 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.590 EAL: Requested device 0000:3d:02.6 cannot be used 00:33:41.590 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.590 EAL: Requested device 0000:3d:02.7 cannot be used 00:33:41.590 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.590 EAL: Requested device 0000:3f:01.0 cannot be used 00:33:41.590 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.590 EAL: Requested device 0000:3f:01.1 cannot be used 00:33:41.590 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.590 EAL: Requested device 0000:3f:01.2 cannot be used 00:33:41.590 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.590 EAL: Requested device 0000:3f:01.3 cannot be used 00:33:41.590 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.590 EAL: Requested device 0000:3f:01.4 cannot be used 00:33:41.590 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.590 EAL: Requested device 0000:3f:01.5 cannot be used 00:33:41.590 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.590 EAL: Requested device 0000:3f:01.6 cannot be used 00:33:41.590 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.590 EAL: Requested device 0000:3f:01.7 cannot be used 00:33:41.590 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.590 EAL: Requested device 0000:3f:02.0 cannot be used 00:33:41.590 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.590 EAL: Requested device 0000:3f:02.1 cannot be used 00:33:41.590 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.590 EAL: Requested device 0000:3f:02.2 cannot be used 00:33:41.590 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.590 EAL: Requested device 0000:3f:02.3 cannot be used 00:33:41.590 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.590 EAL: Requested device 0000:3f:02.4 cannot be used 00:33:41.590 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.591 EAL: Requested device 0000:3f:02.5 cannot be used 00:33:41.591 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.591 EAL: Requested device 0000:3f:02.6 cannot be used 00:33:41.591 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.591 EAL: Requested device 0000:3f:02.7 cannot be used 00:33:41.849 [2024-07-23 04:28:50.415168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:42.108 [2024-07-23 04:28:50.716090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:42.367 [2024-07-23 04:28:51.065681] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:42.367 [2024-07-23 04:28:51.065732] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:42.625 04:28:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:42.625 04:28:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:33:42.625 04:28:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:33:42.625 04:28:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 
-i -b BaseBdev1_malloc 00:33:42.884 BaseBdev1_malloc 00:33:42.884 04:28:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:33:43.143 [2024-07-23 04:28:51.734997] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:33:43.143 [2024-07-23 04:28:51.735064] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:43.143 [2024-07-23 04:28:51.735095] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:33:43.143 [2024-07-23 04:28:51.735115] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:43.143 [2024-07-23 04:28:51.737561] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:43.143 [2024-07-23 04:28:51.737600] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:43.143 BaseBdev1 00:33:43.143 04:28:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:33:43.143 04:28:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:33:43.401 BaseBdev2_malloc 00:33:43.401 04:28:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:33:43.659 [2024-07-23 04:28:52.248564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:33:43.660 [2024-07-23 04:28:52.248631] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:43.660 [2024-07-23 04:28:52.248658] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:33:43.660 [2024-07-23 04:28:52.248679] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:43.660 [2024-07-23 04:28:52.251085] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:43.660 [2024-07-23 04:28:52.251122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:33:43.660 BaseBdev2 00:33:43.660 04:28:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:33:43.984 spare_malloc 00:33:43.984 04:28:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:33:44.243 spare_delay 00:33:44.243 04:28:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:33:44.243 [2024-07-23 04:28:52.958460] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:44.243 [2024-07-23 04:28:52.958519] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:44.243 [2024-07-23 04:28:52.958550] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480 00:33:44.243 [2024-07-23 04:28:52.958574] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:44.243 [2024-07-23 04:28:52.961042] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:44.243 [2024-07-23 04:28:52.961077] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 
00:33:44.243 spare 00:33:44.243 04:28:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:33:44.501 [2024-07-23 04:28:53.179099] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:44.501 [2024-07-23 04:28:53.181477] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:44.501 [2024-07-23 04:28:53.181736] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041a80 00:33:44.501 [2024-07-23 04:28:53.181762] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:33:44.501 [2024-07-23 04:28:53.181864] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:33:44.501 [2024-07-23 04:28:53.182008] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041a80 00:33:44.501 [2024-07-23 04:28:53.182022] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041a80 00:33:44.501 [2024-07-23 04:28:53.182129] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:44.501 04:28:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:44.501 04:28:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:44.501 04:28:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:44.501 04:28:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:44.501 04:28:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:44.501 04:28:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:44.501 04:28:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:44.501 04:28:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:44.501 04:28:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:44.501 04:28:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:44.501 04:28:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:44.501 04:28:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:44.759 04:28:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:44.759 "name": "raid_bdev1", 00:33:44.759 "uuid": "956b10cb-da88-4337-bae4-51621ee8c4c2", 00:33:44.759 "strip_size_kb": 0, 00:33:44.759 "state": "online", 00:33:44.759 "raid_level": "raid1", 00:33:44.759 "superblock": true, 00:33:44.759 "num_base_bdevs": 2, 00:33:44.759 "num_base_bdevs_discovered": 2, 00:33:44.759 "num_base_bdevs_operational": 2, 00:33:44.759 "base_bdevs_list": [ 00:33:44.759 { 00:33:44.759 "name": "BaseBdev1", 00:33:44.759 "uuid": "d9bff1c9-5580-547e-8dd6-b049dee93df9", 00:33:44.759 "is_configured": true, 00:33:44.759 "data_offset": 256, 00:33:44.759 "data_size": 7936 00:33:44.759 }, 00:33:44.759 { 00:33:44.759 "name": "BaseBdev2", 00:33:44.759 "uuid": "84d53eb3-ee0a-5a45-a383-a7b1fffca23d", 00:33:44.759 "is_configured": true, 00:33:44.759 "data_offset": 256, 00:33:44.759 "data_size": 7936 00:33:44.759 } 00:33:44.759 ] 00:33:44.759 }' 00:33:44.759 04:28:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:44.759 04:28:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:45.325 04:28:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:45.325 04:28:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:33:45.584 [2024-07-23 04:28:54.170075] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:45.584 04:28:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:33:45.584 04:28:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:45.584 04:28:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:33:45.843 04:28:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:33:45.843 04:28:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:33:45.843 04:28:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@623 -- # '[' false = true ']' 00:33:45.843 04:28:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:33:45.843 [2024-07-23 04:28:54.614938] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:46.102 04:28:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:46.102 04:28:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:46.102 04:28:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:46.102 04:28:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:46.102 04:28:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:46.102 04:28:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:46.102 04:28:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:46.102 04:28:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:46.102 04:28:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:46.102 04:28:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:46.102 04:28:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:46.102 04:28:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:46.360 04:28:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:46.360 "name": "raid_bdev1", 00:33:46.360 "uuid": "956b10cb-da88-4337-bae4-51621ee8c4c2", 00:33:46.360 "strip_size_kb": 0, 00:33:46.360 "state": "online", 00:33:46.360 "raid_level": "raid1", 00:33:46.360 "superblock": true, 00:33:46.360 "num_base_bdevs": 2, 00:33:46.360 "num_base_bdevs_discovered": 1, 00:33:46.360 "num_base_bdevs_operational": 1, 00:33:46.360 "base_bdevs_list": [ 00:33:46.360 { 00:33:46.360 "name": null, 00:33:46.360 "uuid": "00000000-0000-0000-0000-000000000000", 
00:33:46.360 "is_configured": false, 00:33:46.360 "data_offset": 256, 00:33:46.360 "data_size": 7936 00:33:46.360 }, 00:33:46.360 { 00:33:46.360 "name": "BaseBdev2", 00:33:46.360 "uuid": "84d53eb3-ee0a-5a45-a383-a7b1fffca23d", 00:33:46.360 "is_configured": true, 00:33:46.360 "data_offset": 256, 00:33:46.360 "data_size": 7936 00:33:46.360 } 00:33:46.360 ] 00:33:46.360 }' 00:33:46.360 04:28:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:46.360 04:28:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:47.295 04:28:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:33:47.295 [2024-07-23 04:28:55.958662] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:47.295 [2024-07-23 04:28:55.982525] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:33:47.295 [2024-07-23 04:28:55.984866] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:47.295 04:28:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # sleep 1 00:33:48.231 04:28:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:48.231 04:28:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:48.231 04:28:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:48.231 04:28:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:48.231 04:28:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:48.231 04:28:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:48.231 04:28:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:48.490 04:28:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:48.490 "name": "raid_bdev1", 00:33:48.490 "uuid": "956b10cb-da88-4337-bae4-51621ee8c4c2", 00:33:48.490 "strip_size_kb": 0, 00:33:48.490 "state": "online", 00:33:48.490 "raid_level": "raid1", 00:33:48.490 "superblock": true, 00:33:48.490 "num_base_bdevs": 2, 00:33:48.490 "num_base_bdevs_discovered": 2, 00:33:48.490 "num_base_bdevs_operational": 2, 00:33:48.490 "process": { 00:33:48.490 "type": "rebuild", 00:33:48.490 "target": "spare", 00:33:48.490 "progress": { 00:33:48.490 "blocks": 3072, 00:33:48.490 "percent": 38 00:33:48.490 } 00:33:48.490 }, 00:33:48.490 "base_bdevs_list": [ 00:33:48.490 { 00:33:48.490 "name": "spare", 00:33:48.490 "uuid": "7ef6dee1-990b-56e9-aa23-96c1596c032c", 00:33:48.490 "is_configured": true, 00:33:48.490 "data_offset": 256, 00:33:48.490 "data_size": 7936 00:33:48.490 }, 00:33:48.490 { 00:33:48.490 "name": "BaseBdev2", 00:33:48.490 "uuid": "84d53eb3-ee0a-5a45-a383-a7b1fffca23d", 00:33:48.490 "is_configured": true, 00:33:48.490 "data_offset": 256, 00:33:48.490 "data_size": 7936 00:33:48.490 } 00:33:48.490 ] 00:33:48.490 }' 00:33:48.490 04:28:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:48.490 04:28:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:48.491 04:28:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:48.749 04:28:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:48.749 04:28:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:33:48.749 [2024-07-23 04:28:57.513799] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:49.008 [2024-07-23 04:28:57.597955] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:49.008 [2024-07-23 04:28:57.598028] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:49.008 [2024-07-23 04:28:57.598050] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:49.008 [2024-07-23 04:28:57.598069] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:49.008 04:28:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:49.008 04:28:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:49.008 04:28:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:49.008 04:28:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:49.008 04:28:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:49.008 04:28:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:49.008 04:28:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:49.008 04:28:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:49.008 04:28:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:49.008 04:28:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:49.008 04:28:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:49.008 04:28:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:49.267 04:28:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:49.267 "name": "raid_bdev1", 00:33:49.267 "uuid": "956b10cb-da88-4337-bae4-51621ee8c4c2", 00:33:49.267 "strip_size_kb": 0, 00:33:49.267 "state": "online", 00:33:49.267 "raid_level": "raid1", 00:33:49.267 "superblock": true, 00:33:49.267 "num_base_bdevs": 2, 00:33:49.267 "num_base_bdevs_discovered": 1, 00:33:49.267 "num_base_bdevs_operational": 1, 00:33:49.267 "base_bdevs_list": [ 00:33:49.267 { 00:33:49.267 "name": null, 00:33:49.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:49.267 "is_configured": false, 00:33:49.267 "data_offset": 256, 00:33:49.267 "data_size": 7936 00:33:49.267 }, 00:33:49.267 { 00:33:49.267 "name": "BaseBdev2", 00:33:49.267 "uuid": "84d53eb3-ee0a-5a45-a383-a7b1fffca23d", 00:33:49.267 "is_configured": true, 00:33:49.267 "data_offset": 256, 00:33:49.267 "data_size": 7936 00:33:49.267 } 00:33:49.267 ] 00:33:49.267 }' 00:33:49.267 04:28:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:49.267 04:28:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:50.202 04:28:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:50.202 04:28:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_name=raid_bdev1 00:33:50.202 04:28:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:50.202 04:28:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:50.202 04:28:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:50.202 04:28:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:50.202 04:28:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:50.202 04:28:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:50.202 "name": "raid_bdev1", 00:33:50.202 "uuid": "956b10cb-da88-4337-bae4-51621ee8c4c2", 00:33:50.202 "strip_size_kb": 0, 00:33:50.202 "state": "online", 00:33:50.202 "raid_level": "raid1", 00:33:50.202 "superblock": true, 00:33:50.202 "num_base_bdevs": 2, 00:33:50.202 "num_base_bdevs_discovered": 1, 00:33:50.202 "num_base_bdevs_operational": 1, 00:33:50.202 "base_bdevs_list": [ 00:33:50.202 { 00:33:50.202 "name": null, 00:33:50.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:50.203 "is_configured": false, 00:33:50.203 "data_offset": 256, 00:33:50.203 "data_size": 7936 00:33:50.203 }, 00:33:50.203 { 00:33:50.203 "name": "BaseBdev2", 00:33:50.203 "uuid": "84d53eb3-ee0a-5a45-a383-a7b1fffca23d", 00:33:50.203 "is_configured": true, 00:33:50.203 "data_offset": 256, 00:33:50.203 "data_size": 7936 00:33:50.203 } 00:33:50.203 ] 00:33:50.203 }' 00:33:50.203 04:28:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:50.203 04:28:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:50.203 04:28:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:50.203 04:28:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:50.203 04:28:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:33:50.461 [2024-07-23 04:28:59.192466] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:50.461 [2024-07-23 04:28:59.218356] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:33:50.461 [2024-07-23 04:28:59.220654] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:50.461 04:28:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # sleep 1 00:33:51.838 04:29:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:51.839 04:29:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:51.839 04:29:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:51.839 04:29:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:51.839 04:29:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:51.839 04:29:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:51.839 04:29:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:51.839 04:29:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:51.839 "name": "raid_bdev1", 00:33:51.839 "uuid": "956b10cb-da88-4337-bae4-51621ee8c4c2", 00:33:51.839 "strip_size_kb": 0, 00:33:51.839 "state": "online", 00:33:51.839 "raid_level": "raid1", 00:33:51.839 "superblock": true, 00:33:51.839 "num_base_bdevs": 2, 00:33:51.839 "num_base_bdevs_discovered": 2, 00:33:51.839 "num_base_bdevs_operational": 2, 00:33:51.839 "process": { 00:33:51.839 "type": "rebuild", 00:33:51.839 "target": "spare", 00:33:51.839 "progress": { 00:33:51.839 "blocks": 3072, 00:33:51.839 "percent": 38 00:33:51.839 } 00:33:51.839 }, 00:33:51.839 "base_bdevs_list": [ 00:33:51.839 { 00:33:51.839 "name": "spare", 00:33:51.839 "uuid": "7ef6dee1-990b-56e9-aa23-96c1596c032c", 00:33:51.839 "is_configured": true, 00:33:51.839 "data_offset": 256, 00:33:51.839 "data_size": 7936 00:33:51.839 }, 00:33:51.839 { 00:33:51.839 "name": "BaseBdev2", 00:33:51.839 "uuid": "84d53eb3-ee0a-5a45-a383-a7b1fffca23d", 00:33:51.839 "is_configured": true, 00:33:51.839 "data_offset": 256, 00:33:51.839 "data_size": 7936 00:33:51.839 } 00:33:51.839 ] 00:33:51.839 }' 00:33:51.839 04:29:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:51.839 04:29:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:51.839 04:29:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:51.839 04:29:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:51.839 04:29:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:33:51.839 04:29:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:33:51.839 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: 
line 665: [: =: unary operator expected 00:33:51.839 04:29:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:33:51.839 04:29:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:33:51.839 04:29:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:33:51.839 04:29:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@705 -- # local timeout=1226 00:33:51.839 04:29:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:33:51.839 04:29:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:51.839 04:29:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:51.839 04:29:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:51.839 04:29:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:51.839 04:29:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:51.839 04:29:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:51.839 04:29:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:52.097 04:29:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:52.097 "name": "raid_bdev1", 00:33:52.097 "uuid": "956b10cb-da88-4337-bae4-51621ee8c4c2", 00:33:52.097 "strip_size_kb": 0, 00:33:52.097 "state": "online", 00:33:52.097 "raid_level": "raid1", 00:33:52.097 "superblock": true, 00:33:52.097 
"num_base_bdevs": 2, 00:33:52.097 "num_base_bdevs_discovered": 2, 00:33:52.097 "num_base_bdevs_operational": 2, 00:33:52.097 "process": { 00:33:52.097 "type": "rebuild", 00:33:52.097 "target": "spare", 00:33:52.097 "progress": { 00:33:52.097 "blocks": 3584, 00:33:52.097 "percent": 45 00:33:52.097 } 00:33:52.097 }, 00:33:52.097 "base_bdevs_list": [ 00:33:52.097 { 00:33:52.097 "name": "spare", 00:33:52.097 "uuid": "7ef6dee1-990b-56e9-aa23-96c1596c032c", 00:33:52.097 "is_configured": true, 00:33:52.097 "data_offset": 256, 00:33:52.097 "data_size": 7936 00:33:52.097 }, 00:33:52.097 { 00:33:52.097 "name": "BaseBdev2", 00:33:52.097 "uuid": "84d53eb3-ee0a-5a45-a383-a7b1fffca23d", 00:33:52.097 "is_configured": true, 00:33:52.097 "data_offset": 256, 00:33:52.097 "data_size": 7936 00:33:52.097 } 00:33:52.097 ] 00:33:52.097 }' 00:33:52.097 04:29:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:52.097 04:29:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:52.097 04:29:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:52.097 04:29:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:52.097 04:29:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:33:53.033 04:29:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:33:53.033 04:29:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:53.033 04:29:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:53.033 04:29:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:53.033 04:29:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:53.033 04:29:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:53.033 04:29:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:53.033 04:29:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:53.292 04:29:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:53.292 "name": "raid_bdev1", 00:33:53.292 "uuid": "956b10cb-da88-4337-bae4-51621ee8c4c2", 00:33:53.292 "strip_size_kb": 0, 00:33:53.292 "state": "online", 00:33:53.292 "raid_level": "raid1", 00:33:53.292 "superblock": true, 00:33:53.292 "num_base_bdevs": 2, 00:33:53.292 "num_base_bdevs_discovered": 2, 00:33:53.292 "num_base_bdevs_operational": 2, 00:33:53.292 "process": { 00:33:53.292 "type": "rebuild", 00:33:53.292 "target": "spare", 00:33:53.292 "progress": { 00:33:53.292 "blocks": 6912, 00:33:53.292 "percent": 87 00:33:53.292 } 00:33:53.292 }, 00:33:53.292 "base_bdevs_list": [ 00:33:53.292 { 00:33:53.292 "name": "spare", 00:33:53.292 "uuid": "7ef6dee1-990b-56e9-aa23-96c1596c032c", 00:33:53.292 "is_configured": true, 00:33:53.292 "data_offset": 256, 00:33:53.292 "data_size": 7936 00:33:53.292 }, 00:33:53.292 { 00:33:53.292 "name": "BaseBdev2", 00:33:53.292 "uuid": "84d53eb3-ee0a-5a45-a383-a7b1fffca23d", 00:33:53.292 "is_configured": true, 00:33:53.292 "data_offset": 256, 00:33:53.292 "data_size": 7936 00:33:53.292 } 00:33:53.292 ] 00:33:53.292 }' 00:33:53.292 04:29:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:53.292 04:29:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:33:53.292 04:29:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:53.292 04:29:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:53.292 04:29:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:33:53.860 [2024-07-23 04:29:02.345740] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:33:53.860 [2024-07-23 04:29:02.345818] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:33:53.860 [2024-07-23 04:29:02.345925] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:54.429 04:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:33:54.429 04:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:54.429 04:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:54.429 04:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:54.429 04:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:54.429 04:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:54.429 04:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:54.429 04:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:54.688 04:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 
00:33:54.688 "name": "raid_bdev1", 00:33:54.688 "uuid": "956b10cb-da88-4337-bae4-51621ee8c4c2", 00:33:54.688 "strip_size_kb": 0, 00:33:54.688 "state": "online", 00:33:54.688 "raid_level": "raid1", 00:33:54.688 "superblock": true, 00:33:54.688 "num_base_bdevs": 2, 00:33:54.688 "num_base_bdevs_discovered": 2, 00:33:54.688 "num_base_bdevs_operational": 2, 00:33:54.688 "base_bdevs_list": [ 00:33:54.688 { 00:33:54.688 "name": "spare", 00:33:54.688 "uuid": "7ef6dee1-990b-56e9-aa23-96c1596c032c", 00:33:54.688 "is_configured": true, 00:33:54.688 "data_offset": 256, 00:33:54.688 "data_size": 7936 00:33:54.688 }, 00:33:54.688 { 00:33:54.688 "name": "BaseBdev2", 00:33:54.688 "uuid": "84d53eb3-ee0a-5a45-a383-a7b1fffca23d", 00:33:54.688 "is_configured": true, 00:33:54.688 "data_offset": 256, 00:33:54.688 "data_size": 7936 00:33:54.688 } 00:33:54.688 ] 00:33:54.688 }' 00:33:54.688 04:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:54.688 04:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:33:54.688 04:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:54.688 04:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:33:54.688 04:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # break 00:33:54.688 04:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:54.688 04:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:54.688 04:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:54.688 04:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 
00:33:54.688 04:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:54.688 04:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:54.688 04:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:54.948 04:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:54.948 "name": "raid_bdev1", 00:33:54.948 "uuid": "956b10cb-da88-4337-bae4-51621ee8c4c2", 00:33:54.948 "strip_size_kb": 0, 00:33:54.948 "state": "online", 00:33:54.948 "raid_level": "raid1", 00:33:54.948 "superblock": true, 00:33:54.948 "num_base_bdevs": 2, 00:33:54.948 "num_base_bdevs_discovered": 2, 00:33:54.948 "num_base_bdevs_operational": 2, 00:33:54.948 "base_bdevs_list": [ 00:33:54.948 { 00:33:54.948 "name": "spare", 00:33:54.948 "uuid": "7ef6dee1-990b-56e9-aa23-96c1596c032c", 00:33:54.948 "is_configured": true, 00:33:54.948 "data_offset": 256, 00:33:54.948 "data_size": 7936 00:33:54.948 }, 00:33:54.948 { 00:33:54.948 "name": "BaseBdev2", 00:33:54.948 "uuid": "84d53eb3-ee0a-5a45-a383-a7b1fffca23d", 00:33:54.948 "is_configured": true, 00:33:54.948 "data_offset": 256, 00:33:54.948 "data_size": 7936 00:33:54.948 } 00:33:54.948 ] 00:33:54.948 }' 00:33:54.948 04:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:54.948 04:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:54.948 04:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:54.948 04:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:54.948 04:29:03 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:54.948 04:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:54.948 04:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:54.948 04:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:54.948 04:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:54.948 04:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:54.948 04:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:54.948 04:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:54.948 04:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:54.948 04:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:54.948 04:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:54.948 04:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:55.207 04:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:55.207 "name": "raid_bdev1", 00:33:55.207 "uuid": "956b10cb-da88-4337-bae4-51621ee8c4c2", 00:33:55.207 "strip_size_kb": 0, 00:33:55.207 "state": "online", 00:33:55.207 "raid_level": "raid1", 00:33:55.207 "superblock": true, 00:33:55.207 "num_base_bdevs": 2, 00:33:55.207 
"num_base_bdevs_discovered": 2, 00:33:55.207 "num_base_bdevs_operational": 2, 00:33:55.207 "base_bdevs_list": [ 00:33:55.207 { 00:33:55.207 "name": "spare", 00:33:55.207 "uuid": "7ef6dee1-990b-56e9-aa23-96c1596c032c", 00:33:55.207 "is_configured": true, 00:33:55.207 "data_offset": 256, 00:33:55.207 "data_size": 7936 00:33:55.207 }, 00:33:55.207 { 00:33:55.207 "name": "BaseBdev2", 00:33:55.207 "uuid": "84d53eb3-ee0a-5a45-a383-a7b1fffca23d", 00:33:55.207 "is_configured": true, 00:33:55.207 "data_offset": 256, 00:33:55.207 "data_size": 7936 00:33:55.207 } 00:33:55.207 ] 00:33:55.207 }' 00:33:55.207 04:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:55.207 04:29:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:55.776 04:29:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:33:56.035 [2024-07-23 04:29:04.629118] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:56.035 [2024-07-23 04:29:04.629164] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:56.035 [2024-07-23 04:29:04.629259] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:56.035 [2024-07-23 04:29:04.629339] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:56.035 [2024-07-23 04:29:04.629356] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041a80 name raid_bdev1, state offline 00:33:56.035 04:29:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:56.035 04:29:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@719 -- # jq length 00:33:56.294 04:29:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:33:56.294 04:29:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@721 -- # '[' false = true ']' 00:33:56.294 04:29:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:33:56.294 04:29:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:33:56.553 04:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:33:56.553 [2024-07-23 04:29:05.230690] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:56.553 [2024-07-23 04:29:05.230758] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:56.553 [2024-07-23 04:29:05.230789] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042680 00:33:56.553 [2024-07-23 04:29:05.230808] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:56.553 [2024-07-23 04:29:05.233334] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:56.553 [2024-07-23 04:29:05.233369] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:56.553 [2024-07-23 04:29:05.233450] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:33:56.553 [2024-07-23 04:29:05.233522] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:56.553 [2024-07-23 04:29:05.233662] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:56.553 spare 00:33:56.553 04:29:05 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:56.553 04:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:56.553 04:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:56.553 04:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:56.553 04:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:56.553 04:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:56.553 04:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:56.553 04:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:56.553 04:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:56.553 04:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:56.553 04:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:56.553 04:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:56.553 [2024-07-23 04:29:05.334006] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042c80 00:33:56.553 [2024-07-23 04:29:05.334048] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:33:56.553 [2024-07-23 04:29:05.334177] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000108b0 00:33:56.554 [2024-07-23 
04:29:05.334353] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042c80 00:33:56.554 [2024-07-23 04:29:05.334368] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000042c80 00:33:56.554 [2024-07-23 04:29:05.334479] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:56.813 04:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:56.813 "name": "raid_bdev1", 00:33:56.813 "uuid": "956b10cb-da88-4337-bae4-51621ee8c4c2", 00:33:56.813 "strip_size_kb": 0, 00:33:56.813 "state": "online", 00:33:56.813 "raid_level": "raid1", 00:33:56.813 "superblock": true, 00:33:56.813 "num_base_bdevs": 2, 00:33:56.813 "num_base_bdevs_discovered": 2, 00:33:56.813 "num_base_bdevs_operational": 2, 00:33:56.813 "base_bdevs_list": [ 00:33:56.813 { 00:33:56.813 "name": "spare", 00:33:56.813 "uuid": "7ef6dee1-990b-56e9-aa23-96c1596c032c", 00:33:56.813 "is_configured": true, 00:33:56.813 "data_offset": 256, 00:33:56.813 "data_size": 7936 00:33:56.813 }, 00:33:56.813 { 00:33:56.813 "name": "BaseBdev2", 00:33:56.813 "uuid": "84d53eb3-ee0a-5a45-a383-a7b1fffca23d", 00:33:56.813 "is_configured": true, 00:33:56.813 "data_offset": 256, 00:33:56.813 "data_size": 7936 00:33:56.813 } 00:33:56.813 ] 00:33:56.813 }' 00:33:56.813 04:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:56.813 04:29:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:57.381 04:29:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:57.381 04:29:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:57.381 04:29:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:57.381 
04:29:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:57.381 04:29:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:57.381 04:29:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:57.381 04:29:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:57.641 04:29:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:57.641 "name": "raid_bdev1", 00:33:57.641 "uuid": "956b10cb-da88-4337-bae4-51621ee8c4c2", 00:33:57.641 "strip_size_kb": 0, 00:33:57.641 "state": "online", 00:33:57.641 "raid_level": "raid1", 00:33:57.641 "superblock": true, 00:33:57.641 "num_base_bdevs": 2, 00:33:57.641 "num_base_bdevs_discovered": 2, 00:33:57.641 "num_base_bdevs_operational": 2, 00:33:57.641 "base_bdevs_list": [ 00:33:57.641 { 00:33:57.641 "name": "spare", 00:33:57.641 "uuid": "7ef6dee1-990b-56e9-aa23-96c1596c032c", 00:33:57.641 "is_configured": true, 00:33:57.641 "data_offset": 256, 00:33:57.641 "data_size": 7936 00:33:57.641 }, 00:33:57.641 { 00:33:57.641 "name": "BaseBdev2", 00:33:57.641 "uuid": "84d53eb3-ee0a-5a45-a383-a7b1fffca23d", 00:33:57.641 "is_configured": true, 00:33:57.641 "data_offset": 256, 00:33:57.641 "data_size": 7936 00:33:57.641 } 00:33:57.641 ] 00:33:57.641 }' 00:33:57.641 04:29:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:57.641 04:29:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:57.641 04:29:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:57.641 04:29:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:57.641 04:29:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:57.641 04:29:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:33:57.943 04:29:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:33:57.943 04:29:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:33:58.206 [2024-07-23 04:29:06.710795] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:58.206 04:29:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:58.206 04:29:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:58.206 04:29:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:58.206 04:29:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:58.206 04:29:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:58.206 04:29:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:58.206 04:29:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:58.206 04:29:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:58.206 04:29:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 
-- # local num_base_bdevs_discovered 00:33:58.206 04:29:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:58.206 04:29:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:58.206 04:29:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:58.206 04:29:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:58.206 "name": "raid_bdev1", 00:33:58.206 "uuid": "956b10cb-da88-4337-bae4-51621ee8c4c2", 00:33:58.206 "strip_size_kb": 0, 00:33:58.206 "state": "online", 00:33:58.206 "raid_level": "raid1", 00:33:58.206 "superblock": true, 00:33:58.206 "num_base_bdevs": 2, 00:33:58.206 "num_base_bdevs_discovered": 1, 00:33:58.206 "num_base_bdevs_operational": 1, 00:33:58.206 "base_bdevs_list": [ 00:33:58.206 { 00:33:58.206 "name": null, 00:33:58.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:58.206 "is_configured": false, 00:33:58.206 "data_offset": 256, 00:33:58.206 "data_size": 7936 00:33:58.206 }, 00:33:58.206 { 00:33:58.206 "name": "BaseBdev2", 00:33:58.206 "uuid": "84d53eb3-ee0a-5a45-a383-a7b1fffca23d", 00:33:58.206 "is_configured": true, 00:33:58.206 "data_offset": 256, 00:33:58.206 "data_size": 7936 00:33:58.206 } 00:33:58.206 ] 00:33:58.206 }' 00:33:58.206 04:29:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:58.206 04:29:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:58.774 04:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:33:59.034 [2024-07-23 04:29:07.617263] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:59.034 [2024-07-23 04:29:07.617465] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:33:59.034 [2024-07-23 04:29:07.617490] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:33:59.034 [2024-07-23 04:29:07.617533] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:59.034 [2024-07-23 04:29:07.642258] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010980 00:33:59.034 [2024-07-23 04:29:07.644592] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:59.034 04:29:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # sleep 1 00:33:59.970 04:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:59.970 04:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:59.970 04:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:59.970 04:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:59.970 04:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:59.970 04:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:59.970 04:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:00.229 04:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 
00:34:00.229 "name": "raid_bdev1", 00:34:00.229 "uuid": "956b10cb-da88-4337-bae4-51621ee8c4c2", 00:34:00.229 "strip_size_kb": 0, 00:34:00.229 "state": "online", 00:34:00.229 "raid_level": "raid1", 00:34:00.229 "superblock": true, 00:34:00.229 "num_base_bdevs": 2, 00:34:00.229 "num_base_bdevs_discovered": 2, 00:34:00.229 "num_base_bdevs_operational": 2, 00:34:00.229 "process": { 00:34:00.229 "type": "rebuild", 00:34:00.229 "target": "spare", 00:34:00.229 "progress": { 00:34:00.229 "blocks": 3072, 00:34:00.229 "percent": 38 00:34:00.229 } 00:34:00.229 }, 00:34:00.229 "base_bdevs_list": [ 00:34:00.229 { 00:34:00.229 "name": "spare", 00:34:00.229 "uuid": "7ef6dee1-990b-56e9-aa23-96c1596c032c", 00:34:00.229 "is_configured": true, 00:34:00.229 "data_offset": 256, 00:34:00.229 "data_size": 7936 00:34:00.229 }, 00:34:00.229 { 00:34:00.229 "name": "BaseBdev2", 00:34:00.229 "uuid": "84d53eb3-ee0a-5a45-a383-a7b1fffca23d", 00:34:00.229 "is_configured": true, 00:34:00.229 "data_offset": 256, 00:34:00.229 "data_size": 7936 00:34:00.229 } 00:34:00.229 ] 00:34:00.229 }' 00:34:00.229 04:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:00.229 04:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:00.229 04:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:00.229 04:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:00.229 04:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:34:00.488 [2024-07-23 04:29:09.153484] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:00.488 [2024-07-23 04:29:09.156802] bdev_raid.c:2541:raid_bdev_process_finish_done: 
*WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:34:00.488 [2024-07-23 04:29:09.156862] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:00.488 [2024-07-23 04:29:09.156883] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:00.488 [2024-07-23 04:29:09.156901] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:34:00.488 04:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:00.488 04:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:00.488 04:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:00.488 04:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:34:00.488 04:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:34:00.488 04:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:34:00.488 04:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:00.488 04:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:00.488 04:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:00.488 04:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:00.488 04:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:00.488 04:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:34:00.747 04:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:00.747 "name": "raid_bdev1", 00:34:00.747 "uuid": "956b10cb-da88-4337-bae4-51621ee8c4c2", 00:34:00.747 "strip_size_kb": 0, 00:34:00.747 "state": "online", 00:34:00.747 "raid_level": "raid1", 00:34:00.747 "superblock": true, 00:34:00.747 "num_base_bdevs": 2, 00:34:00.747 "num_base_bdevs_discovered": 1, 00:34:00.747 "num_base_bdevs_operational": 1, 00:34:00.747 "base_bdevs_list": [ 00:34:00.747 { 00:34:00.747 "name": null, 00:34:00.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:00.747 "is_configured": false, 00:34:00.747 "data_offset": 256, 00:34:00.747 "data_size": 7936 00:34:00.747 }, 00:34:00.747 { 00:34:00.747 "name": "BaseBdev2", 00:34:00.747 "uuid": "84d53eb3-ee0a-5a45-a383-a7b1fffca23d", 00:34:00.747 "is_configured": true, 00:34:00.747 "data_offset": 256, 00:34:00.747 "data_size": 7936 00:34:00.747 } 00:34:00.747 ] 00:34:00.747 }' 00:34:00.747 04:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:00.747 04:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:01.314 04:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:34:01.573 [2024-07-23 04:29:10.240589] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:34:01.573 [2024-07-23 04:29:10.240656] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:01.573 [2024-07-23 04:29:10.240686] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043280 00:34:01.573 [2024-07-23 04:29:10.240705] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:01.573 [2024-07-23 
04:29:10.240970] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:01.573 [2024-07-23 04:29:10.240992] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:34:01.573 [2024-07-23 04:29:10.241063] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:34:01.573 [2024-07-23 04:29:10.241083] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:34:01.573 [2024-07-23 04:29:10.241099] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:34:01.573 [2024-07-23 04:29:10.241128] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:01.573 [2024-07-23 04:29:10.265859] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010a50 00:34:01.573 spare 00:34:01.573 [2024-07-23 04:29:10.268184] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:01.573 04:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # sleep 1 00:34:02.509 04:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:02.509 04:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:02.509 04:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:02.509 04:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:02.509 04:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:02.509 04:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:02.509 04:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:02.768 04:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:02.768 "name": "raid_bdev1", 00:34:02.768 "uuid": "956b10cb-da88-4337-bae4-51621ee8c4c2", 00:34:02.768 "strip_size_kb": 0, 00:34:02.768 "state": "online", 00:34:02.768 "raid_level": "raid1", 00:34:02.768 "superblock": true, 00:34:02.768 "num_base_bdevs": 2, 00:34:02.768 "num_base_bdevs_discovered": 2, 00:34:02.768 "num_base_bdevs_operational": 2, 00:34:02.768 "process": { 00:34:02.768 "type": "rebuild", 00:34:02.768 "target": "spare", 00:34:02.768 "progress": { 00:34:02.768 "blocks": 2816, 00:34:02.768 "percent": 35 00:34:02.768 } 00:34:02.768 }, 00:34:02.768 "base_bdevs_list": [ 00:34:02.768 { 00:34:02.768 "name": "spare", 00:34:02.768 "uuid": "7ef6dee1-990b-56e9-aa23-96c1596c032c", 00:34:02.768 "is_configured": true, 00:34:02.768 "data_offset": 256, 00:34:02.768 "data_size": 7936 00:34:02.768 }, 00:34:02.768 { 00:34:02.768 "name": "BaseBdev2", 00:34:02.768 "uuid": "84d53eb3-ee0a-5a45-a383-a7b1fffca23d", 00:34:02.768 "is_configured": true, 00:34:02.768 "data_offset": 256, 00:34:02.768 "data_size": 7936 00:34:02.768 } 00:34:02.768 ] 00:34:02.768 }' 00:34:02.768 04:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:02.768 04:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:02.768 04:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:03.027 04:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:03.027 04:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@766 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:34:03.027 [2024-07-23 04:29:11.752970] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:03.027 [2024-07-23 04:29:11.780461] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:34:03.027 [2024-07-23 04:29:11.780520] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:03.027 [2024-07-23 04:29:11.780547] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:03.027 [2024-07-23 04:29:11.780559] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:34:03.286 04:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:03.286 04:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:03.286 04:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:03.286 04:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:34:03.286 04:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:34:03.286 04:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:34:03.286 04:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:03.286 04:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:03.286 04:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:03.286 04:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 
00:34:03.286 04:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:03.286 04:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:03.545 04:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:03.545 "name": "raid_bdev1", 00:34:03.545 "uuid": "956b10cb-da88-4337-bae4-51621ee8c4c2", 00:34:03.545 "strip_size_kb": 0, 00:34:03.545 "state": "online", 00:34:03.545 "raid_level": "raid1", 00:34:03.545 "superblock": true, 00:34:03.545 "num_base_bdevs": 2, 00:34:03.545 "num_base_bdevs_discovered": 1, 00:34:03.545 "num_base_bdevs_operational": 1, 00:34:03.545 "base_bdevs_list": [ 00:34:03.545 { 00:34:03.545 "name": null, 00:34:03.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:03.545 "is_configured": false, 00:34:03.545 "data_offset": 256, 00:34:03.545 "data_size": 7936 00:34:03.545 }, 00:34:03.545 { 00:34:03.545 "name": "BaseBdev2", 00:34:03.545 "uuid": "84d53eb3-ee0a-5a45-a383-a7b1fffca23d", 00:34:03.545 "is_configured": true, 00:34:03.545 "data_offset": 256, 00:34:03.545 "data_size": 7936 00:34:03.545 } 00:34:03.545 ] 00:34:03.545 }' 00:34:03.545 04:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:03.545 04:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:04.112 04:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:04.112 04:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:04.112 04:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:34:04.112 04:29:12 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:34:04.113 04:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:04.113 04:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:04.113 04:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:04.113 04:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:04.113 "name": "raid_bdev1", 00:34:04.113 "uuid": "956b10cb-da88-4337-bae4-51621ee8c4c2", 00:34:04.113 "strip_size_kb": 0, 00:34:04.113 "state": "online", 00:34:04.113 "raid_level": "raid1", 00:34:04.113 "superblock": true, 00:34:04.113 "num_base_bdevs": 2, 00:34:04.113 "num_base_bdevs_discovered": 1, 00:34:04.113 "num_base_bdevs_operational": 1, 00:34:04.113 "base_bdevs_list": [ 00:34:04.113 { 00:34:04.113 "name": null, 00:34:04.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:04.113 "is_configured": false, 00:34:04.113 "data_offset": 256, 00:34:04.113 "data_size": 7936 00:34:04.113 }, 00:34:04.113 { 00:34:04.113 "name": "BaseBdev2", 00:34:04.113 "uuid": "84d53eb3-ee0a-5a45-a383-a7b1fffca23d", 00:34:04.113 "is_configured": true, 00:34:04.113 "data_offset": 256, 00:34:04.113 "data_size": 7936 00:34:04.113 } 00:34:04.113 ] 00:34:04.113 }' 00:34:04.113 04:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:04.372 04:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:34:04.372 04:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:04.372 04:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:34:04.372 04:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:34:04.631 04:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:34:04.631 [2024-07-23 04:29:13.390206] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:34:04.631 [2024-07-23 04:29:13.390267] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:04.631 [2024-07-23 04:29:13.390299] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043880 00:34:04.631 [2024-07-23 04:29:13.390316] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:04.631 [2024-07-23 04:29:13.390557] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:04.631 [2024-07-23 04:29:13.390576] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:34:04.631 [2024-07-23 04:29:13.390639] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:34:04.631 [2024-07-23 04:29:13.390656] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:34:04.631 [2024-07-23 04:29:13.390672] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:34:04.631 BaseBdev1 00:34:04.631 04:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # sleep 1 00:34:06.008 04:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 
00:34:06.008 04:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:06.008 04:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:06.008 04:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:34:06.008 04:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:34:06.008 04:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:34:06.008 04:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:06.008 04:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:06.008 04:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:06.008 04:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:06.008 04:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:06.008 04:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:06.008 04:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:06.008 "name": "raid_bdev1", 00:34:06.008 "uuid": "956b10cb-da88-4337-bae4-51621ee8c4c2", 00:34:06.008 "strip_size_kb": 0, 00:34:06.008 "state": "online", 00:34:06.008 "raid_level": "raid1", 00:34:06.008 "superblock": true, 00:34:06.008 "num_base_bdevs": 2, 00:34:06.008 "num_base_bdevs_discovered": 1, 00:34:06.008 "num_base_bdevs_operational": 1, 00:34:06.008 "base_bdevs_list": [ 00:34:06.008 { 00:34:06.008 "name": 
null, 00:34:06.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:06.008 "is_configured": false, 00:34:06.008 "data_offset": 256, 00:34:06.008 "data_size": 7936 00:34:06.008 }, 00:34:06.008 { 00:34:06.008 "name": "BaseBdev2", 00:34:06.008 "uuid": "84d53eb3-ee0a-5a45-a383-a7b1fffca23d", 00:34:06.008 "is_configured": true, 00:34:06.008 "data_offset": 256, 00:34:06.008 "data_size": 7936 00:34:06.008 } 00:34:06.008 ] 00:34:06.008 }' 00:34:06.008 04:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:06.008 04:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:06.575 04:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:06.575 04:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:06.575 04:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:34:06.575 04:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:34:06.576 04:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:06.576 04:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:06.576 04:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:06.576 04:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:06.576 "name": "raid_bdev1", 00:34:06.576 "uuid": "956b10cb-da88-4337-bae4-51621ee8c4c2", 00:34:06.576 "strip_size_kb": 0, 00:34:06.576 "state": "online", 00:34:06.576 "raid_level": "raid1", 00:34:06.576 "superblock": true, 
00:34:06.576 "num_base_bdevs": 2, 00:34:06.576 "num_base_bdevs_discovered": 1, 00:34:06.576 "num_base_bdevs_operational": 1, 00:34:06.576 "base_bdevs_list": [ 00:34:06.576 { 00:34:06.576 "name": null, 00:34:06.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:06.576 "is_configured": false, 00:34:06.576 "data_offset": 256, 00:34:06.576 "data_size": 7936 00:34:06.576 }, 00:34:06.576 { 00:34:06.576 "name": "BaseBdev2", 00:34:06.576 "uuid": "84d53eb3-ee0a-5a45-a383-a7b1fffca23d", 00:34:06.576 "is_configured": true, 00:34:06.576 "data_offset": 256, 00:34:06.576 "data_size": 7936 00:34:06.576 } 00:34:06.576 ] 00:34:06.576 }' 00:34:06.576 04:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:06.576 04:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:34:06.576 04:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:06.835 04:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:34:06.835 04:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:06.835 04:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:34:06.835 04:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:06.835 04:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:34:06.835 04:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:06.835 04:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:34:06.835 04:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:06.835 04:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:34:06.835 04:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:06.835 04:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:34:06.835 04:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:34:06.835 04:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:06.835 [2024-07-23 04:29:15.556115] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:06.835 [2024-07-23 04:29:15.556289] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:34:06.835 [2024-07-23 04:29:15.556310] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:34:06.835 request: 00:34:06.835 { 00:34:06.835 "base_bdev": "BaseBdev1", 00:34:06.835 "raid_bdev": "raid_bdev1", 00:34:06.835 "method": "bdev_raid_add_base_bdev", 00:34:06.835 "req_id": 1 00:34:06.835 } 00:34:06.835 Got JSON-RPC error response 00:34:06.835 response: 
00:34:06.835 { 00:34:06.835 "code": -22, 00:34:06.835 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:34:06.835 } 00:34:06.835 04:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:34:06.835 04:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:06.835 04:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:06.835 04:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:06.835 04:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # sleep 1 00:34:08.214 04:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:08.214 04:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:08.214 04:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:08.214 04:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:34:08.214 04:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:34:08.214 04:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:34:08.214 04:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:08.214 04:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:08.214 04:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:08.214 04:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:08.214 04:29:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:08.214 04:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:08.214 04:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:08.214 "name": "raid_bdev1", 00:34:08.214 "uuid": "956b10cb-da88-4337-bae4-51621ee8c4c2", 00:34:08.214 "strip_size_kb": 0, 00:34:08.214 "state": "online", 00:34:08.214 "raid_level": "raid1", 00:34:08.214 "superblock": true, 00:34:08.214 "num_base_bdevs": 2, 00:34:08.214 "num_base_bdevs_discovered": 1, 00:34:08.214 "num_base_bdevs_operational": 1, 00:34:08.214 "base_bdevs_list": [ 00:34:08.214 { 00:34:08.214 "name": null, 00:34:08.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:08.214 "is_configured": false, 00:34:08.214 "data_offset": 256, 00:34:08.214 "data_size": 7936 00:34:08.214 }, 00:34:08.214 { 00:34:08.214 "name": "BaseBdev2", 00:34:08.214 "uuid": "84d53eb3-ee0a-5a45-a383-a7b1fffca23d", 00:34:08.214 "is_configured": true, 00:34:08.214 "data_offset": 256, 00:34:08.214 "data_size": 7936 00:34:08.214 } 00:34:08.214 ] 00:34:08.214 }' 00:34:08.214 04:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:08.214 04:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:08.782 04:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:08.782 04:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:08.782 04:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:34:08.782 04:29:17 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:34:08.782 04:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:08.782 04:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:08.782 04:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:08.782 04:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:08.782 "name": "raid_bdev1", 00:34:08.782 "uuid": "956b10cb-da88-4337-bae4-51621ee8c4c2", 00:34:08.782 "strip_size_kb": 0, 00:34:08.782 "state": "online", 00:34:08.782 "raid_level": "raid1", 00:34:08.782 "superblock": true, 00:34:08.782 "num_base_bdevs": 2, 00:34:08.782 "num_base_bdevs_discovered": 1, 00:34:08.783 "num_base_bdevs_operational": 1, 00:34:08.783 "base_bdevs_list": [ 00:34:08.783 { 00:34:08.783 "name": null, 00:34:08.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:08.783 "is_configured": false, 00:34:08.783 "data_offset": 256, 00:34:08.783 "data_size": 7936 00:34:08.783 }, 00:34:08.783 { 00:34:08.783 "name": "BaseBdev2", 00:34:08.783 "uuid": "84d53eb3-ee0a-5a45-a383-a7b1fffca23d", 00:34:08.783 "is_configured": true, 00:34:08.783 "data_offset": 256, 00:34:08.783 "data_size": 7936 00:34:08.783 } 00:34:08.783 ] 00:34:08.783 }' 00:34:08.783 04:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:09.042 04:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:34:09.042 04:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:09.042 04:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:34:09.042 04:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@782 -- # killprocess 2830808 00:34:09.042 04:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 2830808 ']' 00:34:09.042 04:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 2830808 00:34:09.042 04:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:34:09.042 04:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:09.042 04:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2830808 00:34:09.042 04:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:09.042 04:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:09.042 04:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2830808' 00:34:09.042 killing process with pid 2830808 00:34:09.042 04:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@967 -- # kill 2830808 00:34:09.042 Received shutdown signal, test time was about 60.000000 seconds 00:34:09.042 00:34:09.042 Latency(us) 00:34:09.042 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:09.042 =================================================================================================================== 00:34:09.042 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:34:09.043 [2024-07-23 04:29:17.669279] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:09.043 [2024-07-23 04:29:17.669411] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:34:09.043 04:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # wait 2830808 00:34:09.043 [2024-07-23 04:29:17.669473] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:09.043 [2024-07-23 04:29:17.669490] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042c80 name raid_bdev1, state offline 00:34:09.302 [2024-07-23 04:29:17.986488] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:11.210 04:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # return 0 00:34:11.210 00:34:11.210 real 0m29.859s 00:34:11.210 user 0m45.388s 00:34:11.210 sys 0m3.888s 00:34:11.210 04:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:11.210 04:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:11.210 ************************************ 00:34:11.210 END TEST raid_rebuild_test_sb_md_interleaved 00:34:11.210 ************************************ 00:34:11.210 04:29:19 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:34:11.210 04:29:19 bdev_raid -- bdev/bdev_raid.sh@916 -- # trap - EXIT 00:34:11.210 04:29:19 bdev_raid -- bdev/bdev_raid.sh@917 -- # cleanup 00:34:11.210 04:29:19 bdev_raid -- bdev/bdev_raid.sh@58 -- # '[' -n 2830808 ']' 00:34:11.210 04:29:19 bdev_raid -- bdev/bdev_raid.sh@58 -- # ps -p 2830808 00:34:11.210 04:29:19 bdev_raid -- bdev/bdev_raid.sh@62 -- # rm -rf /raidtest 00:34:11.210 00:34:11.210 real 20m15.630s 00:34:11.210 user 32m18.514s 00:34:11.210 sys 3m25.102s 00:34:11.210 04:29:19 bdev_raid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:11.210 04:29:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:11.210 ************************************ 00:34:11.210 END TEST bdev_raid 00:34:11.210 ************************************ 00:34:11.210 04:29:19 -- 
common/autotest_common.sh@1142 -- # return 0 00:34:11.210 04:29:19 -- spdk/autotest.sh@191 -- # run_test bdevperf_config /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test_config.sh 00:34:11.210 04:29:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:11.210 04:29:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:11.210 04:29:19 -- common/autotest_common.sh@10 -- # set +x 00:34:11.210 ************************************ 00:34:11.210 START TEST bdevperf_config 00:34:11.210 ************************************ 00:34:11.210 04:29:19 bdevperf_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test_config.sh 00:34:11.210 * Looking for test storage... 00:34:11.210 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf 00:34:11.210 04:29:19 bdevperf_config -- bdevperf/test_config.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/common.sh 00:34:11.210 04:29:19 bdevperf_config -- bdevperf/common.sh@5 -- # bdevperf=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf 00:34:11.210 04:29:19 bdevperf_config -- bdevperf/test_config.sh@12 -- # jsonconf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json 00:34:11.211 04:29:19 bdevperf_config -- bdevperf/test_config.sh@13 -- # testconf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:34:11.211 04:29:19 bdevperf_config -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:11.211 04:29:19 bdevperf_config -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:34:11.211 04:29:19 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:34:11.211 04:29:19 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=read 00:34:11.211 04:29:19 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:34:11.211 04:29:19 
bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:34:11.211 04:29:19 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:34:11.211 04:29:19 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:34:11.211 04:29:19 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:34:11.211 00:34:11.211 04:29:19 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:34:11.211 04:29:19 bdevperf_config -- bdevperf/test_config.sh@18 -- # create_job job0 00:34:11.211 04:29:19 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:34:11.211 04:29:19 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:34:11.211 04:29:19 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:34:11.211 04:29:19 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:34:11.211 04:29:19 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:34:11.211 04:29:19 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:34:11.211 00:34:11.211 04:29:19 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:34:11.211 04:29:19 bdevperf_config -- bdevperf/test_config.sh@19 -- # create_job job1 00:34:11.211 04:29:19 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:34:11.211 04:29:19 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:34:11.211 04:29:19 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:34:11.211 04:29:19 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:34:11.211 04:29:19 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:34:11.211 04:29:19 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:34:11.211 00:34:11.211 04:29:19 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:34:11.211 04:29:19 bdevperf_config -- bdevperf/test_config.sh@20 -- # create_job job2 00:34:11.211 04:29:19 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:34:11.211 04:29:19 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 
00:34:11.211 04:29:19 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:34:11.211 04:29:19 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:34:11.211 04:29:19 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:34:11.211 04:29:19 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:34:11.211 00:34:11.211 04:29:19 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:34:11.211 04:29:19 bdevperf_config -- bdevperf/test_config.sh@21 -- # create_job job3 00:34:11.211 04:29:19 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:34:11.211 04:29:19 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:34:11.211 04:29:19 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:34:11.211 04:29:19 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:34:11.211 04:29:19 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:34:11.211 04:29:19 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:34:11.211 00:34:11.211 04:29:19 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:34:11.470 04:29:19 bdevperf_config -- bdevperf/test_config.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:34:16.775 04:29:25 bdevperf_config -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-07-23 04:29:20.098435] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:34:16.775 [2024-07-23 04:29:20.098555] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2836253 ] 00:34:16.775 Using job config with 4 jobs 00:34:16.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.775 EAL: Requested device 0000:3d:01.0 cannot be used 00:34:16.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.775 EAL: Requested device 0000:3d:01.1 cannot be used 00:34:16.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.775 EAL: Requested device 0000:3d:01.2 cannot be used 00:34:16.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.775 EAL: Requested device 0000:3d:01.3 cannot be used 00:34:16.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.775 EAL: Requested device 0000:3d:01.4 cannot be used 00:34:16.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.775 EAL: Requested device 0000:3d:01.5 cannot be used 00:34:16.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.775 EAL: Requested device 0000:3d:01.6 cannot be used 00:34:16.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.775 EAL: Requested device 0000:3d:01.7 cannot be used 00:34:16.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.775 EAL: Requested device 0000:3d:02.0 cannot be used 00:34:16.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.775 EAL: Requested device 0000:3d:02.1 cannot be used 00:34:16.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.775 EAL: Requested device 0000:3d:02.2 cannot be used 00:34:16.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.775 EAL: Requested 
device 0000:3d:02.3 cannot be used 00:34:16.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.775 EAL: Requested device 0000:3d:02.4 cannot be used 00:34:16.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.775 EAL: Requested device 0000:3d:02.5 cannot be used 00:34:16.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.775 EAL: Requested device 0000:3d:02.6 cannot be used 00:34:16.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.775 EAL: Requested device 0000:3d:02.7 cannot be used 00:34:16.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.775 EAL: Requested device 0000:3f:01.0 cannot be used 00:34:16.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.775 EAL: Requested device 0000:3f:01.1 cannot be used 00:34:16.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.775 EAL: Requested device 0000:3f:01.2 cannot be used 00:34:16.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.775 EAL: Requested device 0000:3f:01.3 cannot be used 00:34:16.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.775 EAL: Requested device 0000:3f:01.4 cannot be used 00:34:16.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.775 EAL: Requested device 0000:3f:01.5 cannot be used 00:34:16.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.775 EAL: Requested device 0000:3f:01.6 cannot be used 00:34:16.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.775 EAL: Requested device 0000:3f:01.7 cannot be used 00:34:16.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.775 EAL: Requested device 0000:3f:02.0 cannot be used 00:34:16.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.775 EAL: Requested device 0000:3f:02.1 
cannot be used 00:34:16.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.775 EAL: Requested device 0000:3f:02.2 cannot be used 00:34:16.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.775 EAL: Requested device 0000:3f:02.3 cannot be used 00:34:16.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.775 EAL: Requested device 0000:3f:02.4 cannot be used 00:34:16.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.775 EAL: Requested device 0000:3f:02.5 cannot be used 00:34:16.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.775 EAL: Requested device 0000:3f:02.6 cannot be used 00:34:16.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.775 EAL: Requested device 0000:3f:02.7 cannot be used 00:34:16.775 [2024-07-23 04:29:20.343450] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:16.775 [2024-07-23 04:29:20.646874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:16.775 cpumask for '\''job0'\'' is too big 00:34:16.775 cpumask for '\''job1'\'' is too big 00:34:16.775 cpumask for '\''job2'\'' is too big 00:34:16.775 cpumask for '\''job3'\'' is too big 00:34:16.775 Running I/O for 2 seconds... 
00:34:16.775 00:34:16.775 Latency(us) 00:34:16.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:16.775 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:16.775 Malloc0 : 2.02 23417.43 22.87 0.00 0.00 10919.77 1966.08 17196.65 00:34:16.775 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:16.775 Malloc0 : 2.02 23396.41 22.85 0.00 0.00 10904.43 1926.76 15204.35 00:34:16.775 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:16.775 Malloc0 : 2.03 23375.59 22.83 0.00 0.00 10887.51 1992.29 13212.06 00:34:16.775 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:16.775 Malloc0 : 2.03 23354.75 22.81 0.00 0.00 10871.26 1952.97 11272.19 00:34:16.775 =================================================================================================================== 00:34:16.775 Total : 93544.19 91.35 0.00 0.00 10895.74 1926.76 17196.65' 00:34:16.775 04:29:25 bdevperf_config -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-07-23 04:29:20.098435] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:34:16.775 [... duplicate of the captured bdevperf output omitted: DPDK EAL parameters, QAT device-allocation messages, cpumask notices, and the latency table shown above ...]' 00:34:16.775 04:29:25 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[... duplicate of the captured bdevperf output omitted ...]' 00:34:16.776 04:29:25 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:34:16.777 04:29:25 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:34:16.777 04:29:25 bdevperf_config -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:34:16.777 04:29:25 bdevperf_config -- bdevperf/test_config.sh@25 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -C -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:34:16.777 [2024-07-23 04:29:25.509880] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
00:34:16.777 [2024-07-23 04:29:25.510010] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2837116 ] 00:34:17.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:17.036 EAL: Requested device 0000:3d:01.0 cannot be used 00:34:17.036 [... same 'Reached maximum number of QAT devices' / 'EAL: Requested device ... cannot be used' messages repeated for the remaining 0000:3d and 0000:3f QAT functions, as in the first run above ...] 00:34:17.037 [2024-07-23 04:29:25.757451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:17.295 [2024-07-23 04:29:26.066048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:18.231 cpumask for 'job0' is too big 00:34:18.231 cpumask for 'job1' is too big 00:34:18.231 cpumask for 'job2' is too big 00:34:18.231 cpumask for 'job3' is too big 00:34:22.418 04:29:30 bdevperf_config -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:34:22.418 Running I/O for 2 seconds... 
00:34:22.418 00:34:22.418 Latency(us) 00:34:22.418 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:22.418 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:22.418 Malloc0 : 2.02 23364.70 22.82 0.00 0.00 10945.49 1979.19 17196.65 00:34:22.418 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:22.418 Malloc0 : 2.02 23343.53 22.80 0.00 0.00 10930.22 1926.76 15204.35 00:34:22.418 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:22.418 Malloc0 : 2.02 23322.49 22.78 0.00 0.00 10915.06 1939.87 13264.49 00:34:22.418 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:22.418 Malloc0 : 2.02 23395.66 22.85 0.00 0.00 10853.85 943.72 11272.19 00:34:22.418 =================================================================================================================== 00:34:22.418 Total : 93426.38 91.24 0.00 0.00 10911.08 943.72 17196.65' 00:34:22.418 04:29:30 bdevperf_config -- bdevperf/test_config.sh@27 -- # cleanup 00:34:22.418 04:29:30 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:34:22.418 04:29:30 bdevperf_config -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:34:22.418 04:29:30 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:34:22.418 04:29:30 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:34:22.418 04:29:30 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:34:22.418 04:29:30 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:34:22.418 04:29:30 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:34:22.418 04:29:30 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:34:22.418 00:34:22.418 04:29:30 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:34:22.418 04:29:30 bdevperf_config -- bdevperf/test_config.sh@30 -- # create_job 
job1 write Malloc0 00:34:22.418 04:29:30 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:34:22.418 04:29:30 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:34:22.418 04:29:30 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:34:22.418 04:29:30 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:34:22.418 04:29:30 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:34:22.418 04:29:30 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:34:22.418 00:34:22.418 04:29:30 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:34:22.418 04:29:30 bdevperf_config -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:34:22.419 04:29:30 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:34:22.419 04:29:30 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:34:22.419 04:29:30 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:34:22.419 04:29:30 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:34:22.419 04:29:30 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:34:22.419 04:29:30 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:34:22.419 00:34:22.419 04:29:30 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:34:22.419 04:29:30 bdevperf_config -- bdevperf/test_config.sh@32 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:34:27.690 04:29:36 bdevperf_config -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-07-23 04:29:30.814166] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:34:27.690 [2024-07-23 04:29:30.814284] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2838095 ] 00:34:27.691 Using job config with 3 jobs 00:34:27.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.691 EAL: Requested device 0000:3d:01.0 cannot be used 00:34:27.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.691 EAL: Requested device 0000:3d:01.1 cannot be used 00:34:27.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.691 EAL: Requested device 0000:3d:01.2 cannot be used 00:34:27.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.691 EAL: Requested device 0000:3d:01.3 cannot be used 00:34:27.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.691 EAL: Requested device 0000:3d:01.4 cannot be used 00:34:27.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.691 EAL: Requested device 0000:3d:01.5 cannot be used 00:34:27.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.691 EAL: Requested device 0000:3d:01.6 cannot be used 00:34:27.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.691 EAL: Requested device 0000:3d:01.7 cannot be used 00:34:27.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.691 EAL: Requested device 0000:3d:02.0 cannot be used 00:34:27.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.691 EAL: Requested device 0000:3d:02.1 cannot be used 00:34:27.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.691 EAL: Requested device 0000:3d:02.2 cannot be used 00:34:27.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.691 EAL: Requested 
device 0000:3d:02.3 cannot be used 00:34:27.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.691 EAL: Requested device 0000:3d:02.4 cannot be used 00:34:27.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.691 EAL: Requested device 0000:3d:02.5 cannot be used 00:34:27.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.691 EAL: Requested device 0000:3d:02.6 cannot be used 00:34:27.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.691 EAL: Requested device 0000:3d:02.7 cannot be used 00:34:27.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.691 EAL: Requested device 0000:3f:01.0 cannot be used 00:34:27.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.691 EAL: Requested device 0000:3f:01.1 cannot be used 00:34:27.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.691 EAL: Requested device 0000:3f:01.2 cannot be used 00:34:27.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.691 EAL: Requested device 0000:3f:01.3 cannot be used 00:34:27.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.691 EAL: Requested device 0000:3f:01.4 cannot be used 00:34:27.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.691 EAL: Requested device 0000:3f:01.5 cannot be used 00:34:27.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.691 EAL: Requested device 0000:3f:01.6 cannot be used 00:34:27.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.691 EAL: Requested device 0000:3f:01.7 cannot be used 00:34:27.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.691 EAL: Requested device 0000:3f:02.0 cannot be used 00:34:27.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.691 EAL: Requested device 0000:3f:02.1 
cannot be used 00:34:27.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.691 EAL: Requested device 0000:3f:02.2 cannot be used 00:34:27.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.691 EAL: Requested device 0000:3f:02.3 cannot be used 00:34:27.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.691 EAL: Requested device 0000:3f:02.4 cannot be used 00:34:27.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.691 EAL: Requested device 0000:3f:02.5 cannot be used 00:34:27.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.691 EAL: Requested device 0000:3f:02.6 cannot be used 00:34:27.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.691 EAL: Requested device 0000:3f:02.7 cannot be used 00:34:27.691 [2024-07-23 04:29:31.057955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:27.691 [2024-07-23 04:29:31.362502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:27.691 cpumask for '\''job0'\'' is too big 00:34:27.691 cpumask for '\''job1'\'' is too big 00:34:27.691 cpumask for '\''job2'\'' is too big 00:34:27.691 Running I/O for 2 seconds... 
00:34:27.691
00:34:27.691 Latency(us)
00:34:27.691 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:27.691 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:34:27.691 Malloc0 : 2.01 31579.83 30.84 0.00 0.00 8096.17 1913.65 12111.05
00:34:27.691 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:34:27.691 Malloc0 : 2.02 31593.13 30.85 0.00 0.00 8073.66 1887.44 10171.19
00:34:27.691 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:34:27.691 Malloc0 : 2.02 31564.81 30.83 0.00 0.00 8062.45 1913.65 8441.04
00:34:27.691 ===================================================================================================================
00:34:27.691 Total : 94737.77 92.52 0.00 0.00 8077.40 1887.44 12111.05'
00:34:27.691 04:29:36 bdevperf_config -- bdevperf/test_config.sh@33 -- # get_num_jobs '...'
00:34:27.692 04:29:36 bdevperf_config -- bdevperf/common.sh@32 -- # echo '...'
00:34:27.692 04:29:36 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs'
00:34:27.692 04:29:36 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+'
00:34:27.692 04:29:36 bdevperf_config -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]]
00:34:27.692 04:29:36 bdevperf_config -- bdevperf/test_config.sh@35 -- # cleanup
00:34:27.692 04:29:36 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf
00:34:27.692 04:29:36 bdevperf_config -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1
00:34:27.692 04:29:36 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global
00:34:27.692 04:29:36 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=rw
00:34:27.692 04:29:36 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:34:27.692
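The job-count assertion traced above reduces to a small grep pipeline over the captured bdevperf output. A minimal standalone sketch of that check (the `bdevperf_output` value here is an illustrative stand-in for the real captured log; the grep patterns are the ones traced from bdevperf/common.sh@32):

```shell
#!/bin/sh
# Extract N from the "Using job config with N jobs" line in captured bdevperf
# output, then assert it matches the expected job count, as the test does.
bdevperf_output='Using job config with 3 jobs'   # stand-in for the real capture
num_jobs=$(echo "$bdevperf_output" \
  | grep -oE 'Using job config with [0-9]+ jobs' \
  | grep -oE '[0-9]+')
[ "$num_jobs" = "3" ] && echo "job count OK"
```

The two-stage grep first isolates the full sentence so that stray numbers elsewhere in the log (timestamps, IOPS figures) cannot match, then strips it down to the digits.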
04:29:36 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]]
00:34:27.692 04:29:36 bdevperf_config -- bdevperf/common.sh@13 -- # cat
00:34:27.692 04:29:36 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]'
00:34:27.692 04:29:36 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:34:27.692
00:34:27.692 04:29:36 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:34:27.692 04:29:36 bdevperf_config -- bdevperf/test_config.sh@38 -- # create_job job0
00:34:27.692 04:29:36 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0
00:34:27.692 04:29:36 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=
00:34:27.692 04:29:36 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=
00:34:27.692 04:29:36 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]]
00:34:27.692 04:29:36 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]'
00:34:27.692 04:29:36 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:34:27.692
00:34:27.692 04:29:36 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:34:27.692 04:29:36 bdevperf_config -- bdevperf/test_config.sh@39 -- # create_job job1
00:34:27.692 04:29:36 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1
00:34:27.692 04:29:36 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=
00:34:27.692 04:29:36 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=
00:34:27.692 04:29:36 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]]
00:34:27.692 04:29:36 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]'
00:34:27.692 04:29:36 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:34:27.692
00:34:27.692 04:29:36 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:34:27.692 04:29:36 bdevperf_config -- bdevperf/test_config.sh@40 -- # create_job job2
00:34:27.692 04:29:36 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2
00:34:27.692 04:29:36 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=
00:34:27.692 04:29:36 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=
00:34:27.692 04:29:36 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]]
00:34:27.692 04:29:36 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]'
00:34:27.692 04:29:36 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:34:27.692
00:34:27.692 04:29:36 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:34:27.692 04:29:36 bdevperf_config -- bdevperf/test_config.sh@41 -- # create_job job3
00:34:27.692 04:29:36 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3
00:34:27.692 04:29:36 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=
00:34:27.692 04:29:36 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=
00:34:27.692 04:29:36 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]]
00:34:27.692 04:29:36 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]'
00:34:27.692 04:29:36 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:34:27.692
00:34:27.692 04:29:36 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:34:27.693 04:29:36 bdevperf_config -- bdevperf/test_config.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf
00:34:32.970 04:29:41 bdevperf_config -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-07-23 04:29:36.242007] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
00:34:32.970 [2024-07-23 04:29:36.242126] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2838905 ] 00:34:32.970 Using job config with 4 jobs 00:34:32.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:32.970 EAL: Requested device 0000:3d:01.0 cannot be used 00:34:32.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:32.970 EAL: Requested device 0000:3d:01.1 cannot be used 00:34:32.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:32.970 EAL: Requested device 0000:3d:01.2 cannot be used 00:34:32.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:32.970 EAL: Requested device 0000:3d:01.3 cannot be used 00:34:32.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:32.970 EAL: Requested device 0000:3d:01.4 cannot be used 00:34:32.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:32.970 EAL: Requested device 0000:3d:01.5 cannot be used 00:34:32.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:32.970 EAL: Requested device 0000:3d:01.6 cannot be used 00:34:32.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:32.970 EAL: Requested device 0000:3d:01.7 cannot be used 00:34:32.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:32.970 EAL: Requested device 0000:3d:02.0 cannot be used 00:34:32.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:32.970 EAL: Requested device 0000:3d:02.1 cannot be used 00:34:32.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:32.970 EAL: Requested device 0000:3d:02.2 cannot be used 00:34:32.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:32.970 EAL: Requested 
device 0000:3d:02.3 cannot be used 00:34:32.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:32.970 EAL: Requested device 0000:3d:02.4 cannot be used 00:34:32.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:32.970 EAL: Requested device 0000:3d:02.5 cannot be used 00:34:32.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:32.970 EAL: Requested device 0000:3d:02.6 cannot be used 00:34:32.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:32.970 EAL: Requested device 0000:3d:02.7 cannot be used 00:34:32.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:32.970 EAL: Requested device 0000:3f:01.0 cannot be used 00:34:32.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:32.970 EAL: Requested device 0000:3f:01.1 cannot be used 00:34:32.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:32.970 EAL: Requested device 0000:3f:01.2 cannot be used 00:34:32.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:32.970 EAL: Requested device 0000:3f:01.3 cannot be used 00:34:32.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:32.970 EAL: Requested device 0000:3f:01.4 cannot be used 00:34:32.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:32.970 EAL: Requested device 0000:3f:01.5 cannot be used 00:34:32.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:32.970 EAL: Requested device 0000:3f:01.6 cannot be used 00:34:32.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:32.970 EAL: Requested device 0000:3f:01.7 cannot be used 00:34:32.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:32.970 EAL: Requested device 0000:3f:02.0 cannot be used 00:34:32.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:32.970 EAL: Requested device 0000:3f:02.1 
cannot be used 00:34:32.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:32.970 EAL: Requested device 0000:3f:02.2 cannot be used 00:34:32.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:32.970 EAL: Requested device 0000:3f:02.3 cannot be used 00:34:32.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:32.970 EAL: Requested device 0000:3f:02.4 cannot be used 00:34:32.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:32.970 EAL: Requested device 0000:3f:02.5 cannot be used 00:34:32.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:32.970 EAL: Requested device 0000:3f:02.6 cannot be used 00:34:32.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:32.970 EAL: Requested device 0000:3f:02.7 cannot be used 00:34:32.970 [2024-07-23 04:29:36.490203] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:32.970 [2024-07-23 04:29:36.801945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:32.970 cpumask for '\''job0'\'' is too big 00:34:32.970 cpumask for '\''job1'\'' is too big 00:34:32.970 cpumask for '\''job2'\'' is too big 00:34:32.970 cpumask for '\''job3'\'' is too big 00:34:32.970 Running I/O for 2 seconds... 
00:34:32.970
00:34:32.970 Latency(us)
00:34:32.970 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:32.970 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:34:32.970 Malloc0 : 2.03 11730.18 11.46 0.00 0.00 21798.10 4063.23 34393.29
00:34:32.970 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:34:32.970 Malloc1 : 2.03 11719.07 11.44 0.00 0.00 21799.21 4902.09 34393.29
00:34:32.970 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:34:32.970 Malloc0 : 2.03 11708.64 11.43 0.00 0.00 21735.70 4010.80 30408.70
00:34:32.970 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:34:32.970 Malloc1 : 2.04 11697.70 11.42 0.00 0.00 21735.13 4823.45 30198.99
00:34:32.970 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:34:32.970 Malloc0 : 2.05 11749.42 11.47 0.00 0.00 21560.14 4010.80 26109.54
00:34:32.970 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:34:32.970 Malloc1 : 2.05 11738.46 11.46 0.00 0.00 21558.32 4849.66 26004.68
00:34:32.970 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:34:32.970 Malloc0 : 2.05 11728.09 11.45 0.00 0.00 21497.01 3984.59 22229.81
00:34:32.970 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:34:32.970 Malloc1 : 2.05 11717.19 11.44 0.00 0.00 21493.55 4771.02 22229.81
00:34:32.970 ===================================================================================================================
00:34:32.970 Total : 93788.73 91.59 0.00 0.00 21646.50 3984.59 34393.29'
00:34:32.970 04:29:41 bdevperf_config -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-07-23 04:29:36.242007] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
00:34:32.971 00:34:32.971 Latency(us) 00:34:32.971 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:32.971 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:34:32.971 Malloc0 : 2.03 11730.18 11.46 0.00 0.00 21798.10 4063.23 34393.29 00:34:32.971 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:34:32.971 Malloc1 : 2.03 11719.07 11.44 0.00 0.00 21799.21 4902.09 34393.29 00:34:32.971 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:34:32.971 Malloc0 : 2.03 11708.64 11.43 0.00 0.00 21735.70 4010.80 30408.70 00:34:32.971 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:34:32.971 Malloc1 : 2.04 11697.70 11.42 0.00 0.00 21735.13 4823.45 30198.99 00:34:32.971 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:34:32.971 Malloc0 : 2.05 11749.42 11.47 0.00 0.00 21560.14 4010.80 26109.54 00:34:32.971 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:34:32.971 Malloc1 : 2.05 11738.46 11.46 0.00 0.00 21558.32 4849.66 26004.68 00:34:32.971 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:34:32.971 Malloc0 : 2.05 11728.09 11.45 0.00 0.00 21497.01 3984.59 22229.81 00:34:32.971 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:34:32.971 Malloc1 : 2.05 11717.19 11.44 0.00 0.00 21493.55 4771.02 22229.81 00:34:32.971 =================================================================================================================== 00:34:32.971 Total : 93788.73 91.59 0.00 0.00 21646.50 3984.59 34393.29' 00:34:32.971 04:29:41 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:34:32.971 04:29:41 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-23 04:29:36.242007] Starting SPDK v24.09-pre 
00:34:32.971 00:34:32.971 Latency(us) 00:34:32.971 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:32.972 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:34:32.972 Malloc0 : 2.03 11730.18 11.46 0.00 0.00 21798.10 4063.23 34393.29 00:34:32.972 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:34:32.972 Malloc1 : 2.03 11719.07 11.44 0.00 0.00 21799.21 4902.09 34393.29 00:34:32.972 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:34:32.972 Malloc0 : 2.03 11708.64 11.43 0.00 0.00 21735.70 4010.80 30408.70 00:34:32.972 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:34:32.972 Malloc1 : 2.04 11697.70 11.42 0.00 0.00 21735.13 4823.45 30198.99 00:34:32.972 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:34:32.972 Malloc0 : 2.05 11749.42 11.47 0.00 0.00 21560.14 4010.80 26109.54 00:34:32.972 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:34:32.972 Malloc1 : 2.05 11738.46 11.46 0.00 0.00 21558.32 4849.66 26004.68 00:34:32.972 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:34:32.972 Malloc0 : 2.05 11728.09 11.45 0.00 0.00 21497.01 3984.59 22229.81 00:34:32.972 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:34:32.972 Malloc1 : 2.05 11717.19 11.44 0.00 0.00 21493.55 4771.02 22229.81 00:34:32.972 =================================================================================================================== 00:34:32.972 Total : 93788.73 91.59 0.00 0.00 21646.50 3984.59 34393.29' 00:34:32.972 04:29:41 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:34:32.972 04:29:41 bdevperf_config -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:34:32.972 04:29:41 bdevperf_config -- bdevperf/test_config.sh@44 -- 
# cleanup 00:34:32.972 04:29:41 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:34:32.972 04:29:41 bdevperf_config -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:34:32.972 00:34:32.972 real 0m21.815s 00:34:32.972 user 0m19.830s 00:34:32.972 sys 0m1.780s 00:34:32.972 04:29:41 bdevperf_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:32.972 04:29:41 bdevperf_config -- common/autotest_common.sh@10 -- # set +x 00:34:32.972 ************************************ 00:34:32.972 END TEST bdevperf_config 00:34:32.972 ************************************ 00:34:32.972 04:29:41 -- common/autotest_common.sh@1142 -- # return 0 00:34:32.972 04:29:41 -- spdk/autotest.sh@192 -- # uname -s 00:34:32.972 04:29:41 -- spdk/autotest.sh@192 -- # [[ Linux == Linux ]] 00:34:32.972 04:29:41 -- spdk/autotest.sh@193 -- # run_test reactor_set_interrupt /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh 00:34:32.972 04:29:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:32.972 04:29:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:32.972 04:29:41 -- common/autotest_common.sh@10 -- # set +x 00:34:33.233 ************************************ 00:34:33.233 START TEST reactor_set_interrupt 00:34:33.233 ************************************ 00:34:33.233 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh 00:34:33.233 * Looking for test storage... 
00:34:33.233 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:34:33.234 04:29:41 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/interrupt_common.sh 00:34:33.234 04:29:41 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh 00:34:33.234 04:29:41 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:34:33.234 04:29:41 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # testdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:34:33.234 04:29:41 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/../.. 00:34:33.234 04:29:41 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:34:33.234 04:29:41 reactor_set_interrupt -- interrupt/interrupt_common.sh@7 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/autotest_common.sh 00:34:33.234 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:34:33.234 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@34 -- # set -e 00:34:33.234 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:34:33.234 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@36 -- # shopt -s extglob 00:34:33.234 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:34:33.234 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/crypto-phy-autotest/spdk/../output ']' 00:34:33.234 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@44 -- # [[ -e 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh ]] 00:34:33.234 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=y 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:34:33.234 
04:29:41 reactor_set_interrupt -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@22 -- # CONFIG_CET=n 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=y 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@37 -- # CONFIG_CRYPTO=y 00:34:33.234 04:29:41 reactor_set_interrupt -- 
common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 
00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@70 -- # CONFIG_FC=n 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=y 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=y 
00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=y 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:34:33.234 04:29:41 reactor_set_interrupt -- common/build_config.sh@83 -- # CONFIG_URING=n 00:34:33.234 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:34:33.234 04:29:41 reactor_set_interrupt -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:34:33.234 04:29:41 reactor_set_interrupt -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:34:33.234 04:29:41 reactor_set_interrupt -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:34:33.234 04:29:41 reactor_set_interrupt -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:34:33.234 04:29:41 reactor_set_interrupt -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:34:33.234 04:29:41 reactor_set_interrupt -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:34:33.234 04:29:41 reactor_set_interrupt -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:34:33.234 04:29:41 reactor_set_interrupt -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:34:33.234 04:29:41 reactor_set_interrupt -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:34:33.234 04:29:41 reactor_set_interrupt -- 
common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:34:33.234 04:29:41 reactor_set_interrupt -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:34:33.234 04:29:41 reactor_set_interrupt -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:34:33.234 04:29:41 reactor_set_interrupt -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:34:33.234 04:29:41 reactor_set_interrupt -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/config.h ]] 00:34:33.234 04:29:41 reactor_set_interrupt -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:34:33.235 #define SPDK_CONFIG_H 00:34:33.235 #define SPDK_CONFIG_APPS 1 00:34:33.235 #define SPDK_CONFIG_ARCH native 00:34:33.235 #define SPDK_CONFIG_ASAN 1 00:34:33.235 #undef SPDK_CONFIG_AVAHI 00:34:33.235 #undef SPDK_CONFIG_CET 00:34:33.235 #define SPDK_CONFIG_COVERAGE 1 00:34:33.235 #define SPDK_CONFIG_CROSS_PREFIX 00:34:33.235 #define SPDK_CONFIG_CRYPTO 1 00:34:33.235 #define SPDK_CONFIG_CRYPTO_MLX5 1 00:34:33.235 #undef SPDK_CONFIG_CUSTOMOCF 00:34:33.235 #undef SPDK_CONFIG_DAOS 00:34:33.235 #define SPDK_CONFIG_DAOS_DIR 00:34:33.235 #define SPDK_CONFIG_DEBUG 1 00:34:33.235 #define SPDK_CONFIG_DPDK_COMPRESSDEV 1 00:34:33.235 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:34:33.235 #define SPDK_CONFIG_DPDK_INC_DIR 00:34:33.235 #define SPDK_CONFIG_DPDK_LIB_DIR 00:34:33.235 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:34:33.235 #undef SPDK_CONFIG_DPDK_UADK 00:34:33.235 #define SPDK_CONFIG_ENV /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:34:33.235 #define SPDK_CONFIG_EXAMPLES 1 00:34:33.235 #undef SPDK_CONFIG_FC 00:34:33.235 #define SPDK_CONFIG_FC_PATH 00:34:33.235 #define SPDK_CONFIG_FIO_PLUGIN 1 00:34:33.235 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:34:33.235 #undef SPDK_CONFIG_FUSE 00:34:33.235 #undef SPDK_CONFIG_FUZZER 00:34:33.235 #define 
SPDK_CONFIG_FUZZER_LIB 00:34:33.235 #undef SPDK_CONFIG_GOLANG 00:34:33.235 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:34:33.235 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:34:33.235 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:34:33.235 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:34:33.235 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:34:33.235 #undef SPDK_CONFIG_HAVE_LIBBSD 00:34:33.235 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:34:33.235 #define SPDK_CONFIG_IDXD 1 00:34:33.235 #define SPDK_CONFIG_IDXD_KERNEL 1 00:34:33.235 #define SPDK_CONFIG_IPSEC_MB 1 00:34:33.235 #define SPDK_CONFIG_IPSEC_MB_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:34:33.235 #define SPDK_CONFIG_ISAL 1 00:34:33.235 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:34:33.235 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:34:33.235 #define SPDK_CONFIG_LIBDIR 00:34:33.235 #undef SPDK_CONFIG_LTO 00:34:33.235 #define SPDK_CONFIG_MAX_LCORES 128 00:34:33.235 #define SPDK_CONFIG_NVME_CUSE 1 00:34:33.235 #undef SPDK_CONFIG_OCF 00:34:33.235 #define SPDK_CONFIG_OCF_PATH 00:34:33.235 #define SPDK_CONFIG_OPENSSL_PATH 00:34:33.235 #undef SPDK_CONFIG_PGO_CAPTURE 00:34:33.235 #define SPDK_CONFIG_PGO_DIR 00:34:33.235 #undef SPDK_CONFIG_PGO_USE 00:34:33.235 #define SPDK_CONFIG_PREFIX /usr/local 00:34:33.235 #undef SPDK_CONFIG_RAID5F 00:34:33.235 #undef SPDK_CONFIG_RBD 00:34:33.235 #define SPDK_CONFIG_RDMA 1 00:34:33.235 #define SPDK_CONFIG_RDMA_PROV verbs 00:34:33.235 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:34:33.235 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:34:33.235 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:34:33.235 #define SPDK_CONFIG_SHARED 1 00:34:33.235 #undef SPDK_CONFIG_SMA 00:34:33.235 #define SPDK_CONFIG_TESTS 1 00:34:33.235 #undef SPDK_CONFIG_TSAN 00:34:33.235 #define SPDK_CONFIG_UBLK 1 00:34:33.235 #define SPDK_CONFIG_UBSAN 1 00:34:33.235 #undef SPDK_CONFIG_UNIT_TESTS 00:34:33.235 #undef SPDK_CONFIG_URING 00:34:33.235 #define SPDK_CONFIG_URING_PATH 00:34:33.235 #undef SPDK_CONFIG_URING_ZNS 
00:34:33.235 #undef SPDK_CONFIG_USDT 00:34:33.235 #define SPDK_CONFIG_VBDEV_COMPRESS 1 00:34:33.235 #define SPDK_CONFIG_VBDEV_COMPRESS_MLX5 1 00:34:33.235 #undef SPDK_CONFIG_VFIO_USER 00:34:33.235 #define SPDK_CONFIG_VFIO_USER_DIR 00:34:33.235 #define SPDK_CONFIG_VHOST 1 00:34:33.235 #define SPDK_CONFIG_VIRTIO 1 00:34:33.235 #undef SPDK_CONFIG_VTUNE 00:34:33.235 #define SPDK_CONFIG_VTUNE_DIR 00:34:33.235 #define SPDK_CONFIG_WERROR 1 00:34:33.235 #define SPDK_CONFIG_WPDK_DIR 00:34:33.235 #undef SPDK_CONFIG_XNVME 00:34:33.235 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:34:33.235 04:29:41 reactor_set_interrupt -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:34:33.235 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:34:33.235 04:29:41 reactor_set_interrupt -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:33.235 04:29:41 reactor_set_interrupt -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:33.235 04:29:41 reactor_set_interrupt -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:33.235 04:29:41 reactor_set_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.235 04:29:41 reactor_set_interrupt -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.235 04:29:41 reactor_set_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.235 04:29:41 reactor_set_interrupt -- paths/export.sh@5 -- # export PATH 00:34:33.235 04:29:41 reactor_set_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.235 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:34:33.235 04:29:41 reactor_set_interrupt -- pm/common@6 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:34:33.235 04:29:41 reactor_set_interrupt -- pm/common@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:34:33.235 04:29:41 reactor_set_interrupt -- pm/common@6 -- # 
_pmdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:34:33.235 04:29:41 reactor_set_interrupt -- pm/common@7 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/../../../ 00:34:33.235 04:29:41 reactor_set_interrupt -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:34:33.235 04:29:41 reactor_set_interrupt -- pm/common@64 -- # TEST_TAG=N/A 00:34:33.235 04:29:41 reactor_set_interrupt -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/crypto-phy-autotest/spdk/.run_test_name 00:34:33.235 04:29:41 reactor_set_interrupt -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power 00:34:33.235 04:29:41 reactor_set_interrupt -- pm/common@68 -- # uname -s 00:34:33.235 04:29:41 reactor_set_interrupt -- pm/common@68 -- # PM_OS=Linux 00:34:33.235 04:29:41 reactor_set_interrupt -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:34:33.235 04:29:41 reactor_set_interrupt -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:34:33.235 04:29:41 reactor_set_interrupt -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:34:33.235 04:29:41 reactor_set_interrupt -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:34:33.235 04:29:41 reactor_set_interrupt -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:34:33.235 04:29:41 reactor_set_interrupt -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:34:33.235 04:29:41 reactor_set_interrupt -- pm/common@76 -- # SUDO[0]= 00:34:33.235 04:29:41 reactor_set_interrupt -- pm/common@76 -- # SUDO[1]='sudo -E' 00:34:33.235 04:29:41 reactor_set_interrupt -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:34:33.235 04:29:41 reactor_set_interrupt -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:34:33.235 04:29:41 reactor_set_interrupt -- pm/common@81 -- # [[ Linux == Linux ]] 00:34:33.235 04:29:41 reactor_set_interrupt -- pm/common@81 -- # [[ 
............................... != QEMU ]] 00:34:33.235 04:29:41 reactor_set_interrupt -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:34:33.235 04:29:41 reactor_set_interrupt -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:34:33.235 04:29:41 reactor_set_interrupt -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:34:33.235 04:29:41 reactor_set_interrupt -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power ]] 00:34:33.235 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@58 -- # : 1 00:34:33.235 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:34:33.235 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@62 -- # : 0 00:34:33.235 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:34:33.235 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@64 -- # : 0 00:34:33.235 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:34:33.235 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@66 -- # : 1 00:34:33.235 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:34:33.235 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@68 -- # : 0 00:34:33.235 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:34:33.235 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@70 -- # : 00:34:33.235 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:34:33.235 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@72 -- # : 0 00:34:33.235 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:34:33.235 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@74 -- # : 1 00:34:33.235 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@75 -- # export 
SPDK_TEST_ISAL 00:34:33.235 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@76 -- # : 0 00:34:33.235 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:34:33.235 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@78 -- # : 0 00:34:33.235 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:34:33.235 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@80 -- # : 0 00:34:33.235 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:34:33.235 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@82 -- # : 0 00:34:33.235 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@84 -- # : 0 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@86 -- # : 0 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@88 -- # : 0 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@90 -- # : 0 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@92 -- # : 0 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@94 -- # : 0 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:34:33.236 04:29:41 reactor_set_interrupt -- 
common/autotest_common.sh@96 -- # : 0 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@98 -- # : 0 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@100 -- # : 0 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@102 -- # : rdma 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@104 -- # : 0 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@106 -- # : 0 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@108 -- # : 1 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@110 -- # : 0 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@112 -- # : 0 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@114 -- # : 0 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@116 -- # : 0 00:34:33.236 
04:29:41 reactor_set_interrupt -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@118 -- # : 1 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@120 -- # : 1 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@122 -- # : 1 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@124 -- # : 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@126 -- # : 0 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@128 -- # : 1 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@130 -- # : 0 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@132 -- # : 0 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@134 -- # : 0 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@136 -- # : 0 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@137 -- # export 
SPDK_TEST_OPAL 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@138 -- # : 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@140 -- # : true 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@142 -- # : 0 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@144 -- # : 0 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@146 -- # : 0 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@148 -- # : 0 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@150 -- # : 0 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@152 -- # : 0 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@154 -- # : 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@156 -- # : 0 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:34:33.236 04:29:41 reactor_set_interrupt -- 
common/autotest_common.sh@158 -- # : 0 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@160 -- # : 0 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@162 -- # : 0 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@164 -- # : 0 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@167 -- # : 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@169 -- # : 0 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@171 -- # : 0 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:34:33.236 04:29:41 reactor_set_interrupt -- 
common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:34:33.236 04:29:41 reactor_set_interrupt 
-- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:34:33.236 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:34:33.237 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:34:33.237 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:34:33.237 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:34:33.237 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:34:33.237 04:29:41 reactor_set_interrupt -- 
common/autotest_common.sh@200 -- # cat 00:34:33.237 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:34:33.237 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:34:33.237 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:34:33.237 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:34:33.237 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:34:33.237 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:34:33.237 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:34:33.237 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:34:33.237 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:34:33.237 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:34:33.237 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:34:33.237 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:34:33.237 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:34:33.237 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 
00:34:33.237 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:34:33.237 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:34:33.237 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:34:33.237 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:34:33.237 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:34:33.237 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:34:33.237 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@263 -- # export valgrind= 00:34:33.237 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@263 -- # valgrind= 00:34:33.237 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@269 -- # uname -s 00:34:33.237 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:34:33.237 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:34:33.237 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:34:33.237 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:34:33.237 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@272 -- # [[ 1 -eq 1 ]] 00:34:33.237 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@276 -- # export HUGE_EVEN_ALLOC=yes 00:34:33.237 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@276 -- # HUGE_EVEN_ALLOC=yes 00:34:33.237 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@279 -- # MAKE=make 00:34:33.237 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j112 
00:34:33.237 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:34:33.237 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:34:33.237 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:34:33.237 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@299 -- # TEST_MODE= 00:34:33.237 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@318 -- # [[ -z 2839980 ]] 00:34:33.237 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@318 -- # kill -0 2839980 00:34:33.237 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:34:33.237 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:34:33.237 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:34:33.237 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@331 -- # local mount target_dir 00:34:33.237 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:34:33.237 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:34:33.237 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:34:33.237 04:29:41 reactor_set_interrupt -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:34:33.237 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.tjfpjA 00:34:33.237 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:34:33.237 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:34:33.237 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:34:33.237 04:29:42 
reactor_set_interrupt -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt /tmp/spdk.tjfpjA/tests/interrupt /tmp/spdk.tjfpjA 00:34:33.237 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:34:33.237 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:34:33.237 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@327 -- # df -T 00:34:33.237 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=954302464 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=4330127360 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@361 -- # 
mounts["$mount"]=spdk_root 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=54950191104 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=61742305280 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=6792114176 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=30866341888 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871150592 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=4808704 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=12338671616 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=12348461056 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=9789440 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 
00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=30870290432 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871154688 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=864256 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=6174224384 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=6174228480 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:34:33.498 * Looking for test storage... 
00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@368 -- # local target_space new_size 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@372 -- # mount=/ 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@374 -- # target_space=54950191104 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@381 -- # new_size=9006706688 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 
00:34:33.498 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@389 -- # return 0 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@1682 -- # set -o errtrace 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@1687 -- # true 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@1689 -- # xtrace_fd 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@27 -- # exec 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@29 -- # exec 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@31 -- # xtrace_restore 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@18 -- # set -x 00:34:33.498 04:29:42 reactor_set_interrupt -- interrupt/interrupt_common.sh@8 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/common.sh 00:34:33.498 04:29:42 reactor_set_interrupt -- interrupt/interrupt_common.sh@10 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:34:33.498 04:29:42 reactor_set_interrupt -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:34:33.498 04:29:42 reactor_set_interrupt -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:34:33.498 04:29:42 reactor_set_interrupt -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:34:33.498 04:29:42 reactor_set_interrupt -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:34:33.498 04:29:42 reactor_set_interrupt -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:34:33.498 04:29:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:34:33.498 04:29:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:34:33.498 04:29:42 
reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:34:33.498 04:29:42 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:33.498 04:29:42 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:34:33.498 04:29:42 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=2840025 00:34:33.498 04:29:42 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:33.498 04:29:42 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 2840025 /var/tmp/spdk.sock 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@829 -- # '[' -z 2840025 ']' 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:33.498 04:29:42 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:33.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:33.498 04:29:42 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:33.498 [2024-07-23 04:29:42.126950] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:34:33.498 [2024-07-23 04:29:42.127074] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2840025 ] 00:34:33.498 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.499 EAL: Requested device 0000:3d:01.0 cannot be used 00:34:33.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.499 EAL: Requested device 0000:3d:01.1 cannot be used 00:34:33.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.499 EAL: Requested device 0000:3d:01.2 cannot be used 00:34:33.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.499 EAL: Requested device 0000:3d:01.3 cannot be used 00:34:33.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.499 EAL: Requested device 0000:3d:01.4 cannot be used 00:34:33.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.499 EAL: Requested device 0000:3d:01.5 cannot be used 00:34:33.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.499 EAL: Requested device 0000:3d:01.6 cannot be used 00:34:33.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.499 EAL: Requested device 0000:3d:01.7 cannot be used 00:34:33.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.499 EAL: Requested device 0000:3d:02.0 cannot be used 00:34:33.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.499 EAL: Requested device 0000:3d:02.1 cannot be used 00:34:33.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.499 EAL: Requested device 0000:3d:02.2 cannot be used 00:34:33.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.499 EAL: Requested device 0000:3d:02.3 cannot 
be used 00:34:33.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.499 EAL: Requested device 0000:3d:02.4 cannot be used 00:34:33.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.499 EAL: Requested device 0000:3d:02.5 cannot be used 00:34:33.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.499 EAL: Requested device 0000:3d:02.6 cannot be used 00:34:33.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.499 EAL: Requested device 0000:3d:02.7 cannot be used 00:34:33.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.499 EAL: Requested device 0000:3f:01.0 cannot be used 00:34:33.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.499 EAL: Requested device 0000:3f:01.1 cannot be used 00:34:33.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.499 EAL: Requested device 0000:3f:01.2 cannot be used 00:34:33.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.499 EAL: Requested device 0000:3f:01.3 cannot be used 00:34:33.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.499 EAL: Requested device 0000:3f:01.4 cannot be used 00:34:33.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.499 EAL: Requested device 0000:3f:01.5 cannot be used 00:34:33.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.499 EAL: Requested device 0000:3f:01.6 cannot be used 00:34:33.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.499 EAL: Requested device 0000:3f:01.7 cannot be used 00:34:33.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.499 EAL: Requested device 0000:3f:02.0 cannot be used 00:34:33.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.499 EAL: Requested device 0000:3f:02.1 cannot be used 00:34:33.499 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.499 EAL: Requested device 0000:3f:02.2 cannot be used 00:34:33.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.499 EAL: Requested device 0000:3f:02.3 cannot be used 00:34:33.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.499 EAL: Requested device 0000:3f:02.4 cannot be used 00:34:33.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.499 EAL: Requested device 0000:3f:02.5 cannot be used 00:34:33.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.499 EAL: Requested device 0000:3f:02.6 cannot be used 00:34:33.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.499 EAL: Requested device 0000:3f:02.7 cannot be used 00:34:33.759 [2024-07-23 04:29:42.354840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:34.018 [2024-07-23 04:29:42.640249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:34.018 [2024-07-23 04:29:42.640323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:34.018 [2024-07-23 04:29:42.640323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:34.587 [2024-07-23 04:29:43.119467] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:34:34.587 04:29:43 reactor_set_interrupt -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:34.587 04:29:43 reactor_set_interrupt -- common/autotest_common.sh@862 -- # return 0 00:34:34.587 04:29:43 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:34:34.587 04:29:43 reactor_set_interrupt -- interrupt/common.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:34:34.846 Malloc0 00:34:34.846 Malloc1 00:34:34.846 Malloc2 00:34:34.846 04:29:43 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:34:34.846 04:29:43 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:34:34.846 04:29:43 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:34:34.846 04:29:43 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:34:34.846 5000+0 records in 00:34:34.846 5000+0 records out 00:34:34.846 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0269488 s, 380 MB/s 00:34:34.846 04:29:43 reactor_set_interrupt -- interrupt/common.sh@77 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:34:35.104 AIO0 00:34:35.104 04:29:43 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 2840025 00:34:35.104 04:29:43 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 2840025 without_thd 00:34:35.104 04:29:43 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=2840025 00:34:35.104 04:29:43 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:34:35.104 04:29:43 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 
00:34:35.104 04:29:43 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:34:35.104 04:29:43 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:34:35.104 04:29:43 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:34:35.104 04:29:43 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:34:35.104 04:29:43 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:34:35.104 04:29:43 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:34:35.104 04:29:43 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:34:35.363 04:29:44 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:34:35.363 04:29:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:34:35.363 04:29:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:34:35.363 04:29:44 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:34:35.363 04:29:44 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:34:35.363 04:29:44 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:34:35.363 04:29:44 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:34:35.363 04:29:44 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:34:35.363 04:29:44 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:34:35.622 04:29:44 reactor_set_interrupt -- interrupt/common.sh@62 -- # 
echo '' 00:34:35.622 04:29:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:34:35.622 04:29:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:34:35.622 spdk_thread ids are 1 on reactor0. 00:34:35.622 04:29:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:34:35.622 04:29:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 2840025 0 00:34:35.622 04:29:44 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2840025 0 idle 00:34:35.622 04:29:44 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2840025 00:34:35.622 04:29:44 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:35.622 04:29:44 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:35.622 04:29:44 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:34:35.622 04:29:44 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:34:35.622 04:29:44 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:35.622 04:29:44 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:35.622 04:29:44 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:35.622 04:29:44 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2840025 -w 256 00:34:35.622 04:29:44 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:34:35.882 04:29:44 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2840025 root 20 0 20.1t 203392 34944 S 0.0 0.3 0:01.25 reactor_0' 00:34:35.882 04:29:44 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2840025 root 20 0 20.1t 203392 34944 S 0.0 0.3 0:01.25 reactor_0 00:34:35.882 04:29:44 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:35.882 04:29:44 reactor_set_interrupt -- 
interrupt/common.sh@25 -- # awk '{print $9}' 00:34:35.882 04:29:44 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:34:35.882 04:29:44 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:34:35.882 04:29:44 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:34:35.882 04:29:44 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:34:35.882 04:29:44 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:34:35.882 04:29:44 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:35.882 04:29:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:34:35.882 04:29:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 2840025 1 00:34:35.882 04:29:44 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2840025 1 idle 00:34:35.882 04:29:44 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2840025 00:34:35.882 04:29:44 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:35.882 04:29:44 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:35.882 04:29:44 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:34:35.882 04:29:44 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:34:35.882 04:29:44 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:35.882 04:29:44 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:35.882 04:29:44 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:35.882 04:29:44 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2840025 -w 256 00:34:35.882 04:29:44 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:34:35.882 04:29:44 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2840033 root 20 0 20.1t 203392 34944 S 0.0 0.3 0:00.00 reactor_1' 
00:34:35.882 04:29:44 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2840033 root 20 0 20.1t 203392 34944 S 0.0 0.3 0:00.00 reactor_1 00:34:35.882 04:29:44 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:35.882 04:29:44 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:34:35.882 04:29:44 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:34:35.882 04:29:44 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:34:35.882 04:29:44 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:34:35.882 04:29:44 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:34:35.882 04:29:44 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:34:35.882 04:29:44 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:35.882 04:29:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:34:35.882 04:29:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 2840025 2 00:34:35.882 04:29:44 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2840025 2 idle 00:34:35.882 04:29:44 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2840025 00:34:35.882 04:29:44 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:34:35.882 04:29:44 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:35.882 04:29:44 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:34:35.882 04:29:44 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:34:35.882 04:29:44 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:35.882 04:29:44 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:35.882 04:29:44 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:36.153 04:29:44 reactor_set_interrupt -- 
interrupt/common.sh@24 -- # top -bHn 1 -p 2840025 -w 256 00:34:36.153 04:29:44 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:34:36.153 04:29:44 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2840034 root 20 0 20.1t 203392 34944 S 0.0 0.3 0:00.00 reactor_2' 00:34:36.153 04:29:44 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2840034 root 20 0 20.1t 203392 34944 S 0.0 0.3 0:00.00 reactor_2 00:34:36.153 04:29:44 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:36.153 04:29:44 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:34:36.153 04:29:44 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:34:36.153 04:29:44 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:34:36.153 04:29:44 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:34:36.153 04:29:44 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:34:36.153 04:29:44 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:34:36.153 04:29:44 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:36.153 04:29:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:34:36.153 04:29:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 00:34:36.153 04:29:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@36 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:34:36.436 [2024-07-23 04:29:45.057730] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:34:36.436 04:29:45 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:34:36.694 [2024-07-23 04:29:45.285306] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:34:36.694 [2024-07-23 04:29:45.289345] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:34:36.694 04:29:45 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:34:36.952 [2024-07-23 04:29:45.513154] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:34:36.952 [2024-07-23 04:29:45.513315] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:34:36.952 04:29:45 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:34:36.952 04:29:45 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 2840025 0 00:34:36.952 04:29:45 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 2840025 0 busy 00:34:36.952 04:29:45 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2840025 00:34:36.952 04:29:45 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:36.952 04:29:45 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:36.952 04:29:45 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:34:36.952 04:29:45 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:36.952 04:29:45 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:36.952 04:29:45 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:36.952 04:29:45 reactor_set_interrupt 
-- interrupt/common.sh@24 -- # top -bHn 1 -p 2840025 -w 256 00:34:36.952 04:29:45 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:34:36.952 04:29:45 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2840025 root 20 0 20.1t 206080 34944 R 99.9 0.3 0:01.66 reactor_0' 00:34:36.952 04:29:45 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2840025 root 20 0 20.1t 206080 34944 R 99.9 0.3 0:01.66 reactor_0 00:34:36.952 04:29:45 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:36.952 04:29:45 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:34:36.952 04:29:45 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:34:36.952 04:29:45 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:34:36.952 04:29:45 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:34:36.952 04:29:45 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:34:36.952 04:29:45 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:34:36.952 04:29:45 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:36.952 04:29:45 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:34:36.952 04:29:45 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 2840025 2 00:34:36.952 04:29:45 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 2840025 2 busy 00:34:36.952 04:29:45 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2840025 00:34:36.952 04:29:45 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:34:36.952 04:29:45 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:36.952 04:29:45 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:34:36.952 04:29:45 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:36.952 04:29:45 
reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:36.952 04:29:45 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:36.952 04:29:45 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:34:36.952 04:29:45 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2840025 -w 256 00:34:37.210 04:29:45 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2840034 root 20 0 20.1t 206080 34944 R 99.9 0.3 0:00.36 reactor_2' 00:34:37.210 04:29:45 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2840034 root 20 0 20.1t 206080 34944 R 99.9 0.3 0:00.36 reactor_2 00:34:37.210 04:29:45 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:37.210 04:29:45 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:34:37.210 04:29:45 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:34:37.210 04:29:45 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:34:37.210 04:29:45 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:34:37.210 04:29:45 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:34:37.210 04:29:45 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:34:37.210 04:29:45 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:37.210 04:29:45 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:34:37.469 [2024-07-23 04:29:46.101166] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 
00:34:37.469 [2024-07-23 04:29:46.101281] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:34:37.469 04:29:46 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:34:37.469 04:29:46 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 2840025 2 00:34:37.469 04:29:46 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2840025 2 idle 00:34:37.469 04:29:46 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2840025 00:34:37.469 04:29:46 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:34:37.469 04:29:46 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:37.469 04:29:46 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:34:37.469 04:29:46 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:34:37.469 04:29:46 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:37.469 04:29:46 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:37.469 04:29:46 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:37.469 04:29:46 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2840025 -w 256 00:34:37.469 04:29:46 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:34:37.727 04:29:46 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2840034 root 20 0 20.1t 206080 34944 S 0.0 0.3 0:00.58 reactor_2' 00:34:37.727 04:29:46 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:34:37.727 04:29:46 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2840034 root 20 0 20.1t 206080 34944 S 0.0 0.3 0:00.58 reactor_2 00:34:37.727 04:29:46 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:37.727 04:29:46 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:34:37.727 04:29:46 
reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:34:37.727 04:29:46 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:34:37.727 04:29:46 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:34:37.727 04:29:46 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:34:37.727 04:29:46 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:37.727 04:29:46 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:34:37.727 [2024-07-23 04:29:46.505183] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:34:37.727 [2024-07-23 04:29:46.505336] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:34:37.986 04:29:46 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:34:37.986 04:29:46 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:34:37.986 04:29:46 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@66 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:34:37.986 [2024-07-23 04:29:46.725917] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:34:37.986 04:29:46 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 2840025 0 00:34:37.986 04:29:46 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2840025 0 idle 00:34:37.986 04:29:46 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2840025 00:34:37.986 04:29:46 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:37.986 04:29:46 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:37.986 04:29:46 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:34:37.986 04:29:46 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:34:37.986 04:29:46 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:37.986 04:29:46 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:37.986 04:29:46 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:37.986 04:29:46 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2840025 -w 256 00:34:37.986 04:29:46 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:34:38.244 04:29:46 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2840025 root 20 0 20.1t 206080 34944 S 6.7 0.3 0:02.47 reactor_0' 00:34:38.244 04:29:46 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2840025 root 20 0 20.1t 206080 34944 S 6.7 0.3 0:02.47 reactor_0 00:34:38.244 04:29:46 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:38.244 04:29:46 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:34:38.244 04:29:46 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=6.7 00:34:38.244 04:29:46 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=6 00:34:38.244 04:29:46 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:34:38.244 04:29:46 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = 
\i\d\l\e ]] 00:34:38.244 04:29:46 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 6 -gt 30 ]] 00:34:38.244 04:29:46 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:38.244 04:29:46 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:34:38.244 04:29:46 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:34:38.244 04:29:46 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:34:38.244 04:29:46 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 2840025 00:34:38.244 04:29:46 reactor_set_interrupt -- common/autotest_common.sh@948 -- # '[' -z 2840025 ']' 00:34:38.244 04:29:46 reactor_set_interrupt -- common/autotest_common.sh@952 -- # kill -0 2840025 00:34:38.244 04:29:46 reactor_set_interrupt -- common/autotest_common.sh@953 -- # uname 00:34:38.244 04:29:46 reactor_set_interrupt -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:38.244 04:29:46 reactor_set_interrupt -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2840025 00:34:38.244 04:29:46 reactor_set_interrupt -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:38.244 04:29:46 reactor_set_interrupt -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:38.244 04:29:46 reactor_set_interrupt -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2840025' 00:34:38.244 killing process with pid 2840025 00:34:38.244 04:29:46 reactor_set_interrupt -- common/autotest_common.sh@967 -- # kill 2840025 00:34:38.244 04:29:46 reactor_set_interrupt -- common/autotest_common.sh@972 -- # wait 2840025 00:34:40.775 04:29:49 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:34:40.775 04:29:49 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile 00:34:40.775 04:29:49 reactor_set_interrupt -- 
interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:34:40.775 04:29:49 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:40.775 04:29:49 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:34:40.775 04:29:49 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=2841159 00:34:40.775 04:29:49 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:40.775 04:29:49 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:34:40.775 04:29:49 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 2841159 /var/tmp/spdk.sock 00:34:40.775 04:29:49 reactor_set_interrupt -- common/autotest_common.sh@829 -- # '[' -z 2841159 ']' 00:34:40.775 04:29:49 reactor_set_interrupt -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:40.775 04:29:49 reactor_set_interrupt -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:40.775 04:29:49 reactor_set_interrupt -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:40.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:40.775 04:29:49 reactor_set_interrupt -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:40.775 04:29:49 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:40.775 [2024-07-23 04:29:49.086977] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:34:40.775 [2024-07-23 04:29:49.087099] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2841159 ] 00:34:40.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.775 EAL: Requested device 0000:3d:01.0 cannot be used 00:34:40.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.775 EAL: Requested device 0000:3d:01.1 cannot be used 00:34:40.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.775 EAL: Requested device 0000:3d:01.2 cannot be used 00:34:40.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.776 EAL: Requested device 0000:3d:01.3 cannot be used 00:34:40.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.776 EAL: Requested device 0000:3d:01.4 cannot be used 00:34:40.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.776 EAL: Requested device 0000:3d:01.5 cannot be used 00:34:40.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.776 EAL: Requested device 0000:3d:01.6 cannot be used 00:34:40.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.776 EAL: Requested device 0000:3d:01.7 cannot be used 00:34:40.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.776 EAL: Requested device 0000:3d:02.0 cannot be used 00:34:40.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.776 EAL: Requested device 0000:3d:02.1 cannot be used 00:34:40.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.776 EAL: Requested device 0000:3d:02.2 cannot be used 00:34:40.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.776 EAL: Requested device 0000:3d:02.3 cannot 
be used 00:34:40.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.776 EAL: Requested device 0000:3d:02.4 cannot be used 00:34:40.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.776 EAL: Requested device 0000:3d:02.5 cannot be used 00:34:40.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.776 EAL: Requested device 0000:3d:02.6 cannot be used 00:34:40.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.776 EAL: Requested device 0000:3d:02.7 cannot be used 00:34:40.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.776 EAL: Requested device 0000:3f:01.0 cannot be used 00:34:40.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.776 EAL: Requested device 0000:3f:01.1 cannot be used 00:34:40.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.776 EAL: Requested device 0000:3f:01.2 cannot be used 00:34:40.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.776 EAL: Requested device 0000:3f:01.3 cannot be used 00:34:40.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.776 EAL: Requested device 0000:3f:01.4 cannot be used 00:34:40.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.776 EAL: Requested device 0000:3f:01.5 cannot be used 00:34:40.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.776 EAL: Requested device 0000:3f:01.6 cannot be used 00:34:40.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.776 EAL: Requested device 0000:3f:01.7 cannot be used 00:34:40.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.776 EAL: Requested device 0000:3f:02.0 cannot be used 00:34:40.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.776 EAL: Requested device 0000:3f:02.1 cannot be used 00:34:40.776 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.776 EAL: Requested device 0000:3f:02.2 cannot be used 00:34:40.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.776 EAL: Requested device 0000:3f:02.3 cannot be used 00:34:40.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.776 EAL: Requested device 0000:3f:02.4 cannot be used 00:34:40.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.776 EAL: Requested device 0000:3f:02.5 cannot be used 00:34:40.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.776 EAL: Requested device 0000:3f:02.6 cannot be used 00:34:40.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.776 EAL: Requested device 0000:3f:02.7 cannot be used 00:34:40.776 [2024-07-23 04:29:49.312041] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:41.034 [2024-07-23 04:29:49.601494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:41.034 [2024-07-23 04:29:49.601564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:41.034 [2024-07-23 04:29:49.601568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:41.601 [2024-07-23 04:29:50.084125] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:34:41.601 04:29:50 reactor_set_interrupt -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:41.601 04:29:50 reactor_set_interrupt -- common/autotest_common.sh@862 -- # return 0 00:34:41.601 04:29:50 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:34:41.601 04:29:50 reactor_set_interrupt -- interrupt/common.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:34:41.860 Malloc0 00:34:41.860 Malloc1 00:34:41.860 Malloc2 00:34:41.860 04:29:50 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:34:41.860 04:29:50 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:34:41.860 04:29:50 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:34:41.860 04:29:50 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:34:41.860 5000+0 records in 00:34:41.860 5000+0 records out 00:34:41.860 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0259241 s, 395 MB/s 00:34:41.860 04:29:50 reactor_set_interrupt -- interrupt/common.sh@77 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:34:42.118 AIO0 00:34:42.118 04:29:50 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 2841159 00:34:42.118 04:29:50 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 2841159 00:34:42.118 04:29:50 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=2841159 00:34:42.118 04:29:50 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:34:42.118 04:29:50 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:34:42.118 04:29:50 
reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:34:42.118 04:29:50 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:34:42.118 04:29:50 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:34:42.118 04:29:50 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:34:42.118 04:29:50 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:34:42.118 04:29:50 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:34:42.118 04:29:50 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:34:42.377 04:29:51 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:34:42.377 04:29:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:34:42.377 04:29:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:34:42.377 04:29:51 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:34:42.377 04:29:51 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:34:42.377 04:29:51 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:34:42.377 04:29:51 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:34:42.377 04:29:51 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:34:42.377 04:29:51 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:34:42.634 04:29:51 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo '' 00:34:42.634 
04:29:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:34:42.634 04:29:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:34:42.634 spdk_thread ids are 1 on reactor0. 00:34:42.634 04:29:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:34:42.634 04:29:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 2841159 0 00:34:42.634 04:29:51 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2841159 0 idle 00:34:42.634 04:29:51 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2841159 00:34:42.634 04:29:51 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:42.634 04:29:51 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:42.634 04:29:51 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:34:42.634 04:29:51 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:34:42.634 04:29:51 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:42.634 04:29:51 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:42.634 04:29:51 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:42.634 04:29:51 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2841159 -w 256 00:34:42.634 04:29:51 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:34:42.893 04:29:51 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2841159 root 20 0 20.1t 204288 35840 S 0.0 0.3 0:01.25 reactor_0' 00:34:42.893 04:29:51 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2841159 root 20 0 20.1t 204288 35840 S 0.0 0.3 0:01.25 reactor_0 00:34:42.893 04:29:51 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:42.893 04:29:51 reactor_set_interrupt -- interrupt/common.sh@25 -- # 
awk '{print $9}' 00:34:42.893 04:29:51 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:34:42.893 04:29:51 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:34:42.893 04:29:51 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:34:42.893 04:29:51 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:34:42.893 04:29:51 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:34:42.893 04:29:51 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:42.893 04:29:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:34:42.893 04:29:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 2841159 1 00:34:42.893 04:29:51 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2841159 1 idle 00:34:42.893 04:29:51 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2841159 00:34:42.893 04:29:51 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:42.893 04:29:51 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:42.893 04:29:51 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:34:42.894 04:29:51 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:34:42.894 04:29:51 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:42.894 04:29:51 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:42.894 04:29:51 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:42.894 04:29:51 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2841159 -w 256 00:34:42.894 04:29:51 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:34:42.894 04:29:51 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2841162 root 20 0 20.1t 204288 35840 S 0.0 0.3 0:00.00 reactor_1' 00:34:42.894 04:29:51 
reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2841162 root 20 0 20.1t 204288 35840 S 0.0 0.3 0:00.00 reactor_1 00:34:42.894 04:29:51 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:42.894 04:29:51 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:34:42.894 04:29:51 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:34:42.894 04:29:51 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:34:42.894 04:29:51 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:34:42.894 04:29:51 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:34:42.894 04:29:51 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:34:42.894 04:29:51 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:42.894 04:29:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:34:42.894 04:29:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 2841159 2 00:34:42.894 04:29:51 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2841159 2 idle 00:34:42.894 04:29:51 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2841159 00:34:42.894 04:29:51 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:34:42.894 04:29:51 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:42.894 04:29:51 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:34:42.894 04:29:51 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:34:42.894 04:29:51 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:42.894 04:29:51 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:42.894 04:29:51 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:42.894 04:29:51 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 
1 -p 2841159 -w 256 00:34:42.894 04:29:51 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:34:43.152 04:29:51 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2841163 root 20 0 20.1t 204288 35840 S 0.0 0.3 0:00.00 reactor_2' 00:34:43.152 04:29:51 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2841163 root 20 0 20.1t 204288 35840 S 0.0 0.3 0:00.00 reactor_2 00:34:43.152 04:29:51 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:43.152 04:29:51 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:34:43.152 04:29:51 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:34:43.152 04:29:51 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:34:43.152 04:29:51 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:34:43.152 04:29:51 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:34:43.152 04:29:51 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:34:43.152 04:29:51 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:43.152 04:29:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:34:43.152 04:29:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:34:43.410 [2024-07-23 04:29:52.030537] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:34:43.410 [2024-07-23 04:29:52.030788] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 
00:34:43.410 [2024-07-23 04:29:52.031001] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:34:43.410 04:29:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:34:43.668 [2024-07-23 04:29:52.267077] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:34:43.668 [2024-07-23 04:29:52.267297] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:34:43.668 04:29:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:34:43.668 04:29:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 2841159 0 00:34:43.668 04:29:52 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 2841159 0 busy 00:34:43.668 04:29:52 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2841159 00:34:43.668 04:29:52 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:43.668 04:29:52 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:43.668 04:29:52 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:34:43.668 04:29:52 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:43.668 04:29:52 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:43.668 04:29:52 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:43.668 04:29:52 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2841159 -w 256 00:34:43.668 04:29:52 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:34:43.668 04:29:52 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2841159 root 20 0 20.1t 206976 35840 R 99.9 0.3 0:01.67 reactor_0' 00:34:43.926 04:29:52 reactor_set_interrupt -- 
interrupt/common.sh@25 -- # echo 2841159 root 20 0 20.1t 206976 35840 R 99.9 0.3 0:01.67 reactor_0 00:34:43.926 04:29:52 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:43.926 04:29:52 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:34:43.926 04:29:52 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:34:43.926 04:29:52 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:34:43.926 04:29:52 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:34:43.926 04:29:52 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:34:43.926 04:29:52 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:34:43.926 04:29:52 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:43.926 04:29:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:34:43.926 04:29:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 2841159 2 00:34:43.926 04:29:52 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 2841159 2 busy 00:34:43.926 04:29:52 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2841159 00:34:43.926 04:29:52 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:34:43.926 04:29:52 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:43.926 04:29:52 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:34:43.926 04:29:52 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:43.926 04:29:52 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:43.926 04:29:52 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:43.926 04:29:52 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2841159 -w 256 00:34:43.926 04:29:52 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:34:43.926 
04:29:52 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2841163 root 20 0 20.1t 206976 35840 R 99.9 0.3 0:00.35 reactor_2' 00:34:43.926 04:29:52 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:34:43.926 04:29:52 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2841163 root 20 0 20.1t 206976 35840 R 99.9 0.3 0:00.35 reactor_2 00:34:43.926 04:29:52 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:43.926 04:29:52 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:34:43.926 04:29:52 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:34:43.926 04:29:52 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:34:43.926 04:29:52 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:34:43.926 04:29:52 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:34:43.926 04:29:52 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:43.926 04:29:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:34:44.185 [2024-07-23 04:29:52.844732] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 
00:34:44.185 [2024-07-23 04:29:52.844874] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:34:44.185 04:29:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:34:44.185 04:29:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 2841159 2 00:34:44.185 04:29:52 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2841159 2 idle 00:34:44.185 04:29:52 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2841159 00:34:44.185 04:29:52 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:34:44.185 04:29:52 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:44.185 04:29:52 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:34:44.185 04:29:52 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:34:44.185 04:29:52 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:44.185 04:29:52 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:44.185 04:29:52 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:44.185 04:29:52 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:34:44.185 04:29:52 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2841159 -w 256 00:34:44.443 04:29:53 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2841163 root 20 0 20.1t 206976 35840 S 0.0 0.3 0:00.57 reactor_2' 00:34:44.443 04:29:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:44.443 04:29:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2841163 root 20 0 20.1t 206976 35840 S 0.0 0.3 0:00.57 reactor_2 00:34:44.443 04:29:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:34:44.443 04:29:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:34:44.443 04:29:53 
reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:34:44.443 04:29:53 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:34:44.443 04:29:53 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:34:44.443 04:29:53 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:34:44.443 04:29:53 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:44.443 04:29:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:34:44.701 [2024-07-23 04:29:53.257823] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:34:44.701 [2024-07-23 04:29:53.258021] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 00:34:44.701 [2024-07-23 04:29:53.258054] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:34:44.701 04:29:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:34:44.701 04:29:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 2841159 0 00:34:44.701 04:29:53 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2841159 0 idle 00:34:44.701 04:29:53 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2841159 00:34:44.701 04:29:53 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:44.701 04:29:53 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:44.701 04:29:53 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:34:44.701 04:29:53 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:34:44.701 04:29:53 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:44.702 04:29:53 
reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:44.702 04:29:53 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:44.702 04:29:53 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2841159 -w 256 00:34:44.702 04:29:53 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:34:44.702 04:29:53 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2841159 root 20 0 20.1t 206976 35840 S 0.0 0.3 0:02.48 reactor_0' 00:34:44.702 04:29:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2841159 root 20 0 20.1t 206976 35840 S 0.0 0.3 0:02.48 reactor_0 00:34:44.702 04:29:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:44.702 04:29:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:34:44.702 04:29:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:34:44.702 04:29:53 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:34:44.702 04:29:53 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:34:44.702 04:29:53 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:34:44.702 04:29:53 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:34:44.702 04:29:53 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:44.702 04:29:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:34:44.702 04:29:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:34:44.702 04:29:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:34:44.702 04:29:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 2841159 00:34:44.702 04:29:53 reactor_set_interrupt -- common/autotest_common.sh@948 -- # '[' -z 2841159 ']' 00:34:44.702 04:29:53 reactor_set_interrupt -- common/autotest_common.sh@952 -- # kill -0 
2841159 00:34:44.702 04:29:53 reactor_set_interrupt -- common/autotest_common.sh@953 -- # uname 00:34:44.702 04:29:53 reactor_set_interrupt -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:44.702 04:29:53 reactor_set_interrupt -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2841159 00:34:44.959 04:29:53 reactor_set_interrupt -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:44.959 04:29:53 reactor_set_interrupt -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:44.959 04:29:53 reactor_set_interrupt -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2841159' 00:34:44.959 killing process with pid 2841159 00:34:44.959 04:29:53 reactor_set_interrupt -- common/autotest_common.sh@967 -- # kill 2841159 00:34:44.959 04:29:53 reactor_set_interrupt -- common/autotest_common.sh@972 -- # wait 2841159 00:34:46.858 04:29:55 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:34:46.858 04:29:55 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile 00:34:46.858 00:34:46.858 real 0m13.844s 00:34:46.858 user 0m13.873s 00:34:46.858 sys 0m2.545s 00:34:46.858 04:29:55 reactor_set_interrupt -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:46.858 04:29:55 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:46.858 ************************************ 00:34:46.858 END TEST reactor_set_interrupt 00:34:46.858 ************************************ 00:34:47.118 04:29:55 -- common/autotest_common.sh@1142 -- # return 0 00:34:47.118 04:29:55 -- spdk/autotest.sh@194 -- # run_test reap_unregistered_poller /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh 00:34:47.118 04:29:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:47.118 04:29:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:47.118 04:29:55 -- 
common/autotest_common.sh@10 -- # set +x 00:34:47.118 ************************************ 00:34:47.118 START TEST reap_unregistered_poller 00:34:47.118 ************************************ 00:34:47.118 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh 00:34:47.118 * Looking for test storage... 00:34:47.118 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:34:47.118 04:29:55 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/interrupt_common.sh 00:34:47.118 04:29:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh 00:34:47.118 04:29:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:34:47.118 04:29:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # testdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:34:47.118 04:29:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/../.. 
00:34:47.118 04:29:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:34:47.118 04:29:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@7 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/autotest_common.sh 00:34:47.118 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:34:47.118 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@34 -- # set -e 00:34:47.118 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:34:47.118 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@36 -- # shopt -s extglob 00:34:47.118 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:34:47.118 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/crypto-phy-autotest/spdk/../output ']' 00:34:47.118 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh ]] 00:34:47.118 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh 00:34:47.118 04:29:55 reap_unregistered_poller -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:34:47.118 04:29:55 reap_unregistered_poller -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:34:47.118 04:29:55 reap_unregistered_poller -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=y 00:34:47.118 04:29:55 reap_unregistered_poller -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:34:47.118 04:29:55 reap_unregistered_poller -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:34:47.118 04:29:55 reap_unregistered_poller -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:34:47.118 04:29:55 reap_unregistered_poller -- common/build_config.sh@7 -- # 
CONFIG_PREFIX=/usr/local 00:34:47.118 04:29:55 reap_unregistered_poller -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:34:47.118 04:29:55 reap_unregistered_poller -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:34:47.118 04:29:55 reap_unregistered_poller -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:34:47.118 04:29:55 reap_unregistered_poller -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:34:47.118 04:29:55 reap_unregistered_poller -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:34:47.118 04:29:55 reap_unregistered_poller -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@22 -- # CONFIG_CET=n 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=y 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:34:47.119 
04:29:55 reap_unregistered_poller -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@37 -- # CONFIG_CRYPTO=y 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:34:47.119 04:29:55 reap_unregistered_poller -- 
common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@65 
-- # CONFIG_APPS=y 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@70 -- # CONFIG_FC=n 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=y 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=y 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=y 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:34:47.119 04:29:55 reap_unregistered_poller -- common/build_config.sh@83 -- # CONFIG_URING=n 00:34:47.119 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:34:47.119 04:29:55 
reap_unregistered_poller -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:34:47.119 04:29:55 reap_unregistered_poller -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:34:47.119 04:29:55 reap_unregistered_poller -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:34:47.119 04:29:55 reap_unregistered_poller -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:34:47.119 04:29:55 reap_unregistered_poller -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:34:47.119 04:29:55 reap_unregistered_poller -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:34:47.119 04:29:55 reap_unregistered_poller -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:34:47.119 04:29:55 reap_unregistered_poller -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:34:47.119 04:29:55 reap_unregistered_poller -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:34:47.119 04:29:55 reap_unregistered_poller -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:34:47.119 04:29:55 reap_unregistered_poller -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:34:47.119 04:29:55 reap_unregistered_poller -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:34:47.119 04:29:55 reap_unregistered_poller -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:34:47.119 04:29:55 reap_unregistered_poller -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/config.h ]] 00:34:47.119 04:29:55 reap_unregistered_poller -- common/applications.sh@23 -- # [[ #ifndef 
SPDK_CONFIG_H 00:34:47.119 #define SPDK_CONFIG_H 00:34:47.119 #define SPDK_CONFIG_APPS 1 00:34:47.119 #define SPDK_CONFIG_ARCH native 00:34:47.119 #define SPDK_CONFIG_ASAN 1 00:34:47.119 #undef SPDK_CONFIG_AVAHI 00:34:47.119 #undef SPDK_CONFIG_CET 00:34:47.119 #define SPDK_CONFIG_COVERAGE 1 00:34:47.119 #define SPDK_CONFIG_CROSS_PREFIX 00:34:47.119 #define SPDK_CONFIG_CRYPTO 1 00:34:47.119 #define SPDK_CONFIG_CRYPTO_MLX5 1 00:34:47.119 #undef SPDK_CONFIG_CUSTOMOCF 00:34:47.119 #undef SPDK_CONFIG_DAOS 00:34:47.119 #define SPDK_CONFIG_DAOS_DIR 00:34:47.119 #define SPDK_CONFIG_DEBUG 1 00:34:47.119 #define SPDK_CONFIG_DPDK_COMPRESSDEV 1 00:34:47.119 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:34:47.119 #define SPDK_CONFIG_DPDK_INC_DIR 00:34:47.119 #define SPDK_CONFIG_DPDK_LIB_DIR 00:34:47.119 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:34:47.119 #undef SPDK_CONFIG_DPDK_UADK 00:34:47.119 #define SPDK_CONFIG_ENV /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:34:47.119 #define SPDK_CONFIG_EXAMPLES 1 00:34:47.119 #undef SPDK_CONFIG_FC 00:34:47.119 #define SPDK_CONFIG_FC_PATH 00:34:47.119 #define SPDK_CONFIG_FIO_PLUGIN 1 00:34:47.119 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:34:47.119 #undef SPDK_CONFIG_FUSE 00:34:47.119 #undef SPDK_CONFIG_FUZZER 00:34:47.119 #define SPDK_CONFIG_FUZZER_LIB 00:34:47.119 #undef SPDK_CONFIG_GOLANG 00:34:47.119 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:34:47.119 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:34:47.119 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:34:47.119 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:34:47.119 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:34:47.119 #undef SPDK_CONFIG_HAVE_LIBBSD 00:34:47.119 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:34:47.119 #define SPDK_CONFIG_IDXD 1 00:34:47.119 #define SPDK_CONFIG_IDXD_KERNEL 1 00:34:47.119 #define SPDK_CONFIG_IPSEC_MB 1 00:34:47.119 #define SPDK_CONFIG_IPSEC_MB_DIR 
/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:34:47.119 #define SPDK_CONFIG_ISAL 1 00:34:47.120 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:34:47.120 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:34:47.120 #define SPDK_CONFIG_LIBDIR 00:34:47.120 #undef SPDK_CONFIG_LTO 00:34:47.120 #define SPDK_CONFIG_MAX_LCORES 128 00:34:47.120 #define SPDK_CONFIG_NVME_CUSE 1 00:34:47.120 #undef SPDK_CONFIG_OCF 00:34:47.120 #define SPDK_CONFIG_OCF_PATH 00:34:47.120 #define SPDK_CONFIG_OPENSSL_PATH 00:34:47.120 #undef SPDK_CONFIG_PGO_CAPTURE 00:34:47.120 #define SPDK_CONFIG_PGO_DIR 00:34:47.120 #undef SPDK_CONFIG_PGO_USE 00:34:47.120 #define SPDK_CONFIG_PREFIX /usr/local 00:34:47.120 #undef SPDK_CONFIG_RAID5F 00:34:47.120 #undef SPDK_CONFIG_RBD 00:34:47.120 #define SPDK_CONFIG_RDMA 1 00:34:47.120 #define SPDK_CONFIG_RDMA_PROV verbs 00:34:47.120 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:34:47.120 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:34:47.120 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:34:47.120 #define SPDK_CONFIG_SHARED 1 00:34:47.120 #undef SPDK_CONFIG_SMA 00:34:47.120 #define SPDK_CONFIG_TESTS 1 00:34:47.120 #undef SPDK_CONFIG_TSAN 00:34:47.120 #define SPDK_CONFIG_UBLK 1 00:34:47.120 #define SPDK_CONFIG_UBSAN 1 00:34:47.120 #undef SPDK_CONFIG_UNIT_TESTS 00:34:47.120 #undef SPDK_CONFIG_URING 00:34:47.120 #define SPDK_CONFIG_URING_PATH 00:34:47.120 #undef SPDK_CONFIG_URING_ZNS 00:34:47.120 #undef SPDK_CONFIG_USDT 00:34:47.120 #define SPDK_CONFIG_VBDEV_COMPRESS 1 00:34:47.120 #define SPDK_CONFIG_VBDEV_COMPRESS_MLX5 1 00:34:47.120 #undef SPDK_CONFIG_VFIO_USER 00:34:47.120 #define SPDK_CONFIG_VFIO_USER_DIR 00:34:47.120 #define SPDK_CONFIG_VHOST 1 00:34:47.120 #define SPDK_CONFIG_VIRTIO 1 00:34:47.120 #undef SPDK_CONFIG_VTUNE 00:34:47.120 #define SPDK_CONFIG_VTUNE_DIR 00:34:47.120 #define SPDK_CONFIG_WERROR 1 00:34:47.120 #define SPDK_CONFIG_WPDK_DIR 00:34:47.120 #undef SPDK_CONFIG_XNVME 00:34:47.120 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ 
\S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:34:47.120 04:29:55 reap_unregistered_poller -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:34:47.120 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:34:47.120 04:29:55 reap_unregistered_poller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:47.120 04:29:55 reap_unregistered_poller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:47.120 04:29:55 reap_unregistered_poller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:47.120 04:29:55 reap_unregistered_poller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.120 04:29:55 reap_unregistered_poller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.120 04:29:55 reap_unregistered_poller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.120 04:29:55 reap_unregistered_poller -- paths/export.sh@5 -- # export PATH 00:34:47.120 04:29:55 reap_unregistered_poller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.120 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:34:47.120 04:29:55 reap_unregistered_poller -- pm/common@6 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:34:47.120 04:29:55 reap_unregistered_poller -- pm/common@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:34:47.120 04:29:55 reap_unregistered_poller -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:34:47.120 04:29:55 reap_unregistered_poller -- pm/common@7 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/../../../ 00:34:47.120 04:29:55 reap_unregistered_poller -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:34:47.120 04:29:55 reap_unregistered_poller -- pm/common@64 -- # TEST_TAG=N/A 00:34:47.120 04:29:55 reap_unregistered_poller -- pm/common@65 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/crypto-phy-autotest/spdk/.run_test_name 00:34:47.120 04:29:55 reap_unregistered_poller -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power 00:34:47.120 04:29:55 reap_unregistered_poller -- pm/common@68 -- # uname -s 00:34:47.120 04:29:55 reap_unregistered_poller -- pm/common@68 -- # PM_OS=Linux 00:34:47.120 04:29:55 reap_unregistered_poller -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:34:47.120 04:29:55 reap_unregistered_poller -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:34:47.120 04:29:55 reap_unregistered_poller -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:34:47.120 04:29:55 reap_unregistered_poller -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:34:47.120 04:29:55 reap_unregistered_poller -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:34:47.120 04:29:55 reap_unregistered_poller -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:34:47.120 04:29:55 reap_unregistered_poller -- pm/common@76 -- # SUDO[0]= 00:34:47.120 04:29:55 reap_unregistered_poller -- pm/common@76 -- # SUDO[1]='sudo -E' 00:34:47.120 04:29:55 reap_unregistered_poller -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:34:47.120 04:29:55 reap_unregistered_poller -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:34:47.120 04:29:55 reap_unregistered_poller -- pm/common@81 -- # [[ Linux == Linux ]] 00:34:47.120 04:29:55 reap_unregistered_poller -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:34:47.120 04:29:55 reap_unregistered_poller -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:34:47.120 04:29:55 reap_unregistered_poller -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:34:47.120 04:29:55 reap_unregistered_poller -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:34:47.120 04:29:55 reap_unregistered_poller -- pm/common@88 -- # [[ ! 
-d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power ]] 00:34:47.120 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@58 -- # : 1 00:34:47.120 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:34:47.120 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@62 -- # : 0 00:34:47.120 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:34:47.120 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@64 -- # : 0 00:34:47.120 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:34:47.120 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@66 -- # : 1 00:34:47.120 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:34:47.120 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@68 -- # : 0 00:34:47.120 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:34:47.120 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@70 -- # : 00:34:47.120 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:34:47.120 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@72 -- # : 0 00:34:47.120 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:34:47.120 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@74 -- # : 1 00:34:47.120 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:34:47.120 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@76 -- # : 0 00:34:47.120 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:34:47.120 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@78 -- # : 0 00:34:47.120 04:29:55 reap_unregistered_poller -- 
common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:34:47.120 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@80 -- # : 0 00:34:47.120 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:34:47.120 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@82 -- # : 0 00:34:47.120 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:34:47.120 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@84 -- # : 0 00:34:47.120 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:34:47.120 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@86 -- # : 0 00:34:47.120 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:34:47.120 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@88 -- # : 0 00:34:47.120 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:34:47.120 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@90 -- # : 0 00:34:47.120 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:34:47.120 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@92 -- # : 0 00:34:47.120 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:34:47.120 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@94 -- # : 0 00:34:47.120 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:34:47.120 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@96 -- # : 0 00:34:47.121 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:34:47.121 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@98 -- # : 0 00:34:47.121 04:29:55 reap_unregistered_poller -- 
common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:34:47.121 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@100 -- # : 0 00:34:47.121 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:34:47.121 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@102 -- # : rdma 00:34:47.121 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:34:47.380 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@104 -- # : 0 00:34:47.380 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:34:47.380 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@106 -- # : 0 00:34:47.380 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:34:47.380 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@108 -- # : 1 00:34:47.380 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:34:47.380 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@110 -- # : 0 00:34:47.380 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:34:47.380 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@112 -- # : 0 00:34:47.380 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:34:47.380 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@114 -- # : 0 00:34:47.380 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:34:47.380 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@116 -- # : 0 00:34:47.380 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:34:47.380 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@118 -- # : 1 00:34:47.380 04:29:55 reap_unregistered_poller -- 
common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:34:47.380 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@120 -- # : 1 00:34:47.380 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:34:47.380 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@122 -- # : 1 00:34:47.380 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:34:47.380 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@124 -- # : 00:34:47.380 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:34:47.380 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@126 -- # : 0 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@128 -- # : 1 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@130 -- # : 0 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@132 -- # : 0 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@134 -- # : 0 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@136 -- # : 0 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@138 -- # : 00:34:47.381 04:29:55 reap_unregistered_poller -- 
common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@140 -- # : true 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@142 -- # : 0 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@144 -- # : 0 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@146 -- # : 0 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@148 -- # : 0 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@150 -- # : 0 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@152 -- # : 0 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@154 -- # : 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@156 -- # : 0 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@158 -- # : 0 00:34:47.381 04:29:55 reap_unregistered_poller -- 
common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@160 -- # : 0 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@162 -- # : 0 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@164 -- # : 0 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@167 -- # : 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@169 -- # : 0 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@171 -- # : 0 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@177 -- # 
export VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:34:47.381 04:29:55 reap_unregistered_poller -- 
common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:34:47.381 04:29:55 
reap_unregistered_poller -- common/autotest_common.sh@200 -- # cat 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@254 -- # export 
VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@263 -- # export valgrind= 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@263 -- # valgrind= 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@269 -- # uname -s 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:34:47.381 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@272 -- # [[ 1 -eq 1 ]] 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@276 -- # export HUGE_EVEN_ALLOC=yes 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@276 -- # HUGE_EVEN_ALLOC=yes 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@279 -- 
# MAKE=make 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j112 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@299 -- # TEST_MODE= 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@318 -- # [[ -z 2842351 ]] 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@318 -- # kill -0 2842351 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@331 -- # local mount target_dir 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.AkldTA 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:34:47.382 04:29:55 reap_unregistered_poller -- 
common/autotest_common.sh@345 -- # [[ -n '' ]] 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt /tmp/spdk.AkldTA/tests/interrupt /tmp/spdk.AkldTA 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@327 -- # df -T 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=954302464 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@363 -- # 
uses["$mount"]=4330127360 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=54950014976 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=61742305280 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=6792290304 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=30866341888 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871150592 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=4808704 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=12338671616 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=12348461056 00:34:47.382 04:29:55 reap_unregistered_poller -- 
common/autotest_common.sh@363 -- # uses["$mount"]=9789440 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=30870290432 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871154688 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=864256 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=6174224384 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=6174228480 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:34:47.382 * Looking for test storage... 
00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@368 -- # local target_space new_size 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@372 -- # mount=/ 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@374 -- # target_space=54950014976 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@381 -- # new_size=9006882816 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:34:47.382 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@389 -- # return 0 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@1682 -- # set -o errtrace 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@1687 -- # true 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@1689 -- # xtrace_fd 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@27 -- # exec 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@29 -- # exec 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@31 -- # xtrace_restore 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:34:47.382 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@18 -- # set -x 00:34:47.382 04:29:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@8 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/common.sh 00:34:47.382 04:29:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@10 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:34:47.382 04:29:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:34:47.382 04:29:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:34:47.382 04:29:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:34:47.383 04:29:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:34:47.383 04:29:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:34:47.383 04:29:55 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:34:47.383 04:29:55 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # 
PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:34:47.383 04:29:55 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:34:47.383 04:29:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:47.383 04:29:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:34:47.383 04:29:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=2842491 00:34:47.383 04:29:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:47.383 04:29:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:34:47.383 04:29:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@26 -- # waitforlisten 2842491 /var/tmp/spdk.sock 00:34:47.383 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@829 -- # '[' -z 2842491 ']' 00:34:47.383 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:47.383 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:47.383 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:47.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:47.383 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:47.383 04:29:55 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:34:47.383 [2024-07-23 04:29:56.064730] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:34:47.383 [2024-07-23 04:29:56.064848] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2842491 ] 00:34:47.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.641 EAL: Requested device 0000:3d:01.0 cannot be used 00:34:47.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.641 EAL: Requested device 0000:3d:01.1 cannot be used 00:34:47.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.641 EAL: Requested device 0000:3d:01.2 cannot be used 00:34:47.642 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.642 EAL: Requested device 0000:3d:01.3 cannot be used 00:34:47.642 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.642 EAL: Requested device 0000:3d:01.4 cannot be used 00:34:47.642 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.642 EAL: Requested device 0000:3d:01.5 cannot be used 00:34:47.642 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.642 EAL: Requested device 0000:3d:01.6 cannot be used 00:34:47.642 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.642 EAL: Requested device 0000:3d:01.7 cannot be used 00:34:47.642 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.642 EAL: Requested device 0000:3d:02.0 cannot be used 00:34:47.642 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.642 EAL: Requested 
device 0000:3d:02.1 cannot be used 00:34:47.642 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.642 EAL: Requested device 0000:3d:02.2 cannot be used 00:34:47.642 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.642 EAL: Requested device 0000:3d:02.3 cannot be used 00:34:47.642 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.642 EAL: Requested device 0000:3d:02.4 cannot be used 00:34:47.642 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.642 EAL: Requested device 0000:3d:02.5 cannot be used 00:34:47.642 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.642 EAL: Requested device 0000:3d:02.6 cannot be used 00:34:47.642 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.642 EAL: Requested device 0000:3d:02.7 cannot be used 00:34:47.642 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.642 EAL: Requested device 0000:3f:01.0 cannot be used 00:34:47.642 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.642 EAL: Requested device 0000:3f:01.1 cannot be used 00:34:47.642 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.642 EAL: Requested device 0000:3f:01.2 cannot be used 00:34:47.642 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.642 EAL: Requested device 0000:3f:01.3 cannot be used 00:34:47.642 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.642 EAL: Requested device 0000:3f:01.4 cannot be used 00:34:47.642 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.642 EAL: Requested device 0000:3f:01.5 cannot be used 00:34:47.642 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.642 EAL: Requested device 0000:3f:01.6 cannot be used 00:34:47.642 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.642 EAL: Requested device 0000:3f:01.7 
cannot be used 00:34:47.642 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.642 EAL: Requested device 0000:3f:02.0 cannot be used 00:34:47.642 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.642 EAL: Requested device 0000:3f:02.1 cannot be used 00:34:47.642 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.642 EAL: Requested device 0000:3f:02.2 cannot be used 00:34:47.642 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.642 EAL: Requested device 0000:3f:02.3 cannot be used 00:34:47.642 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.642 EAL: Requested device 0000:3f:02.4 cannot be used 00:34:47.642 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.642 EAL: Requested device 0000:3f:02.5 cannot be used 00:34:47.642 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.642 EAL: Requested device 0000:3f:02.6 cannot be used 00:34:47.642 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.642 EAL: Requested device 0000:3f:02.7 cannot be used 00:34:47.642 [2024-07-23 04:29:56.286651] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:47.900 [2024-07-23 04:29:56.548165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:47.900 [2024-07-23 04:29:56.548228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:47.900 [2024-07-23 04:29:56.548229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:48.467 [2024-07-23 04:29:56.969585] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:34:48.467 04:29:56 reap_unregistered_poller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:48.467 04:29:56 reap_unregistered_poller -- common/autotest_common.sh@862 -- # return 0 00:34:48.467 04:29:56 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:34:48.467 04:29:56 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:34:48.467 04:29:56 reap_unregistered_poller -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.467 04:29:56 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:34:48.467 04:29:57 reap_unregistered_poller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.467 04:29:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:34:48.467 "name": "app_thread", 00:34:48.467 "id": 1, 00:34:48.467 "active_pollers": [], 00:34:48.467 "timed_pollers": [ 00:34:48.467 { 00:34:48.467 "name": "rpc_subsystem_poll_servers", 00:34:48.467 "id": 1, 00:34:48.467 "state": "waiting", 00:34:48.467 "run_count": 0, 00:34:48.467 "busy_count": 0, 00:34:48.467 "period_ticks": 10000000 00:34:48.467 } 00:34:48.467 ], 00:34:48.467 "paused_pollers": [] 00:34:48.467 }' 00:34:48.467 04:29:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:34:48.467 04:29:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:34:48.467 04:29:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:34:48.467 04:29:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:34:48.467 04:29:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll_servers 00:34:48.467 04:29:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:34:48.467 
04:29:57 reap_unregistered_poller -- interrupt/common.sh@75 -- # uname -s 00:34:48.467 04:29:57 reap_unregistered_poller -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:34:48.467 04:29:57 reap_unregistered_poller -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:34:48.467 5000+0 records in 00:34:48.467 5000+0 records out 00:34:48.467 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0253187 s, 404 MB/s 00:34:48.467 04:29:57 reap_unregistered_poller -- interrupt/common.sh@77 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:34:48.726 AIO0 00:34:48.726 04:29:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:48.984 04:29:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:34:48.984 04:29:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:34:48.984 04:29:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]' 00:34:48.984 04:29:57 reap_unregistered_poller -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.985 04:29:57 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:34:49.243 04:29:57 reap_unregistered_poller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.243 04:29:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:34:49.243 "name": "app_thread", 00:34:49.243 "id": 1, 00:34:49.243 "active_pollers": [], 00:34:49.243 "timed_pollers": [ 00:34:49.243 { 00:34:49.243 "name": "rpc_subsystem_poll_servers", 00:34:49.243 "id": 1, 00:34:49.243 "state": "waiting", 00:34:49.243 "run_count": 0, 00:34:49.243 "busy_count": 0, 
00:34:49.243 "period_ticks": 10000000 00:34:49.243 } 00:34:49.243 ], 00:34:49.243 "paused_pollers": [] 00:34:49.243 }' 00:34:49.243 04:29:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:34:49.243 04:29:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:34:49.243 04:29:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:34:49.243 04:29:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:34:49.243 04:29:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll_servers 00:34:49.243 04:29:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll_servers == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l\_\s\e\r\v\e\r\s ]] 00:34:49.243 04:29:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:34:49.243 04:29:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 2842491 00:34:49.243 04:29:57 reap_unregistered_poller -- common/autotest_common.sh@948 -- # '[' -z 2842491 ']' 00:34:49.243 04:29:57 reap_unregistered_poller -- common/autotest_common.sh@952 -- # kill -0 2842491 00:34:49.243 04:29:57 reap_unregistered_poller -- common/autotest_common.sh@953 -- # uname 00:34:49.243 04:29:57 reap_unregistered_poller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:49.243 04:29:57 reap_unregistered_poller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2842491 00:34:49.244 04:29:57 reap_unregistered_poller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:49.244 04:29:57 reap_unregistered_poller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:49.244 04:29:57 reap_unregistered_poller -- common/autotest_common.sh@966 -- # 
echo 'killing process with pid 2842491' 00:34:49.244 killing process with pid 2842491 00:34:49.244 04:29:57 reap_unregistered_poller -- common/autotest_common.sh@967 -- # kill 2842491 00:34:49.244 04:29:57 reap_unregistered_poller -- common/autotest_common.sh@972 -- # wait 2842491 00:34:51.206 04:29:59 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:34:51.206 04:29:59 reap_unregistered_poller -- interrupt/common.sh@6 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile 00:34:51.206 00:34:51.206 real 0m3.932s 00:34:51.206 user 0m3.428s 00:34:51.206 sys 0m0.838s 00:34:51.206 04:29:59 reap_unregistered_poller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:51.206 04:29:59 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:34:51.206 ************************************ 00:34:51.206 END TEST reap_unregistered_poller 00:34:51.206 ************************************ 00:34:51.206 04:29:59 -- common/autotest_common.sh@1142 -- # return 0 00:34:51.206 04:29:59 -- spdk/autotest.sh@198 -- # uname -s 00:34:51.206 04:29:59 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:34:51.206 04:29:59 -- spdk/autotest.sh@199 -- # [[ 1 -eq 1 ]] 00:34:51.206 04:29:59 -- spdk/autotest.sh@205 -- # [[ 1 -eq 0 ]] 00:34:51.206 04:29:59 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:34:51.206 04:29:59 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:34:51.206 04:29:59 -- spdk/autotest.sh@260 -- # timing_exit lib 00:34:51.206 04:29:59 -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:51.206 04:29:59 -- common/autotest_common.sh@10 -- # set +x 00:34:51.206 04:29:59 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:34:51.206 04:29:59 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:34:51.206 04:29:59 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:34:51.206 04:29:59 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:34:51.206 04:29:59 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:34:51.206 
04:29:59 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:34:51.206 04:29:59 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:34:51.206 04:29:59 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:34:51.206 04:29:59 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:34:51.206 04:29:59 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:34:51.206 04:29:59 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:34:51.206 04:29:59 -- spdk/autotest.sh@347 -- # '[' 1 -eq 1 ']' 00:34:51.206 04:29:59 -- spdk/autotest.sh@348 -- # run_test compress_compdev /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh compdev 00:34:51.206 04:29:59 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:51.206 04:29:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:51.206 04:29:59 -- common/autotest_common.sh@10 -- # set +x 00:34:51.206 ************************************ 00:34:51.206 START TEST compress_compdev 00:34:51.206 ************************************ 00:34:51.206 04:29:59 compress_compdev -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh compdev 00:34:51.206 * Looking for test storage... 
00:34:51.206 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress 00:34:51.206 04:29:59 compress_compdev -- compress/compress.sh@13 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:34:51.206 04:29:59 compress_compdev -- nvmf/common.sh@7 -- # uname -s 00:34:51.206 04:29:59 compress_compdev -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:51.206 04:29:59 compress_compdev -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:51.206 04:29:59 compress_compdev -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:51.206 04:29:59 compress_compdev -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:51.206 04:29:59 compress_compdev -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:51.206 04:29:59 compress_compdev -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:51.206 04:29:59 compress_compdev -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:51.206 04:29:59 compress_compdev -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:51.206 04:29:59 compress_compdev -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:51.206 04:29:59 compress_compdev -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:51.206 04:29:59 compress_compdev -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:34:51.206 04:29:59 compress_compdev -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:34:51.206 04:29:59 compress_compdev -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:51.206 04:29:59 compress_compdev -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:51.206 04:29:59 compress_compdev -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:34:51.206 04:29:59 compress_compdev -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:51.207 04:29:59 compress_compdev -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:34:51.207 04:29:59 compress_compdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:51.207 04:29:59 compress_compdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:51.207 04:29:59 compress_compdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:51.207 04:29:59 compress_compdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.207 04:29:59 compress_compdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.207 04:29:59 compress_compdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.207 04:29:59 compress_compdev -- 
paths/export.sh@5 -- # export PATH 00:34:51.207 04:29:59 compress_compdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.207 04:29:59 compress_compdev -- nvmf/common.sh@47 -- # : 0 00:34:51.207 04:29:59 compress_compdev -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:51.207 04:29:59 compress_compdev -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:51.207 04:29:59 compress_compdev -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:51.207 04:29:59 compress_compdev -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:51.207 04:29:59 compress_compdev -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:51.207 04:29:59 compress_compdev -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:51.207 04:29:59 compress_compdev -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:51.207 04:29:59 compress_compdev -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:51.207 04:29:59 compress_compdev -- compress/compress.sh@17 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:34:51.207 04:29:59 compress_compdev -- compress/compress.sh@81 -- # mkdir -p /tmp/pmem 00:34:51.207 04:29:59 compress_compdev -- compress/compress.sh@82 -- # test_type=compdev 00:34:51.207 04:29:59 compress_compdev -- compress/compress.sh@86 -- # run_bdevperf 32 4096 3 00:34:51.207 04:29:59 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:34:51.207 04:29:59 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=2843239 00:34:51.207 04:29:59 compress_compdev -- 
compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:51.207 04:29:59 compress_compdev -- compress/compress.sh@73 -- # waitforlisten 2843239 00:34:51.207 04:29:59 compress_compdev -- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:34:51.207 04:29:59 compress_compdev -- common/autotest_common.sh@829 -- # '[' -z 2843239 ']' 00:34:51.207 04:29:59 compress_compdev -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:51.207 04:29:59 compress_compdev -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:51.207 04:29:59 compress_compdev -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:51.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:51.207 04:29:59 compress_compdev -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:51.207 04:29:59 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:34:51.207 [2024-07-23 04:29:59.980254] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:34:51.207 [2024-07-23 04:29:59.980381] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2843239 ] 00:34:51.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.466 EAL: Requested device 0000:3d:01.0 cannot be used 00:34:51.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.466 EAL: Requested device 0000:3d:01.1 cannot be used 00:34:51.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.466 EAL: Requested device 0000:3d:01.2 cannot be used 00:34:51.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.466 EAL: Requested device 0000:3d:01.3 cannot be used 00:34:51.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.466 EAL: Requested device 0000:3d:01.4 cannot be used 00:34:51.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.466 EAL: Requested device 0000:3d:01.5 cannot be used 00:34:51.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.466 EAL: Requested device 0000:3d:01.6 cannot be used 00:34:51.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.466 EAL: Requested device 0000:3d:01.7 cannot be used 00:34:51.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.466 EAL: Requested device 0000:3d:02.0 cannot be used 00:34:51.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.466 EAL: Requested device 0000:3d:02.1 cannot be used 00:34:51.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.466 EAL: Requested device 0000:3d:02.2 cannot be used 00:34:51.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.466 EAL: Requested device 0000:3d:02.3 cannot be used 
00:34:51.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.466 EAL: Requested device 0000:3d:02.4 cannot be used 00:34:51.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.466 EAL: Requested device 0000:3d:02.5 cannot be used 00:34:51.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.466 EAL: Requested device 0000:3d:02.6 cannot be used 00:34:51.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.466 EAL: Requested device 0000:3d:02.7 cannot be used 00:34:51.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.466 EAL: Requested device 0000:3f:01.0 cannot be used 00:34:51.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.466 EAL: Requested device 0000:3f:01.1 cannot be used 00:34:51.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.466 EAL: Requested device 0000:3f:01.2 cannot be used 00:34:51.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.466 EAL: Requested device 0000:3f:01.3 cannot be used 00:34:51.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.466 EAL: Requested device 0000:3f:01.4 cannot be used 00:34:51.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.466 EAL: Requested device 0000:3f:01.5 cannot be used 00:34:51.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.466 EAL: Requested device 0000:3f:01.6 cannot be used 00:34:51.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.466 EAL: Requested device 0000:3f:01.7 cannot be used 00:34:51.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.466 EAL: Requested device 0000:3f:02.0 cannot be used 00:34:51.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.466 EAL: Requested device 0000:3f:02.1 cannot be used 00:34:51.466 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.466 EAL: Requested device 0000:3f:02.2 cannot be used 00:34:51.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.466 EAL: Requested device 0000:3f:02.3 cannot be used 00:34:51.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.466 EAL: Requested device 0000:3f:02.4 cannot be used 00:34:51.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.466 EAL: Requested device 0000:3f:02.5 cannot be used 00:34:51.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.466 EAL: Requested device 0000:3f:02.6 cannot be used 00:34:51.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.466 EAL: Requested device 0000:3f:02.7 cannot be used 00:34:51.466 [2024-07-23 04:30:00.198019] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:51.725 [2024-07-23 04:30:00.465387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:51.725 [2024-07-23 04:30:00.465390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:53.103 [2024-07-23 04:30:01.842173] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:34:54.040 04:30:02 compress_compdev -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:54.040 04:30:02 compress_compdev -- common/autotest_common.sh@862 -- # return 0 00:34:54.040 04:30:02 compress_compdev -- compress/compress.sh@74 -- # create_vols 00:34:54.040 04:30:02 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:54.040 04:30:02 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:34:57.327 [2024-07-23 04:30:05.727498] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00001d220 PMD being used: compress_qat 00:34:57.328 04:30:05 
compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:34:57.328 04:30:05 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:34:57.328 04:30:05 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:57.328 04:30:05 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:34:57.328 04:30:05 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:57.328 04:30:05 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:57.328 04:30:05 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:57.328 04:30:05 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:34:57.587 [ 00:34:57.587 { 00:34:57.587 "name": "Nvme0n1", 00:34:57.587 "aliases": [ 00:34:57.587 "599b0158-6b3d-4e1e-ab50-5494126037d8" 00:34:57.587 ], 00:34:57.587 "product_name": "NVMe disk", 00:34:57.587 "block_size": 512, 00:34:57.587 "num_blocks": 3907029168, 00:34:57.587 "uuid": "599b0158-6b3d-4e1e-ab50-5494126037d8", 00:34:57.587 "assigned_rate_limits": { 00:34:57.587 "rw_ios_per_sec": 0, 00:34:57.587 "rw_mbytes_per_sec": 0, 00:34:57.587 "r_mbytes_per_sec": 0, 00:34:57.587 "w_mbytes_per_sec": 0 00:34:57.587 }, 00:34:57.587 "claimed": false, 00:34:57.587 "zoned": false, 00:34:57.587 "supported_io_types": { 00:34:57.587 "read": true, 00:34:57.587 "write": true, 00:34:57.587 "unmap": true, 00:34:57.587 "flush": true, 00:34:57.587 "reset": true, 00:34:57.587 "nvme_admin": true, 00:34:57.587 "nvme_io": true, 00:34:57.587 "nvme_io_md": false, 00:34:57.587 "write_zeroes": true, 00:34:57.587 "zcopy": false, 00:34:57.587 "get_zone_info": false, 00:34:57.587 "zone_management": false, 00:34:57.587 "zone_append": false, 00:34:57.587 "compare": false, 00:34:57.587 "compare_and_write": false, 00:34:57.587 
"abort": true, 00:34:57.587 "seek_hole": false, 00:34:57.587 "seek_data": false, 00:34:57.587 "copy": false, 00:34:57.587 "nvme_iov_md": false 00:34:57.587 }, 00:34:57.587 "driver_specific": { 00:34:57.587 "nvme": [ 00:34:57.587 { 00:34:57.587 "pci_address": "0000:d8:00.0", 00:34:57.587 "trid": { 00:34:57.587 "trtype": "PCIe", 00:34:57.587 "traddr": "0000:d8:00.0" 00:34:57.587 }, 00:34:57.587 "ctrlr_data": { 00:34:57.587 "cntlid": 0, 00:34:57.587 "vendor_id": "0x8086", 00:34:57.587 "model_number": "INTEL SSDPE2KX020T8", 00:34:57.587 "serial_number": "BTLJ125505KA2P0BGN", 00:34:57.587 "firmware_revision": "VDV10170", 00:34:57.587 "oacs": { 00:34:57.587 "security": 0, 00:34:57.587 "format": 1, 00:34:57.587 "firmware": 1, 00:34:57.587 "ns_manage": 1 00:34:57.587 }, 00:34:57.587 "multi_ctrlr": false, 00:34:57.587 "ana_reporting": false 00:34:57.587 }, 00:34:57.587 "vs": { 00:34:57.587 "nvme_version": "1.2" 00:34:57.587 }, 00:34:57.587 "ns_data": { 00:34:57.587 "id": 1, 00:34:57.587 "can_share": false 00:34:57.587 } 00:34:57.587 } 00:34:57.587 ], 00:34:57.587 "mp_policy": "active_passive" 00:34:57.587 } 00:34:57.587 } 00:34:57.587 ] 00:34:57.587 04:30:06 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:34:57.587 04:30:06 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:34:57.846 [2024-07-23 04:30:06.442968] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00001d3e0 PMD being used: compress_qat 00:34:58.782 8a38c66d-074d-4ec0-a41a-42a21736632e 00:34:58.782 04:30:07 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:34:59.041 cde6c88c-3ecc-44bb-8397-be78b0d6ac1b 00:34:59.041 04:30:07 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:34:59.041 04:30:07 compress_compdev -- 
common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:34:59.041 04:30:07 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:59.041 04:30:07 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:34:59.041 04:30:07 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:59.041 04:30:07 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:59.041 04:30:07 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:59.300 04:30:07 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:34:59.559 [ 00:34:59.559 { 00:34:59.559 "name": "cde6c88c-3ecc-44bb-8397-be78b0d6ac1b", 00:34:59.559 "aliases": [ 00:34:59.559 "lvs0/lv0" 00:34:59.559 ], 00:34:59.559 "product_name": "Logical Volume", 00:34:59.559 "block_size": 512, 00:34:59.559 "num_blocks": 204800, 00:34:59.559 "uuid": "cde6c88c-3ecc-44bb-8397-be78b0d6ac1b", 00:34:59.559 "assigned_rate_limits": { 00:34:59.559 "rw_ios_per_sec": 0, 00:34:59.559 "rw_mbytes_per_sec": 0, 00:34:59.559 "r_mbytes_per_sec": 0, 00:34:59.559 "w_mbytes_per_sec": 0 00:34:59.559 }, 00:34:59.559 "claimed": false, 00:34:59.559 "zoned": false, 00:34:59.559 "supported_io_types": { 00:34:59.559 "read": true, 00:34:59.559 "write": true, 00:34:59.559 "unmap": true, 00:34:59.559 "flush": false, 00:34:59.559 "reset": true, 00:34:59.559 "nvme_admin": false, 00:34:59.559 "nvme_io": false, 00:34:59.559 "nvme_io_md": false, 00:34:59.559 "write_zeroes": true, 00:34:59.559 "zcopy": false, 00:34:59.559 "get_zone_info": false, 00:34:59.559 "zone_management": false, 00:34:59.560 "zone_append": false, 00:34:59.560 "compare": false, 00:34:59.560 "compare_and_write": false, 00:34:59.560 "abort": false, 00:34:59.560 "seek_hole": true, 00:34:59.560 "seek_data": true, 00:34:59.560 "copy": false, 
00:34:59.560 "nvme_iov_md": false 00:34:59.560 }, 00:34:59.560 "driver_specific": { 00:34:59.560 "lvol": { 00:34:59.560 "lvol_store_uuid": "8a38c66d-074d-4ec0-a41a-42a21736632e", 00:34:59.560 "base_bdev": "Nvme0n1", 00:34:59.560 "thin_provision": true, 00:34:59.560 "num_allocated_clusters": 0, 00:34:59.560 "snapshot": false, 00:34:59.560 "clone": false, 00:34:59.560 "esnap_clone": false 00:34:59.560 } 00:34:59.560 } 00:34:59.560 } 00:34:59.560 ] 00:34:59.560 04:30:08 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:34:59.560 04:30:08 compress_compdev -- compress/compress.sh@41 -- # '[' -z '' ']' 00:34:59.560 04:30:08 compress_compdev -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:34:59.819 [2024-07-23 04:30:08.452932] vbdev_compress.c: 999:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:34:59.819 COMP_lvs0/lv0 00:34:59.819 04:30:08 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:34:59.819 04:30:08 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:34:59.819 04:30:08 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:59.819 04:30:08 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:34:59.819 04:30:08 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:59.819 04:30:08 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:59.819 04:30:08 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:00.078 04:30:08 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:35:00.338 [ 00:35:00.338 { 00:35:00.338 "name": "COMP_lvs0/lv0", 00:35:00.338 "aliases": [ 00:35:00.338 
"adba464d-678d-5543-885f-bb6baabe6299" 00:35:00.338 ], 00:35:00.338 "product_name": "compress", 00:35:00.338 "block_size": 512, 00:35:00.338 "num_blocks": 200704, 00:35:00.338 "uuid": "adba464d-678d-5543-885f-bb6baabe6299", 00:35:00.338 "assigned_rate_limits": { 00:35:00.338 "rw_ios_per_sec": 0, 00:35:00.338 "rw_mbytes_per_sec": 0, 00:35:00.338 "r_mbytes_per_sec": 0, 00:35:00.338 "w_mbytes_per_sec": 0 00:35:00.338 }, 00:35:00.338 "claimed": false, 00:35:00.338 "zoned": false, 00:35:00.338 "supported_io_types": { 00:35:00.338 "read": true, 00:35:00.338 "write": true, 00:35:00.338 "unmap": false, 00:35:00.338 "flush": false, 00:35:00.338 "reset": false, 00:35:00.338 "nvme_admin": false, 00:35:00.338 "nvme_io": false, 00:35:00.338 "nvme_io_md": false, 00:35:00.338 "write_zeroes": true, 00:35:00.338 "zcopy": false, 00:35:00.338 "get_zone_info": false, 00:35:00.338 "zone_management": false, 00:35:00.338 "zone_append": false, 00:35:00.338 "compare": false, 00:35:00.338 "compare_and_write": false, 00:35:00.338 "abort": false, 00:35:00.338 "seek_hole": false, 00:35:00.338 "seek_data": false, 00:35:00.338 "copy": false, 00:35:00.338 "nvme_iov_md": false 00:35:00.338 }, 00:35:00.338 "driver_specific": { 00:35:00.338 "compress": { 00:35:00.338 "name": "COMP_lvs0/lv0", 00:35:00.338 "base_bdev_name": "cde6c88c-3ecc-44bb-8397-be78b0d6ac1b", 00:35:00.338 "pm_path": "/tmp/pmem/19b9a20f-a6c2-443d-8c14-c2015e9ee021" 00:35:00.338 } 00:35:00.338 } 00:35:00.338 } 00:35:00.338 ] 00:35:00.338 04:30:08 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:35:00.338 04:30:08 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:35:00.338 [2024-07-23 04:30:09.037850] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e0000101e0 PMD being used: compress_qat 00:35:00.338 [2024-07-23 04:30:09.041103] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00001d5a0 PMD 
being used: compress_qat 00:35:00.338 Running I/O for 3 seconds... 00:35:03.627 00:35:03.627 Latency(us) 00:35:03.627 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:03.627 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:35:03.627 Verification LBA range: start 0x0 length 0x3100 00:35:03.627 COMP_lvs0/lv0 : 3.01 3891.85 15.20 0.00 0.00 8171.72 135.17 13631.49 00:35:03.627 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:35:03.627 Verification LBA range: start 0x3100 length 0x3100 00:35:03.627 COMP_lvs0/lv0 : 3.01 3978.33 15.54 0.00 0.00 8002.52 126.16 12949.91 00:35:03.627 =================================================================================================================== 00:35:03.627 Total : 7870.18 30.74 0.00 0.00 8086.19 126.16 13631.49 00:35:03.627 0 00:35:03.627 04:30:12 compress_compdev -- compress/compress.sh@76 -- # destroy_vols 00:35:03.627 04:30:12 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:35:03.627 04:30:12 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:35:03.886 04:30:12 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:35:03.886 04:30:12 compress_compdev -- compress/compress.sh@78 -- # killprocess 2843239 00:35:03.886 04:30:12 compress_compdev -- common/autotest_common.sh@948 -- # '[' -z 2843239 ']' 00:35:03.886 04:30:12 compress_compdev -- common/autotest_common.sh@952 -- # kill -0 2843239 00:35:03.886 04:30:12 compress_compdev -- common/autotest_common.sh@953 -- # uname 00:35:03.886 04:30:12 compress_compdev -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:03.886 04:30:12 compress_compdev -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2843239 00:35:03.886 04:30:12 
compress_compdev -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:03.886 04:30:12 compress_compdev -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:03.886 04:30:12 compress_compdev -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2843239' 00:35:03.886 killing process with pid 2843239 00:35:03.886 04:30:12 compress_compdev -- common/autotest_common.sh@967 -- # kill 2843239 00:35:03.886 Received shutdown signal, test time was about 3.000000 seconds 00:35:03.886 00:35:03.886 Latency(us) 00:35:03.886 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:03.886 =================================================================================================================== 00:35:03.886 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:03.886 04:30:12 compress_compdev -- common/autotest_common.sh@972 -- # wait 2843239 00:35:08.072 04:30:16 compress_compdev -- compress/compress.sh@87 -- # run_bdevperf 32 4096 3 512 00:35:08.072 04:30:16 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:35:08.072 04:30:16 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=2846439 00:35:08.072 04:30:16 compress_compdev -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:08.072 04:30:16 compress_compdev -- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:35:08.072 04:30:16 compress_compdev -- compress/compress.sh@73 -- # waitforlisten 2846439 00:35:08.072 04:30:16 compress_compdev -- common/autotest_common.sh@829 -- # '[' -z 2846439 ']' 00:35:08.072 04:30:16 compress_compdev -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:08.072 04:30:16 compress_compdev -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:35:08.072 04:30:16 compress_compdev -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:08.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:08.072 04:30:16 compress_compdev -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:08.072 04:30:16 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:35:08.072 [2024-07-23 04:30:16.266273] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:35:08.072 [2024-07-23 04:30:16.266367] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2846439 ] 00:35:08.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:08.072 EAL: Requested device 0000:3d:01.0 cannot be used 00:35:08.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:08.072 EAL: Requested device 0000:3d:01.1 cannot be used 00:35:08.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:08.072 EAL: Requested device 0000:3d:01.2 cannot be used 00:35:08.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:08.072 EAL: Requested device 0000:3d:01.3 cannot be used 00:35:08.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:08.073 EAL: Requested device 0000:3d:01.4 cannot be used 00:35:08.073 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:08.073 EAL: Requested device 0000:3d:01.5 cannot be used 00:35:08.073 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:08.073 EAL: Requested device 0000:3d:01.6 cannot be used 00:35:08.073 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:08.073 EAL: Requested device 0000:3d:01.7 cannot 
be used 00:35:08.073 [2024-07-23 04:30:16.452055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:08.073 [2024-07-23 04:30:16.729674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:08.073 [2024-07-23 04:30:16.729684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:09.452 [2024-07-23 04:30:18.110804] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:35:10.458 04:30:18 compress_compdev -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:10.458 04:30:18 compress_compdev --
common/autotest_common.sh@862 -- # return 0 00:35:10.458 04:30:18 compress_compdev -- compress/compress.sh@74 -- # create_vols 512 00:35:10.458 04:30:18 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:10.458 04:30:18 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:35:13.744 [2024-07-23 04:30:22.031529] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00001d220 PMD being used: compress_qat 00:35:13.744 04:30:22 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:35:13.744 04:30:22 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:35:13.744 04:30:22 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:35:13.744 04:30:22 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:35:13.744 04:30:22 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:35:13.744 04:30:22 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:35:13.744 04:30:22 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:13.744 04:30:22 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:35:13.744 [ 00:35:13.744 { 00:35:13.744 "name": "Nvme0n1", 00:35:13.744 "aliases": [ 00:35:13.744 "d720311e-d58e-4e37-bcae-c81239f44a34" 00:35:13.744 ], 00:35:13.744 "product_name": "NVMe disk", 00:35:13.744 "block_size": 512, 00:35:13.744 "num_blocks": 3907029168, 00:35:13.744 "uuid": "d720311e-d58e-4e37-bcae-c81239f44a34", 00:35:13.744 "assigned_rate_limits": { 00:35:13.744 "rw_ios_per_sec": 0, 00:35:13.744 "rw_mbytes_per_sec": 0, 00:35:13.744 "r_mbytes_per_sec": 0, 00:35:13.744 "w_mbytes_per_sec": 0 00:35:13.744 }, 
00:35:13.744 "claimed": false, 00:35:13.744 "zoned": false, 00:35:13.744 "supported_io_types": { 00:35:13.744 "read": true, 00:35:13.744 "write": true, 00:35:13.744 "unmap": true, 00:35:13.744 "flush": true, 00:35:13.744 "reset": true, 00:35:13.744 "nvme_admin": true, 00:35:13.744 "nvme_io": true, 00:35:13.744 "nvme_io_md": false, 00:35:13.744 "write_zeroes": true, 00:35:13.744 "zcopy": false, 00:35:13.744 "get_zone_info": false, 00:35:13.744 "zone_management": false, 00:35:13.744 "zone_append": false, 00:35:13.744 "compare": false, 00:35:13.744 "compare_and_write": false, 00:35:13.744 "abort": true, 00:35:13.744 "seek_hole": false, 00:35:13.744 "seek_data": false, 00:35:13.744 "copy": false, 00:35:13.744 "nvme_iov_md": false 00:35:13.744 }, 00:35:13.744 "driver_specific": { 00:35:13.744 "nvme": [ 00:35:13.744 { 00:35:13.744 "pci_address": "0000:d8:00.0", 00:35:13.744 "trid": { 00:35:13.744 "trtype": "PCIe", 00:35:13.744 "traddr": "0000:d8:00.0" 00:35:13.744 }, 00:35:13.744 "ctrlr_data": { 00:35:13.744 "cntlid": 0, 00:35:13.744 "vendor_id": "0x8086", 00:35:13.744 "model_number": "INTEL SSDPE2KX020T8", 00:35:13.744 "serial_number": "BTLJ125505KA2P0BGN", 00:35:13.744 "firmware_revision": "VDV10170", 00:35:13.744 "oacs": { 00:35:13.744 "security": 0, 00:35:13.744 "format": 1, 00:35:13.744 "firmware": 1, 00:35:13.744 "ns_manage": 1 00:35:13.744 }, 00:35:13.744 "multi_ctrlr": false, 00:35:13.744 "ana_reporting": false 00:35:13.744 }, 00:35:13.744 "vs": { 00:35:13.744 "nvme_version": "1.2" 00:35:13.744 }, 00:35:13.744 "ns_data": { 00:35:13.744 "id": 1, 00:35:13.744 "can_share": false 00:35:13.745 } 00:35:13.745 } 00:35:13.745 ], 00:35:13.745 "mp_policy": "active_passive" 00:35:13.745 } 00:35:13.745 } 00:35:13.745 ] 00:35:13.745 04:30:22 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:35:13.745 04:30:22 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method 
none Nvme0n1 lvs0 00:35:14.003 [2024-07-23 04:30:22.583399] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00001d3e0 PMD being used: compress_qat 00:35:14.937 fc1f84dc-b352-4b0e-8e0f-dad6df448147 00:35:14.937 04:30:23 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:35:15.196 5ce22fa3-4522-4229-9945-6bab82b81fb9 00:35:15.196 04:30:23 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:35:15.196 04:30:23 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:35:15.196 04:30:23 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:35:15.196 04:30:23 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:35:15.196 04:30:23 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:35:15.196 04:30:23 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:35:15.196 04:30:23 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:15.454 04:30:24 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:35:15.454 [ 00:35:15.454 { 00:35:15.454 "name": "5ce22fa3-4522-4229-9945-6bab82b81fb9", 00:35:15.454 "aliases": [ 00:35:15.454 "lvs0/lv0" 00:35:15.454 ], 00:35:15.454 "product_name": "Logical Volume", 00:35:15.454 "block_size": 512, 00:35:15.454 "num_blocks": 204800, 00:35:15.454 "uuid": "5ce22fa3-4522-4229-9945-6bab82b81fb9", 00:35:15.454 "assigned_rate_limits": { 00:35:15.454 "rw_ios_per_sec": 0, 00:35:15.454 "rw_mbytes_per_sec": 0, 00:35:15.454 "r_mbytes_per_sec": 0, 00:35:15.454 "w_mbytes_per_sec": 0 00:35:15.454 }, 00:35:15.454 "claimed": false, 00:35:15.454 "zoned": false, 00:35:15.454 "supported_io_types": { 00:35:15.454 "read": true, 00:35:15.454 
"write": true, 00:35:15.454 "unmap": true, 00:35:15.454 "flush": false, 00:35:15.454 "reset": true, 00:35:15.454 "nvme_admin": false, 00:35:15.454 "nvme_io": false, 00:35:15.454 "nvme_io_md": false, 00:35:15.454 "write_zeroes": true, 00:35:15.454 "zcopy": false, 00:35:15.454 "get_zone_info": false, 00:35:15.454 "zone_management": false, 00:35:15.454 "zone_append": false, 00:35:15.454 "compare": false, 00:35:15.454 "compare_and_write": false, 00:35:15.454 "abort": false, 00:35:15.454 "seek_hole": true, 00:35:15.454 "seek_data": true, 00:35:15.454 "copy": false, 00:35:15.454 "nvme_iov_md": false 00:35:15.454 }, 00:35:15.454 "driver_specific": { 00:35:15.454 "lvol": { 00:35:15.454 "lvol_store_uuid": "fc1f84dc-b352-4b0e-8e0f-dad6df448147", 00:35:15.454 "base_bdev": "Nvme0n1", 00:35:15.454 "thin_provision": true, 00:35:15.454 "num_allocated_clusters": 0, 00:35:15.455 "snapshot": false, 00:35:15.455 "clone": false, 00:35:15.455 "esnap_clone": false 00:35:15.455 } 00:35:15.455 } 00:35:15.455 } 00:35:15.455 ] 00:35:15.455 04:30:24 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:35:15.455 04:30:24 compress_compdev -- compress/compress.sh@41 -- # '[' -z 512 ']' 00:35:15.455 04:30:24 compress_compdev -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 512 00:35:15.713 [2024-07-23 04:30:24.458343] vbdev_compress.c: 999:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:35:15.713 COMP_lvs0/lv0 00:35:15.713 04:30:24 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:35:15.713 04:30:24 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:35:15.713 04:30:24 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:35:15.713 04:30:24 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:35:15.713 04:30:24 compress_compdev -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:35:15.713 04:30:24 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:35:15.713 04:30:24 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:15.971 04:30:24 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:35:16.229 [ 00:35:16.229 { 00:35:16.229 "name": "COMP_lvs0/lv0", 00:35:16.229 "aliases": [ 00:35:16.229 "38d5f745-7c72-5616-a3bf-7b9650f3a8a2" 00:35:16.229 ], 00:35:16.229 "product_name": "compress", 00:35:16.230 "block_size": 512, 00:35:16.230 "num_blocks": 200704, 00:35:16.230 "uuid": "38d5f745-7c72-5616-a3bf-7b9650f3a8a2", 00:35:16.230 "assigned_rate_limits": { 00:35:16.230 "rw_ios_per_sec": 0, 00:35:16.230 "rw_mbytes_per_sec": 0, 00:35:16.230 "r_mbytes_per_sec": 0, 00:35:16.230 "w_mbytes_per_sec": 0 00:35:16.230 }, 00:35:16.230 "claimed": false, 00:35:16.230 "zoned": false, 00:35:16.230 "supported_io_types": { 00:35:16.230 "read": true, 00:35:16.230 "write": true, 00:35:16.230 "unmap": false, 00:35:16.230 "flush": false, 00:35:16.230 "reset": false, 00:35:16.230 "nvme_admin": false, 00:35:16.230 "nvme_io": false, 00:35:16.230 "nvme_io_md": false, 00:35:16.230 "write_zeroes": true, 00:35:16.230 "zcopy": false, 00:35:16.230 "get_zone_info": false, 00:35:16.230 "zone_management": false, 00:35:16.230 "zone_append": false, 00:35:16.230 "compare": false, 00:35:16.230 "compare_and_write": false, 00:35:16.230 "abort": false, 00:35:16.230 "seek_hole": false, 00:35:16.230 "seek_data": false, 00:35:16.230 "copy": false, 00:35:16.230 "nvme_iov_md": false 00:35:16.230 }, 00:35:16.230 "driver_specific": { 00:35:16.230 "compress": { 00:35:16.230 "name": "COMP_lvs0/lv0", 00:35:16.230 "base_bdev_name": "5ce22fa3-4522-4229-9945-6bab82b81fb9", 00:35:16.230 "pm_path": 
"/tmp/pmem/570c7327-6276-46f4-850a-4197f3759a89" 00:35:16.230 } 00:35:16.230 } 00:35:16.230 } 00:35:16.230 ] 00:35:16.230 04:30:24 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:35:16.230 04:30:24 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:35:16.488 [2024-07-23 04:30:25.058878] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e0000101e0 PMD being used: compress_qat 00:35:16.488 [2024-07-23 04:30:25.062168] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00001d4c0 PMD being used: compress_qat 00:35:16.488 Running I/O for 3 seconds... 00:35:19.777 00:35:19.777 Latency(us) 00:35:19.777 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:19.777 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:35:19.777 Verification LBA range: start 0x0 length 0x3100 00:35:19.777 COMP_lvs0/lv0 : 3.00 3800.04 14.84 0.00 0.00 8368.95 135.17 15833.50 00:35:19.777 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:35:19.777 Verification LBA range: start 0x3100 length 0x3100 00:35:19.777 COMP_lvs0/lv0 : 3.01 3932.90 15.36 0.00 0.00 8089.39 126.16 15728.64 00:35:19.777 =================================================================================================================== 00:35:19.777 Total : 7732.95 30.21 0.00 0.00 8226.68 126.16 15833.50 00:35:19.777 0 00:35:19.777 04:30:28 compress_compdev -- compress/compress.sh@76 -- # destroy_vols 00:35:19.777 04:30:28 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:35:19.777 04:30:28 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:35:20.036 04:30:28 compress_compdev -- compress/compress.sh@77 -- # trap - 
SIGINT SIGTERM EXIT 00:35:20.036 04:30:28 compress_compdev -- compress/compress.sh@78 -- # killprocess 2846439 00:35:20.036 04:30:28 compress_compdev -- common/autotest_common.sh@948 -- # '[' -z 2846439 ']' 00:35:20.036 04:30:28 compress_compdev -- common/autotest_common.sh@952 -- # kill -0 2846439 00:35:20.036 04:30:28 compress_compdev -- common/autotest_common.sh@953 -- # uname 00:35:20.036 04:30:28 compress_compdev -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:20.036 04:30:28 compress_compdev -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2846439 00:35:20.036 04:30:28 compress_compdev -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:20.036 04:30:28 compress_compdev -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:20.036 04:30:28 compress_compdev -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2846439' 00:35:20.036 killing process with pid 2846439 00:35:20.036 04:30:28 compress_compdev -- common/autotest_common.sh@967 -- # kill 2846439 00:35:20.036 Received shutdown signal, test time was about 3.000000 seconds 00:35:20.036 00:35:20.036 Latency(us) 00:35:20.036 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:20.036 =================================================================================================================== 00:35:20.036 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:20.036 04:30:28 compress_compdev -- common/autotest_common.sh@972 -- # wait 2846439 00:35:24.230 04:30:32 compress_compdev -- compress/compress.sh@88 -- # run_bdevperf 32 4096 3 4096 00:35:24.230 04:30:32 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:35:24.230 04:30:32 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=2849099 00:35:24.230 04:30:32 compress_compdev -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:24.230 04:30:32 compress_compdev 
-- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:35:24.230 04:30:32 compress_compdev -- compress/compress.sh@73 -- # waitforlisten 2849099 00:35:24.230 04:30:32 compress_compdev -- common/autotest_common.sh@829 -- # '[' -z 2849099 ']' 00:35:24.230 04:30:32 compress_compdev -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:24.230 04:30:32 compress_compdev -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:24.230 04:30:32 compress_compdev -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:24.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:24.230 04:30:32 compress_compdev -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:24.230 04:30:32 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:35:24.230 [2024-07-23 04:30:32.296600] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:35:24.230 [2024-07-23 04:30:32.296723] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2849099 ] 00:35:24.231 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:24.231 EAL: Requested device 0000:3d:01.0 cannot be used 00:35:24.231 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:24.231 EAL: Requested device 0000:3d:01.1 cannot be used 00:35:24.231 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:24.231 EAL: Requested device 0000:3d:01.2 cannot be used 00:35:24.231 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:24.231 EAL: Requested device 0000:3d:01.3 cannot be used 00:35:24.231 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:24.231 EAL: Requested device 0000:3d:01.4 cannot be used 00:35:24.231 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:24.231 EAL: Requested device 0000:3d:01.5 cannot be used 00:35:24.231 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:24.231 EAL: Requested device 0000:3d:01.6 cannot be used 00:35:24.231 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:24.231 EAL: Requested device 0000:3d:01.7 cannot be used 00:35:24.231 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:24.231 EAL: Requested device 0000:3d:02.0 cannot be used 00:35:24.231 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:24.231 EAL: Requested device 0000:3d:02.1 cannot be used 00:35:24.231 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:24.231 EAL: Requested device 0000:3d:02.2 cannot be used 00:35:24.231 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:24.231 EAL: Requested device 0000:3d:02.3 cannot be used 
00:35:24.231 [2024-07-23 04:30:32.508756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:24.231 [2024-07-23 04:30:32.780032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:24.231 [2024-07-23 04:30:32.780034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:25.608 [2024-07-23 04:30:34.170530] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:35:26.175 04:30:34 compress_compdev -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:26.175 04:30:34 compress_compdev -- common/autotest_common.sh@862 -- # return 0 00:35:26.175 04:30:34 compress_compdev -- compress/compress.sh@74 -- # create_vols 4096 00:35:26.175 04:30:34 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:35:26.175 04:30:34 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:29.459 [2024-07-23 04:30:38.031532] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00001d220 PMD being used: compress_qat 00:35:29.459
04:30:38 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:35:29.459 04:30:38 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:35:29.459 04:30:38 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:35:29.459 04:30:38 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:35:29.459 04:30:38 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:35:29.459 04:30:38 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:35:29.459 04:30:38 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:29.718 04:30:38 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:35:29.978 [ 00:35:29.978 { 00:35:29.978 "name": "Nvme0n1", 00:35:29.978 "aliases": [ 00:35:29.978 "266bdd13-8584-46e2-b494-6274b0d7a509" 00:35:29.978 ], 00:35:29.978 "product_name": "NVMe disk", 00:35:29.978 "block_size": 512, 00:35:29.978 "num_blocks": 3907029168, 00:35:29.978 "uuid": "266bdd13-8584-46e2-b494-6274b0d7a509", 00:35:29.978 "assigned_rate_limits": { 00:35:29.978 "rw_ios_per_sec": 0, 00:35:29.978 "rw_mbytes_per_sec": 0, 00:35:29.978 "r_mbytes_per_sec": 0, 00:35:29.978 "w_mbytes_per_sec": 0 00:35:29.978 }, 00:35:29.978 "claimed": false, 00:35:29.978 "zoned": false, 00:35:29.978 "supported_io_types": { 00:35:29.978 "read": true, 00:35:29.978 "write": true, 00:35:29.978 "unmap": true, 00:35:29.978 "flush": true, 00:35:29.978 "reset": true, 00:35:29.978 "nvme_admin": true, 00:35:29.978 "nvme_io": true, 00:35:29.978 "nvme_io_md": false, 00:35:29.978 "write_zeroes": true, 00:35:29.978 "zcopy": false, 00:35:29.978 "get_zone_info": false, 00:35:29.978 "zone_management": false, 00:35:29.978 "zone_append": false, 00:35:29.978 "compare": false, 00:35:29.978 "compare_and_write": false, 
00:35:29.978 "abort": true, 00:35:29.978 "seek_hole": false, 00:35:29.978 "seek_data": false, 00:35:29.978 "copy": false, 00:35:29.978 "nvme_iov_md": false 00:35:29.978 }, 00:35:29.978 "driver_specific": { 00:35:29.978 "nvme": [ 00:35:29.978 { 00:35:29.978 "pci_address": "0000:d8:00.0", 00:35:29.978 "trid": { 00:35:29.978 "trtype": "PCIe", 00:35:29.978 "traddr": "0000:d8:00.0" 00:35:29.978 }, 00:35:29.978 "ctrlr_data": { 00:35:29.978 "cntlid": 0, 00:35:29.978 "vendor_id": "0x8086", 00:35:29.978 "model_number": "INTEL SSDPE2KX020T8", 00:35:29.978 "serial_number": "BTLJ125505KA2P0BGN", 00:35:29.979 "firmware_revision": "VDV10170", 00:35:29.979 "oacs": { 00:35:29.979 "security": 0, 00:35:29.979 "format": 1, 00:35:29.979 "firmware": 1, 00:35:29.979 "ns_manage": 1 00:35:29.979 }, 00:35:29.979 "multi_ctrlr": false, 00:35:29.979 "ana_reporting": false 00:35:29.979 }, 00:35:29.979 "vs": { 00:35:29.979 "nvme_version": "1.2" 00:35:29.979 }, 00:35:29.979 "ns_data": { 00:35:29.979 "id": 1, 00:35:29.979 "can_share": false 00:35:29.979 } 00:35:29.979 } 00:35:29.979 ], 00:35:29.979 "mp_policy": "active_passive" 00:35:29.979 } 00:35:29.979 } 00:35:29.979 ] 00:35:29.979 04:30:38 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:35:29.979 04:30:38 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:35:29.979 [2024-07-23 04:30:38.738993] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00001d3e0 PMD being used: compress_qat 00:35:31.421 3dea6608-ca09-444d-98aa-f0f82651dba1 00:35:31.421 04:30:39 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:35:31.421 32d07874-a793-4a51-ae51-ca8a017ac041 00:35:31.421 04:30:40 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:35:31.421 04:30:40 compress_compdev -- 
common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:35:31.421 04:30:40 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:35:31.421 04:30:40 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:35:31.421 04:30:40 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:35:31.421 04:30:40 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:35:31.421 04:30:40 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:31.681 04:30:40 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:35:31.940 [ 00:35:31.940 { 00:35:31.940 "name": "32d07874-a793-4a51-ae51-ca8a017ac041", 00:35:31.940 "aliases": [ 00:35:31.940 "lvs0/lv0" 00:35:31.940 ], 00:35:31.940 "product_name": "Logical Volume", 00:35:31.940 "block_size": 512, 00:35:31.940 "num_blocks": 204800, 00:35:31.940 "uuid": "32d07874-a793-4a51-ae51-ca8a017ac041", 00:35:31.940 "assigned_rate_limits": { 00:35:31.940 "rw_ios_per_sec": 0, 00:35:31.940 "rw_mbytes_per_sec": 0, 00:35:31.940 "r_mbytes_per_sec": 0, 00:35:31.940 "w_mbytes_per_sec": 0 00:35:31.940 }, 00:35:31.940 "claimed": false, 00:35:31.940 "zoned": false, 00:35:31.940 "supported_io_types": { 00:35:31.940 "read": true, 00:35:31.940 "write": true, 00:35:31.940 "unmap": true, 00:35:31.940 "flush": false, 00:35:31.940 "reset": true, 00:35:31.940 "nvme_admin": false, 00:35:31.940 "nvme_io": false, 00:35:31.940 "nvme_io_md": false, 00:35:31.940 "write_zeroes": true, 00:35:31.940 "zcopy": false, 00:35:31.940 "get_zone_info": false, 00:35:31.940 "zone_management": false, 00:35:31.940 "zone_append": false, 00:35:31.940 "compare": false, 00:35:31.940 "compare_and_write": false, 00:35:31.940 "abort": false, 00:35:31.940 "seek_hole": true, 00:35:31.940 "seek_data": true, 00:35:31.940 "copy": false, 
00:35:31.940 "nvme_iov_md": false 00:35:31.940 }, 00:35:31.940 "driver_specific": { 00:35:31.940 "lvol": { 00:35:31.940 "lvol_store_uuid": "3dea6608-ca09-444d-98aa-f0f82651dba1", 00:35:31.940 "base_bdev": "Nvme0n1", 00:35:31.940 "thin_provision": true, 00:35:31.940 "num_allocated_clusters": 0, 00:35:31.940 "snapshot": false, 00:35:31.940 "clone": false, 00:35:31.940 "esnap_clone": false 00:35:31.940 } 00:35:31.940 } 00:35:31.941 } 00:35:31.941 ] 00:35:31.941 04:30:40 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:35:31.941 04:30:40 compress_compdev -- compress/compress.sh@41 -- # '[' -z 4096 ']' 00:35:31.941 04:30:40 compress_compdev -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 4096 00:35:32.199 [2024-07-23 04:30:40.740984] vbdev_compress.c: 999:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:35:32.199 COMP_lvs0/lv0 00:35:32.199 04:30:40 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:35:32.199 04:30:40 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:35:32.199 04:30:40 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:35:32.199 04:30:40 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:35:32.199 04:30:40 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:35:32.199 04:30:40 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:35:32.199 04:30:40 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:32.457 04:30:40 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:35:32.457 [ 00:35:32.457 { 00:35:32.457 "name": "COMP_lvs0/lv0", 00:35:32.457 "aliases": [ 
00:35:32.457 "e8edfea1-d04d-56f7-8662-f16fd3d04dce" 00:35:32.457 ], 00:35:32.457 "product_name": "compress", 00:35:32.457 "block_size": 4096, 00:35:32.457 "num_blocks": 25088, 00:35:32.457 "uuid": "e8edfea1-d04d-56f7-8662-f16fd3d04dce", 00:35:32.457 "assigned_rate_limits": { 00:35:32.457 "rw_ios_per_sec": 0, 00:35:32.457 "rw_mbytes_per_sec": 0, 00:35:32.457 "r_mbytes_per_sec": 0, 00:35:32.457 "w_mbytes_per_sec": 0 00:35:32.457 }, 00:35:32.457 "claimed": false, 00:35:32.457 "zoned": false, 00:35:32.457 "supported_io_types": { 00:35:32.457 "read": true, 00:35:32.457 "write": true, 00:35:32.457 "unmap": false, 00:35:32.457 "flush": false, 00:35:32.457 "reset": false, 00:35:32.457 "nvme_admin": false, 00:35:32.457 "nvme_io": false, 00:35:32.457 "nvme_io_md": false, 00:35:32.457 "write_zeroes": true, 00:35:32.457 "zcopy": false, 00:35:32.457 "get_zone_info": false, 00:35:32.457 "zone_management": false, 00:35:32.457 "zone_append": false, 00:35:32.457 "compare": false, 00:35:32.457 "compare_and_write": false, 00:35:32.457 "abort": false, 00:35:32.457 "seek_hole": false, 00:35:32.457 "seek_data": false, 00:35:32.457 "copy": false, 00:35:32.457 "nvme_iov_md": false 00:35:32.457 }, 00:35:32.457 "driver_specific": { 00:35:32.457 "compress": { 00:35:32.457 "name": "COMP_lvs0/lv0", 00:35:32.457 "base_bdev_name": "32d07874-a793-4a51-ae51-ca8a017ac041", 00:35:32.457 "pm_path": "/tmp/pmem/785875d6-2703-443f-90a0-483c775e2e11" 00:35:32.457 } 00:35:32.457 } 00:35:32.457 } 00:35:32.457 ] 00:35:32.457 04:30:41 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:35:32.457 04:30:41 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:35:32.716 [2024-07-23 04:30:41.334425] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e0000101e0 PMD being used: compress_qat 00:35:32.716 [2024-07-23 04:30:41.337801] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 
0x60e00001d4c0 PMD being used: compress_qat
00:35:32.716 Running I/O for 3 seconds...
00:35:36.004
00:35:36.004 Latency(us)
00:35:36.004 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:36.004 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096)
00:35:36.004 Verification LBA range: start 0x0 length 0x3100
00:35:36.004 COMP_lvs0/lv0 : 3.01 3792.12 14.81 0.00 0.00 8378.69 181.04 14155.78
00:35:36.004 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096)
00:35:36.004 Verification LBA range: start 0x3100 length 0x3100
00:35:36.004 COMP_lvs0/lv0 : 3.01 3870.47 15.12 0.00 0.00 8225.24 172.03 13526.63
00:35:36.004 ===================================================================================================================
00:35:36.004 Total : 7662.59 29.93 0.00 0.00 8301.21 172.03 14155.78
00:35:36.004 0
00:35:36.004 04:30:44 compress_compdev -- compress/compress.sh@76 -- # destroy_vols
00:35:36.004 04:30:44 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0
00:35:36.263 04:30:44 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0
00:35:36.263 04:30:44 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT
00:35:36.263 04:30:44 compress_compdev -- compress/compress.sh@78 -- # killprocess 2849099
00:35:36.263 04:30:44 compress_compdev -- common/autotest_common.sh@948 -- # '[' -z 2849099 ']'
00:35:36.263 04:30:44 compress_compdev -- common/autotest_common.sh@952 -- # kill -0 2849099
00:35:36.263 04:30:44 compress_compdev -- common/autotest_common.sh@953 -- # uname
00:35:36.263 04:30:44 compress_compdev -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:35:36.263 04:30:44 compress_compdev -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2849099
00:35:36.263 04:30:44 compress_compdev -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:35:36.263 04:30:44 compress_compdev -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:35:36.263 04:30:44 compress_compdev -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2849099'
killing process with pid 2849099
04:30:44 compress_compdev -- common/autotest_common.sh@967 -- # kill 2849099
Received shutdown signal, test time was about 3.000000 seconds
00:35:36.263
00:35:36.263 Latency(us)
00:35:36.263 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:36.263 ===================================================================================================================
00:35:36.263 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:35:36.263 04:30:44 compress_compdev -- common/autotest_common.sh@972 -- # wait 2849099
00:35:40.457 04:30:48 compress_compdev -- compress/compress.sh@89 -- # run_bdevio
00:35:40.457 04:30:48 compress_compdev -- compress/compress.sh@50 -- # [[ compdev == \c\o\m\p\d\e\v ]]
00:35:40.457 04:30:48 compress_compdev -- compress/compress.sh@55 -- # bdevio_pid=2851753
00:35:40.457 04:30:48 compress_compdev -- compress/compress.sh@56 -- # trap 'killprocess $bdevio_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT
00:35:40.457 04:30:48 compress_compdev -- compress/compress.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json -w
00:35:40.457 04:30:48 compress_compdev -- compress/compress.sh@57 -- # waitforlisten 2851753
00:35:40.457 04:30:48 compress_compdev -- common/autotest_common.sh@829 -- # '[' -z 2851753 ']'
00:35:40.457 04:30:48 compress_compdev -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:40.457 04:30:48 compress_compdev -- common/autotest_common.sh@834 -- # local max_retries=100
00:35:40.457 04:30:48 compress_compdev --
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:40.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:40.457 04:30:48 compress_compdev -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:40.457 04:30:48 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:35:40.457 [2024-07-23 04:30:48.505215] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:35:40.457 [2024-07-23 04:30:48.505329] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2851753 ] 00:35:40.457 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.457 EAL: Requested device 0000:3d:01.0 cannot be used 00:35:40.457 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.457 EAL: Requested device 0000:3d:01.1 cannot be used 00:35:40.457 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.457 EAL: Requested device 0000:3d:01.2 cannot be used 00:35:40.457 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.457 EAL: Requested device 0000:3d:01.3 cannot be used 00:35:40.457 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.457 EAL: Requested device 0000:3d:01.4 cannot be used 00:35:40.457 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.457 EAL: Requested device 0000:3d:01.5 cannot be used 00:35:40.457 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.457 EAL: Requested device 0000:3d:01.6 cannot be used 00:35:40.457 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.457 EAL: Requested device 0000:3d:01.7 cannot be used 00:35:40.457 qat_pci_device_allocate(): Reached 
maximum number of QAT devices 00:35:40.457 EAL: Requested device 0000:3d:02.0 cannot be used 00:35:40.457 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.457 EAL: Requested device 0000:3d:02.1 cannot be used 00:35:40.457 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.457 EAL: Requested device 0000:3d:02.2 cannot be used 00:35:40.457 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.457 EAL: Requested device 0000:3d:02.3 cannot be used 00:35:40.457 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.457 EAL: Requested device 0000:3d:02.4 cannot be used 00:35:40.457 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.457 EAL: Requested device 0000:3d:02.5 cannot be used 00:35:40.457 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.457 EAL: Requested device 0000:3d:02.6 cannot be used 00:35:40.457 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.457 EAL: Requested device 0000:3d:02.7 cannot be used 00:35:40.457 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.457 EAL: Requested device 0000:3f:01.0 cannot be used 00:35:40.457 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.457 EAL: Requested device 0000:3f:01.1 cannot be used 00:35:40.457 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.457 EAL: Requested device 0000:3f:01.2 cannot be used 00:35:40.457 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.457 EAL: Requested device 0000:3f:01.3 cannot be used 00:35:40.457 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.457 EAL: Requested device 0000:3f:01.4 cannot be used 00:35:40.457 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.457 EAL: Requested device 0000:3f:01.5 cannot be used 00:35:40.457 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:35:40.457 EAL: Requested device 0000:3f:01.6 cannot be used 00:35:40.457 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.457 EAL: Requested device 0000:3f:01.7 cannot be used 00:35:40.457 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.457 EAL: Requested device 0000:3f:02.0 cannot be used 00:35:40.457 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.457 EAL: Requested device 0000:3f:02.1 cannot be used 00:35:40.457 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.457 EAL: Requested device 0000:3f:02.2 cannot be used 00:35:40.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.458 EAL: Requested device 0000:3f:02.3 cannot be used 00:35:40.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.458 EAL: Requested device 0000:3f:02.4 cannot be used 00:35:40.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.458 EAL: Requested device 0000:3f:02.5 cannot be used 00:35:40.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.458 EAL: Requested device 0000:3f:02.6 cannot be used 00:35:40.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.458 EAL: Requested device 0000:3f:02.7 cannot be used 00:35:40.458 [2024-07-23 04:30:48.720515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:40.458 [2024-07-23 04:30:49.005827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:40.458 [2024-07-23 04:30:49.005885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:40.458 [2024-07-23 04:30:49.005890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:41.836 [2024-07-23 04:30:50.416051] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:35:42.403 04:30:51 compress_compdev -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:42.403 
04:30:51 compress_compdev -- common/autotest_common.sh@862 -- # return 0 00:35:42.403 04:30:51 compress_compdev -- compress/compress.sh@58 -- # create_vols 00:35:42.403 04:30:51 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:35:42.403 04:30:51 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:45.691 [2024-07-23 04:30:54.256947] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000026280 PMD being used: compress_qat 00:35:45.691 04:30:54 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:35:45.691 04:30:54 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:35:45.691 04:30:54 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:35:45.691 04:30:54 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:35:45.691 04:30:54 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:35:45.691 04:30:54 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:35:45.691 04:30:54 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:45.950 04:30:54 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:35:46.209 [ 00:35:46.209 { 00:35:46.209 "name": "Nvme0n1", 00:35:46.209 "aliases": [ 00:35:46.209 "d9461726-87ed-4673-beb1-cdfe6a9e817f" 00:35:46.209 ], 00:35:46.209 "product_name": "NVMe disk", 00:35:46.209 "block_size": 512, 00:35:46.209 "num_blocks": 3907029168, 00:35:46.209 "uuid": "d9461726-87ed-4673-beb1-cdfe6a9e817f", 00:35:46.209 "assigned_rate_limits": { 00:35:46.209 "rw_ios_per_sec": 0, 00:35:46.209 "rw_mbytes_per_sec": 0, 00:35:46.209 "r_mbytes_per_sec": 0, 00:35:46.209 "w_mbytes_per_sec": 0 
00:35:46.209 }, 00:35:46.209 "claimed": false, 00:35:46.209 "zoned": false, 00:35:46.209 "supported_io_types": { 00:35:46.209 "read": true, 00:35:46.209 "write": true, 00:35:46.209 "unmap": true, 00:35:46.209 "flush": true, 00:35:46.209 "reset": true, 00:35:46.209 "nvme_admin": true, 00:35:46.209 "nvme_io": true, 00:35:46.209 "nvme_io_md": false, 00:35:46.209 "write_zeroes": true, 00:35:46.209 "zcopy": false, 00:35:46.209 "get_zone_info": false, 00:35:46.209 "zone_management": false, 00:35:46.209 "zone_append": false, 00:35:46.209 "compare": false, 00:35:46.209 "compare_and_write": false, 00:35:46.209 "abort": true, 00:35:46.209 "seek_hole": false, 00:35:46.209 "seek_data": false, 00:35:46.209 "copy": false, 00:35:46.209 "nvme_iov_md": false 00:35:46.209 }, 00:35:46.209 "driver_specific": { 00:35:46.209 "nvme": [ 00:35:46.209 { 00:35:46.209 "pci_address": "0000:d8:00.0", 00:35:46.209 "trid": { 00:35:46.209 "trtype": "PCIe", 00:35:46.209 "traddr": "0000:d8:00.0" 00:35:46.209 }, 00:35:46.209 "ctrlr_data": { 00:35:46.209 "cntlid": 0, 00:35:46.209 "vendor_id": "0x8086", 00:35:46.209 "model_number": "INTEL SSDPE2KX020T8", 00:35:46.209 "serial_number": "BTLJ125505KA2P0BGN", 00:35:46.209 "firmware_revision": "VDV10170", 00:35:46.209 "oacs": { 00:35:46.209 "security": 0, 00:35:46.209 "format": 1, 00:35:46.209 "firmware": 1, 00:35:46.209 "ns_manage": 1 00:35:46.209 }, 00:35:46.209 "multi_ctrlr": false, 00:35:46.209 "ana_reporting": false 00:35:46.209 }, 00:35:46.209 "vs": { 00:35:46.209 "nvme_version": "1.2" 00:35:46.209 }, 00:35:46.209 "ns_data": { 00:35:46.209 "id": 1, 00:35:46.209 "can_share": false 00:35:46.209 } 00:35:46.209 } 00:35:46.209 ], 00:35:46.209 "mp_policy": "active_passive" 00:35:46.209 } 00:35:46.209 } 00:35:46.209 ] 00:35:46.209 04:30:54 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:35:46.209 04:30:54 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 
--clear-method none Nvme0n1 lvs0 00:35:46.209 [2024-07-23 04:30:54.973465] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000026440 PMD being used: compress_qat 00:35:47.586 273e0cf5-0e8c-4812-af6f-1e686139352c 00:35:47.586 04:30:56 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:35:47.586 6ced2ffb-f542-4052-8344-1a3b8ddd0899 00:35:47.586 04:30:56 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:35:47.586 04:30:56 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:35:47.586 04:30:56 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:35:47.586 04:30:56 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:35:47.586 04:30:56 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:35:47.586 04:30:56 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:35:47.586 04:30:56 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:47.844 04:30:56 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:35:48.103 [ 00:35:48.103 { 00:35:48.103 "name": "6ced2ffb-f542-4052-8344-1a3b8ddd0899", 00:35:48.103 "aliases": [ 00:35:48.103 "lvs0/lv0" 00:35:48.103 ], 00:35:48.103 "product_name": "Logical Volume", 00:35:48.103 "block_size": 512, 00:35:48.103 "num_blocks": 204800, 00:35:48.103 "uuid": "6ced2ffb-f542-4052-8344-1a3b8ddd0899", 00:35:48.103 "assigned_rate_limits": { 00:35:48.103 "rw_ios_per_sec": 0, 00:35:48.103 "rw_mbytes_per_sec": 0, 00:35:48.103 "r_mbytes_per_sec": 0, 00:35:48.103 "w_mbytes_per_sec": 0 00:35:48.103 }, 00:35:48.103 "claimed": false, 00:35:48.103 "zoned": false, 00:35:48.103 "supported_io_types": { 00:35:48.103 "read": true, 
00:35:48.103 "write": true, 00:35:48.103 "unmap": true, 00:35:48.103 "flush": false, 00:35:48.103 "reset": true, 00:35:48.103 "nvme_admin": false, 00:35:48.104 "nvme_io": false, 00:35:48.104 "nvme_io_md": false, 00:35:48.104 "write_zeroes": true, 00:35:48.104 "zcopy": false, 00:35:48.104 "get_zone_info": false, 00:35:48.104 "zone_management": false, 00:35:48.104 "zone_append": false, 00:35:48.104 "compare": false, 00:35:48.104 "compare_and_write": false, 00:35:48.104 "abort": false, 00:35:48.104 "seek_hole": true, 00:35:48.104 "seek_data": true, 00:35:48.104 "copy": false, 00:35:48.104 "nvme_iov_md": false 00:35:48.104 }, 00:35:48.104 "driver_specific": { 00:35:48.104 "lvol": { 00:35:48.104 "lvol_store_uuid": "273e0cf5-0e8c-4812-af6f-1e686139352c", 00:35:48.104 "base_bdev": "Nvme0n1", 00:35:48.104 "thin_provision": true, 00:35:48.104 "num_allocated_clusters": 0, 00:35:48.104 "snapshot": false, 00:35:48.104 "clone": false, 00:35:48.104 "esnap_clone": false 00:35:48.104 } 00:35:48.104 } 00:35:48.104 } 00:35:48.104 ] 00:35:48.104 04:30:56 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:35:48.104 04:30:56 compress_compdev -- compress/compress.sh@41 -- # '[' -z '' ']' 00:35:48.104 04:30:56 compress_compdev -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:35:48.363 [2024-07-23 04:30:56.935512] vbdev_compress.c: 999:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:35:48.363 COMP_lvs0/lv0 00:35:48.363 04:30:56 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:35:48.363 04:30:56 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:35:48.363 04:30:56 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:35:48.363 04:30:56 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:35:48.363 04:30:56 compress_compdev -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:35:48.363 04:30:56 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:35:48.363 04:30:56 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:48.622 04:30:57 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:35:48.622 [ 00:35:48.622 { 00:35:48.622 "name": "COMP_lvs0/lv0", 00:35:48.622 "aliases": [ 00:35:48.622 "10b2059b-ccb2-52db-8667-18d62c5aa63a" 00:35:48.622 ], 00:35:48.622 "product_name": "compress", 00:35:48.622 "block_size": 512, 00:35:48.622 "num_blocks": 200704, 00:35:48.622 "uuid": "10b2059b-ccb2-52db-8667-18d62c5aa63a", 00:35:48.622 "assigned_rate_limits": { 00:35:48.622 "rw_ios_per_sec": 0, 00:35:48.622 "rw_mbytes_per_sec": 0, 00:35:48.622 "r_mbytes_per_sec": 0, 00:35:48.622 "w_mbytes_per_sec": 0 00:35:48.622 }, 00:35:48.622 "claimed": false, 00:35:48.622 "zoned": false, 00:35:48.622 "supported_io_types": { 00:35:48.622 "read": true, 00:35:48.622 "write": true, 00:35:48.622 "unmap": false, 00:35:48.622 "flush": false, 00:35:48.622 "reset": false, 00:35:48.622 "nvme_admin": false, 00:35:48.622 "nvme_io": false, 00:35:48.622 "nvme_io_md": false, 00:35:48.622 "write_zeroes": true, 00:35:48.622 "zcopy": false, 00:35:48.622 "get_zone_info": false, 00:35:48.622 "zone_management": false, 00:35:48.622 "zone_append": false, 00:35:48.622 "compare": false, 00:35:48.622 "compare_and_write": false, 00:35:48.622 "abort": false, 00:35:48.622 "seek_hole": false, 00:35:48.622 "seek_data": false, 00:35:48.622 "copy": false, 00:35:48.622 "nvme_iov_md": false 00:35:48.622 }, 00:35:48.622 "driver_specific": { 00:35:48.622 "compress": { 00:35:48.622 "name": "COMP_lvs0/lv0", 00:35:48.622 "base_bdev_name": "6ced2ffb-f542-4052-8344-1a3b8ddd0899", 00:35:48.622 "pm_path": 
"/tmp/pmem/56d5d2e8-c3dd-40ea-9bd8-3d71c9342e84"
00:35:48.622 }
00:35:48.622 }
00:35:48.622 }
00:35:48.622 ]
00:35:48.622 04:30:57 compress_compdev -- common/autotest_common.sh@905 -- # return 0
00:35:48.622 04:30:57 compress_compdev -- compress/compress.sh@59 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests
00:35:48.881 [2024-07-23 04:30:57.512489] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e0000171e0 PMD being used: compress_qat
00:35:48.881 I/O targets:
00:35:48.881 COMP_lvs0/lv0: 200704 blocks of 512 bytes (98 MiB)
00:35:48.881
00:35:48.881
00:35:48.881 CUnit - A unit testing framework for C - Version 2.1-3
00:35:48.881 http://cunit.sourceforge.net/
00:35:48.881
00:35:48.881
00:35:48.881 Suite: bdevio tests on: COMP_lvs0/lv0
00:35:48.881 Test: blockdev write read block ...passed
00:35:48.881 Test: blockdev write zeroes read block ...passed
00:35:48.881 Test: blockdev write zeroes read no split ...passed
00:35:48.881 Test: blockdev write zeroes read split ...passed
00:35:48.881 Test: blockdev write zeroes read split partial ...passed
00:35:48.881 Test: blockdev reset ...[2024-07-23 04:30:57.654206] vbdev_compress.c: 252:vbdev_compress_submit_request: *ERROR*: Unknown I/O type 5
00:35:48.881 passed
00:35:48.881 Test: blockdev write read 8 blocks ...passed
00:35:48.881 Test: blockdev write read size > 128k ...passed
00:35:48.881 Test: blockdev write read invalid size ...passed
00:35:48.881 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:35:48.881 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:35:48.881 Test: blockdev write read max offset ...passed
00:35:48.881 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:35:48.881 Test: blockdev writev readv 8 blocks ...passed
00:35:48.881 Test: blockdev writev readv 30 x 1block ...passed
00:35:48.881 Test: blockdev writev readv block ...passed
00:35:49.169 Test: blockdev writev readv size > 128k ...passed
00:35:49.169 Test: blockdev writev readv size > 128k in two iovs ...passed
00:35:49.169 Test: blockdev comparev and writev ...passed
00:35:49.169 Test: blockdev nvme passthru rw ...passed
00:35:49.169 Test: blockdev nvme passthru vendor specific ...passed
00:35:49.169 Test: blockdev nvme admin passthru ...passed
00:35:49.169 Test: blockdev copy ...passed
00:35:49.169
00:35:49.169 Run Summary: Type Total Ran Passed Failed Inactive
00:35:49.169 suites 1 1 n/a 0 0
00:35:49.169 tests 23 23 23 0 0
00:35:49.169 asserts 130 130 130 0 n/a
00:35:49.169
00:35:49.169 Elapsed time = 0.389 seconds
00:35:49.169 0
00:35:49.169 04:30:57 compress_compdev -- compress/compress.sh@60 -- # destroy_vols
00:35:49.169 04:30:57 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0
00:35:49.429 04:30:57 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0
00:35:49.429 04:30:58 compress_compdev -- compress/compress.sh@61 -- # trap - SIGINT SIGTERM EXIT
00:35:49.429 04:30:58 compress_compdev -- compress/compress.sh@62 -- # killprocess 2851753
00:35:49.429 04:30:58 compress_compdev -- common/autotest_common.sh@948 -- # '[' -z 2851753 ']'
00:35:49.429 04:30:58 compress_compdev -- common/autotest_common.sh@952 -- # kill -0 2851753
00:35:49.429 04:30:58 compress_compdev -- common/autotest_common.sh@953 -- # uname
00:35:49.687 04:30:58 compress_compdev -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:35:49.687 04:30:58 compress_compdev -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2851753
00:35:49.687 04:30:58 compress_compdev -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:35:49.687 04:30:58 compress_compdev -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:35:49.687 04:30:58 compress_compdev -- common/autotest_common.sh@966
-- # echo 'killing process with pid 2851753' 00:35:49.687 killing process with pid 2851753 00:35:49.687 04:30:58 compress_compdev -- common/autotest_common.sh@967 -- # kill 2851753 00:35:49.687 04:30:58 compress_compdev -- common/autotest_common.sh@972 -- # wait 2851753 00:35:53.881 04:31:01 compress_compdev -- compress/compress.sh@91 -- # '[' 1 -eq 1 ']' 00:35:53.881 04:31:01 compress_compdev -- compress/compress.sh@92 -- # run_bdevperf 64 16384 30 00:35:53.881 04:31:01 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:35:53.881 04:31:01 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=2853896 00:35:53.881 04:31:01 compress_compdev -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:53.881 04:31:01 compress_compdev -- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 64 -o 16384 -w verify -t 30 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:35:53.881 04:31:01 compress_compdev -- compress/compress.sh@73 -- # waitforlisten 2853896 00:35:53.881 04:31:01 compress_compdev -- common/autotest_common.sh@829 -- # '[' -z 2853896 ']' 00:35:53.881 04:31:01 compress_compdev -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:53.881 04:31:01 compress_compdev -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:53.881 04:31:01 compress_compdev -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:53.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:53.881 04:31:01 compress_compdev -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:53.881 04:31:01 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:35:53.881 [2024-07-23 04:31:01.898231] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:35:53.881 [2024-07-23 04:31:01.898362] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2853896 ] 00:35:53.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:53.881 EAL: Requested device 0000:3d:01.0 cannot be used 00:35:53.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:53.881 EAL: Requested device 0000:3d:01.1 cannot be used 00:35:53.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:53.881 EAL: Requested device 0000:3d:01.2 cannot be used 00:35:53.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:53.881 EAL: Requested device 0000:3d:01.3 cannot be used 00:35:53.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:53.881 EAL: Requested device 0000:3d:01.4 cannot be used 00:35:53.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:53.881 EAL: Requested device 0000:3d:01.5 cannot be used 00:35:53.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:53.881 EAL: Requested device 0000:3d:01.6 cannot be used 00:35:53.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:53.881 EAL: Requested device 0000:3d:01.7 cannot be used 00:35:53.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:53.881 EAL: Requested device 0000:3d:02.0 cannot be used 00:35:53.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:53.881 EAL: Requested device 0000:3d:02.1 cannot be 
used 00:35:53.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:53.881 EAL: Requested device 0000:3d:02.2 cannot be used 00:35:53.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:53.881 EAL: Requested device 0000:3d:02.3 cannot be used 00:35:53.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:53.881 EAL: Requested device 0000:3d:02.4 cannot be used 00:35:53.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:53.881 EAL: Requested device 0000:3d:02.5 cannot be used 00:35:53.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:53.881 EAL: Requested device 0000:3d:02.6 cannot be used 00:35:53.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:53.881 EAL: Requested device 0000:3d:02.7 cannot be used 00:35:53.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:53.881 EAL: Requested device 0000:3f:01.0 cannot be used 00:35:53.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:53.881 EAL: Requested device 0000:3f:01.1 cannot be used 00:35:53.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:53.881 EAL: Requested device 0000:3f:01.2 cannot be used 00:35:53.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:53.881 EAL: Requested device 0000:3f:01.3 cannot be used 00:35:53.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:53.881 EAL: Requested device 0000:3f:01.4 cannot be used 00:35:53.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:53.881 EAL: Requested device 0000:3f:01.5 cannot be used 00:35:53.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:53.881 EAL: Requested device 0000:3f:01.6 cannot be used 00:35:53.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:53.881 EAL: Requested device 0000:3f:01.7 cannot be used 00:35:53.881 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:53.881 EAL: Requested device 0000:3f:02.0 cannot be used 00:35:53.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:53.881 EAL: Requested device 0000:3f:02.1 cannot be used 00:35:53.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:53.881 EAL: Requested device 0000:3f:02.2 cannot be used 00:35:53.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:53.881 EAL: Requested device 0000:3f:02.3 cannot be used 00:35:53.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:53.881 EAL: Requested device 0000:3f:02.4 cannot be used 00:35:53.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:53.881 EAL: Requested device 0000:3f:02.5 cannot be used 00:35:53.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:53.881 EAL: Requested device 0000:3f:02.6 cannot be used 00:35:53.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:53.881 EAL: Requested device 0000:3f:02.7 cannot be used 00:35:53.881 [2024-07-23 04:31:02.112936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:53.881 [2024-07-23 04:31:02.403944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:53.881 [2024-07-23 04:31:02.403946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:55.261 [2024-07-23 04:31:03.776555] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:35:55.830 04:31:04 compress_compdev -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:55.830 04:31:04 compress_compdev -- common/autotest_common.sh@862 -- # return 0 00:35:55.830 04:31:04 compress_compdev -- compress/compress.sh@74 -- # create_vols 00:35:55.830 04:31:04 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:55.830 04:31:04 compress_compdev 
-- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:35:59.123 [2024-07-23 04:31:07.640801] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00001d220 PMD being used: compress_qat 00:35:59.123 04:31:07 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:35:59.123 04:31:07 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:35:59.123 04:31:07 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:35:59.123 04:31:07 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:35:59.123 04:31:07 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:35:59.123 04:31:07 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:35:59.123 04:31:07 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:59.382 04:31:07 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:35:59.382 [ 00:35:59.382 { 00:35:59.382 "name": "Nvme0n1", 00:35:59.382 "aliases": [ 00:35:59.382 "bd198552-75eb-4237-8c8f-ed21833bbf5b" 00:35:59.382 ], 00:35:59.382 "product_name": "NVMe disk", 00:35:59.382 "block_size": 512, 00:35:59.382 "num_blocks": 3907029168, 00:35:59.382 "uuid": "bd198552-75eb-4237-8c8f-ed21833bbf5b", 00:35:59.382 "assigned_rate_limits": { 00:35:59.382 "rw_ios_per_sec": 0, 00:35:59.382 "rw_mbytes_per_sec": 0, 00:35:59.382 "r_mbytes_per_sec": 0, 00:35:59.382 "w_mbytes_per_sec": 0 00:35:59.382 }, 00:35:59.382 "claimed": false, 00:35:59.382 "zoned": false, 00:35:59.382 "supported_io_types": { 00:35:59.382 "read": true, 00:35:59.382 "write": true, 00:35:59.382 "unmap": true, 00:35:59.382 "flush": true, 00:35:59.382 "reset": true, 00:35:59.382 "nvme_admin": true, 00:35:59.382 "nvme_io": true, 00:35:59.382 
"nvme_io_md": false, 00:35:59.382 "write_zeroes": true, 00:35:59.382 "zcopy": false, 00:35:59.382 "get_zone_info": false, 00:35:59.382 "zone_management": false, 00:35:59.382 "zone_append": false, 00:35:59.382 "compare": false, 00:35:59.382 "compare_and_write": false, 00:35:59.382 "abort": true, 00:35:59.382 "seek_hole": false, 00:35:59.382 "seek_data": false, 00:35:59.382 "copy": false, 00:35:59.382 "nvme_iov_md": false 00:35:59.382 }, 00:35:59.382 "driver_specific": { 00:35:59.382 "nvme": [ 00:35:59.382 { 00:35:59.382 "pci_address": "0000:d8:00.0", 00:35:59.382 "trid": { 00:35:59.382 "trtype": "PCIe", 00:35:59.382 "traddr": "0000:d8:00.0" 00:35:59.382 }, 00:35:59.382 "ctrlr_data": { 00:35:59.382 "cntlid": 0, 00:35:59.382 "vendor_id": "0x8086", 00:35:59.382 "model_number": "INTEL SSDPE2KX020T8", 00:35:59.382 "serial_number": "BTLJ125505KA2P0BGN", 00:35:59.382 "firmware_revision": "VDV10170", 00:35:59.382 "oacs": { 00:35:59.382 "security": 0, 00:35:59.382 "format": 1, 00:35:59.382 "firmware": 1, 00:35:59.382 "ns_manage": 1 00:35:59.382 }, 00:35:59.382 "multi_ctrlr": false, 00:35:59.382 "ana_reporting": false 00:35:59.382 }, 00:35:59.382 "vs": { 00:35:59.382 "nvme_version": "1.2" 00:35:59.382 }, 00:35:59.383 "ns_data": { 00:35:59.383 "id": 1, 00:35:59.383 "can_share": false 00:35:59.383 } 00:35:59.383 } 00:35:59.383 ], 00:35:59.383 "mp_policy": "active_passive" 00:35:59.383 } 00:35:59.383 } 00:35:59.383 ] 00:35:59.383 04:31:08 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:35:59.383 04:31:08 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:35:59.642 [2024-07-23 04:31:08.352165] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00001d3e0 PMD being used: compress_qat 00:36:01.021 19ce8fe3-44b2-42b3-b24a-ea844eb10f57 00:36:01.021 04:31:09 compress_compdev -- compress/compress.sh@38 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:36:01.021 3ee9e882-4083-435c-abe0-27caea92eda0 00:36:01.021 04:31:09 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:36:01.021 04:31:09 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:36:01.021 04:31:09 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:36:01.021 04:31:09 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:36:01.021 04:31:09 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:36:01.021 04:31:09 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:36:01.021 04:31:09 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:36:01.281 04:31:09 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:36:01.540 [ 00:36:01.540 { 00:36:01.540 "name": "3ee9e882-4083-435c-abe0-27caea92eda0", 00:36:01.540 "aliases": [ 00:36:01.540 "lvs0/lv0" 00:36:01.540 ], 00:36:01.540 "product_name": "Logical Volume", 00:36:01.540 "block_size": 512, 00:36:01.540 "num_blocks": 204800, 00:36:01.540 "uuid": "3ee9e882-4083-435c-abe0-27caea92eda0", 00:36:01.540 "assigned_rate_limits": { 00:36:01.540 "rw_ios_per_sec": 0, 00:36:01.540 "rw_mbytes_per_sec": 0, 00:36:01.540 "r_mbytes_per_sec": 0, 00:36:01.540 "w_mbytes_per_sec": 0 00:36:01.540 }, 00:36:01.540 "claimed": false, 00:36:01.540 "zoned": false, 00:36:01.540 "supported_io_types": { 00:36:01.540 "read": true, 00:36:01.540 "write": true, 00:36:01.540 "unmap": true, 00:36:01.540 "flush": false, 00:36:01.540 "reset": true, 00:36:01.540 "nvme_admin": false, 00:36:01.540 "nvme_io": false, 00:36:01.540 "nvme_io_md": false, 00:36:01.540 "write_zeroes": true, 00:36:01.540 "zcopy": false, 00:36:01.540 
"get_zone_info": false, 00:36:01.540 "zone_management": false, 00:36:01.540 "zone_append": false, 00:36:01.540 "compare": false, 00:36:01.540 "compare_and_write": false, 00:36:01.540 "abort": false, 00:36:01.540 "seek_hole": true, 00:36:01.540 "seek_data": true, 00:36:01.540 "copy": false, 00:36:01.540 "nvme_iov_md": false 00:36:01.540 }, 00:36:01.540 "driver_specific": { 00:36:01.540 "lvol": { 00:36:01.540 "lvol_store_uuid": "19ce8fe3-44b2-42b3-b24a-ea844eb10f57", 00:36:01.540 "base_bdev": "Nvme0n1", 00:36:01.540 "thin_provision": true, 00:36:01.540 "num_allocated_clusters": 0, 00:36:01.540 "snapshot": false, 00:36:01.540 "clone": false, 00:36:01.540 "esnap_clone": false 00:36:01.540 } 00:36:01.540 } 00:36:01.540 } 00:36:01.540 ] 00:36:01.540 04:31:10 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:36:01.540 04:31:10 compress_compdev -- compress/compress.sh@41 -- # '[' -z '' ']' 00:36:01.540 04:31:10 compress_compdev -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:36:01.800 [2024-07-23 04:31:10.324259] vbdev_compress.c: 999:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:36:01.800 COMP_lvs0/lv0 00:36:01.800 04:31:10 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:36:01.800 04:31:10 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:36:01.800 04:31:10 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:36:01.800 04:31:10 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:36:01.800 04:31:10 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:36:01.800 04:31:10 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:36:01.800 04:31:10 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
bdev_wait_for_examine 00:36:01.800 04:31:10 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:36:02.059 [ 00:36:02.059 { 00:36:02.059 "name": "COMP_lvs0/lv0", 00:36:02.059 "aliases": [ 00:36:02.059 "f2bef151-4f98-5fec-8c3a-545f365d2af3" 00:36:02.059 ], 00:36:02.059 "product_name": "compress", 00:36:02.059 "block_size": 512, 00:36:02.059 "num_blocks": 200704, 00:36:02.059 "uuid": "f2bef151-4f98-5fec-8c3a-545f365d2af3", 00:36:02.059 "assigned_rate_limits": { 00:36:02.059 "rw_ios_per_sec": 0, 00:36:02.059 "rw_mbytes_per_sec": 0, 00:36:02.059 "r_mbytes_per_sec": 0, 00:36:02.059 "w_mbytes_per_sec": 0 00:36:02.059 }, 00:36:02.059 "claimed": false, 00:36:02.059 "zoned": false, 00:36:02.059 "supported_io_types": { 00:36:02.059 "read": true, 00:36:02.059 "write": true, 00:36:02.059 "unmap": false, 00:36:02.059 "flush": false, 00:36:02.059 "reset": false, 00:36:02.059 "nvme_admin": false, 00:36:02.059 "nvme_io": false, 00:36:02.059 "nvme_io_md": false, 00:36:02.059 "write_zeroes": true, 00:36:02.059 "zcopy": false, 00:36:02.059 "get_zone_info": false, 00:36:02.059 "zone_management": false, 00:36:02.059 "zone_append": false, 00:36:02.059 "compare": false, 00:36:02.059 "compare_and_write": false, 00:36:02.059 "abort": false, 00:36:02.059 "seek_hole": false, 00:36:02.059 "seek_data": false, 00:36:02.059 "copy": false, 00:36:02.059 "nvme_iov_md": false 00:36:02.059 }, 00:36:02.059 "driver_specific": { 00:36:02.059 "compress": { 00:36:02.059 "name": "COMP_lvs0/lv0", 00:36:02.059 "base_bdev_name": "3ee9e882-4083-435c-abe0-27caea92eda0", 00:36:02.059 "pm_path": "/tmp/pmem/d7569edc-42c0-46fa-ad03-8c4af95578bf" 00:36:02.059 } 00:36:02.059 } 00:36:02.059 } 00:36:02.059 ] 00:36:02.059 04:31:10 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:36:02.059 04:31:10 compress_compdev -- compress/compress.sh@75 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:36:02.318 [2024-07-23 04:31:10.908439] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e0000101e0 PMD being used: compress_qat 00:36:02.318 [2024-07-23 04:31:10.911597] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00001d5a0 PMD being used: compress_qat 00:36:02.318 Running I/O for 30 seconds... 00:36:34.439 00:36:34.439 Latency(us) 00:36:34.439 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:34.439 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 64, IO size: 16384) 00:36:34.439 Verification LBA range: start 0x0 length 0xc40 00:36:34.439 COMP_lvs0/lv0 : 30.01 1577.25 24.64 0.00 0.00 40330.42 779.88 39845.89 00:36:34.439 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 64, IO size: 16384) 00:36:34.439 Verification LBA range: start 0xc40 length 0xc40 00:36:34.439 COMP_lvs0/lv0 : 30.01 5238.05 81.84 0.00 0.00 12106.62 344.06 20866.66 00:36:34.439 =================================================================================================================== 00:36:34.439 Total : 6815.30 106.49 0.00 0.00 18638.66 344.06 39845.89 00:36:34.439 0 00:36:34.439 04:31:41 compress_compdev -- compress/compress.sh@76 -- # destroy_vols 00:36:34.439 04:31:41 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:36:34.439 04:31:41 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:36:34.439 04:31:41 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:36:34.439 04:31:41 compress_compdev -- compress/compress.sh@78 -- # killprocess 2853896 00:36:34.439 04:31:41 compress_compdev -- common/autotest_common.sh@948 -- # '[' -z 2853896 ']' 00:36:34.439 04:31:41 compress_compdev -- 
common/autotest_common.sh@952 -- # kill -0 2853896 00:36:34.439 04:31:41 compress_compdev -- common/autotest_common.sh@953 -- # uname 00:36:34.439 04:31:41 compress_compdev -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:34.439 04:31:41 compress_compdev -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2853896 00:36:34.439 04:31:41 compress_compdev -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:34.439 04:31:41 compress_compdev -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:34.439 04:31:41 compress_compdev -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2853896' 00:36:34.439 killing process with pid 2853896 00:36:34.439 04:31:41 compress_compdev -- common/autotest_common.sh@967 -- # kill 2853896 00:36:34.439 Received shutdown signal, test time was about 30.000000 seconds 00:36:34.439 00:36:34.439 Latency(us) 00:36:34.439 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:34.439 =================================================================================================================== 00:36:34.439 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:34.439 04:31:41 compress_compdev -- common/autotest_common.sh@972 -- # wait 2853896 00:36:36.350 04:31:45 compress_compdev -- compress/compress.sh@95 -- # export TEST_TRANSPORT=tcp 00:36:36.350 04:31:45 compress_compdev -- compress/compress.sh@95 -- # TEST_TRANSPORT=tcp 00:36:36.350 04:31:45 compress_compdev -- compress/compress.sh@96 -- # NET_TYPE=virt 00:36:36.350 04:31:45 compress_compdev -- compress/compress.sh@96 -- # nvmftestinit 00:36:36.350 04:31:45 compress_compdev -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:36.350 04:31:45 compress_compdev -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:36.350 04:31:45 compress_compdev -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:36.350 04:31:45 compress_compdev -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:36.350 
04:31:45 compress_compdev -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:36.350 04:31:45 compress_compdev -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:36.350 04:31:45 compress_compdev -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:36.350 04:31:45 compress_compdev -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:36.350 04:31:45 compress_compdev -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:36:36.350 04:31:45 compress_compdev -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:36:36.350 04:31:45 compress_compdev -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:36:36.350 04:31:45 compress_compdev -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:36:36.350 04:31:45 compress_compdev -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:36:36.350 04:31:45 compress_compdev -- nvmf/common.sh@432 -- # nvmf_veth_init 00:36:36.350 04:31:45 compress_compdev -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:36.350 04:31:45 compress_compdev -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:36.350 04:31:45 compress_compdev -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:36:36.350 04:31:45 compress_compdev -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:36:36.350 04:31:45 compress_compdev -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:36:36.350 04:31:45 compress_compdev -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:36:36.350 04:31:45 compress_compdev -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:36:36.350 04:31:45 compress_compdev -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:36.350 04:31:45 compress_compdev -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:36:36.350 04:31:45 compress_compdev -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:36:36.350 04:31:45 compress_compdev -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:36:36.350 04:31:45 compress_compdev -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:36:36.350 04:31:45 compress_compdev -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:36:36.350 04:31:45 compress_compdev -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:36:36.350 Cannot find device "nvmf_tgt_br" 00:36:36.350 04:31:45 compress_compdev -- nvmf/common.sh@155 -- # true 00:36:36.350 04:31:45 compress_compdev -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:36:36.610 Cannot find device "nvmf_tgt_br2" 00:36:36.610 04:31:45 compress_compdev -- nvmf/common.sh@156 -- # true 00:36:36.610 04:31:45 compress_compdev -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:36:36.610 04:31:45 compress_compdev -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:36:36.610 Cannot find device "nvmf_tgt_br" 00:36:36.610 04:31:45 compress_compdev -- nvmf/common.sh@158 -- # true 00:36:36.610 04:31:45 compress_compdev -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:36:36.610 Cannot find device "nvmf_tgt_br2" 00:36:36.611 04:31:45 compress_compdev -- nvmf/common.sh@159 -- # true 00:36:36.611 04:31:45 compress_compdev -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:36:36.611 04:31:45 compress_compdev -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:36:36.611 04:31:45 compress_compdev -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:36.611 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:36.611 04:31:45 compress_compdev -- nvmf/common.sh@162 -- # true 00:36:36.611 04:31:45 compress_compdev -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:36.611 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:36.611 04:31:45 compress_compdev -- nvmf/common.sh@163 -- # true 00:36:36.611 04:31:45 compress_compdev -- nvmf/common.sh@166 -- # ip netns add 
nvmf_tgt_ns_spdk 00:36:36.611 04:31:45 compress_compdev -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:36:36.611 04:31:45 compress_compdev -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:36:36.611 04:31:45 compress_compdev -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:36:36.611 04:31:45 compress_compdev -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:36:36.611 04:31:45 compress_compdev -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:36:36.611 04:31:45 compress_compdev -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:36:36.611 04:31:45 compress_compdev -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:36:36.611 04:31:45 compress_compdev -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:36:36.871 04:31:45 compress_compdev -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:36:36.871 04:31:45 compress_compdev -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:36:36.871 04:31:45 compress_compdev -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:36:36.871 04:31:45 compress_compdev -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:36:36.871 04:31:45 compress_compdev -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:36:36.871 04:31:45 compress_compdev -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:36:36.871 04:31:45 compress_compdev -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:36:36.871 04:31:45 compress_compdev -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:36:36.871 04:31:45 compress_compdev -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:36:36.871 04:31:45 compress_compdev -- nvmf/common.sh@196 -- # ip link set 
nvmf_init_br master nvmf_br 00:36:36.871 04:31:45 compress_compdev -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:36:36.871 04:31:45 compress_compdev -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:36:36.871 04:31:45 compress_compdev -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:36:36.871 04:31:45 compress_compdev -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:36:36.871 04:31:45 compress_compdev -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:36:36.871 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:36.871 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:36:36.871 00:36:36.871 --- 10.0.0.2 ping statistics --- 00:36:36.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:36.871 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:36:36.871 04:31:45 compress_compdev -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:36:36.871 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:36:36.871 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:36:36.871 00:36:36.871 --- 10.0.0.3 ping statistics --- 00:36:36.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:36.871 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:36:36.871 04:31:45 compress_compdev -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:36:36.871 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:36.871 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:36:36.871 00:36:36.871 --- 10.0.0.1 ping statistics --- 00:36:36.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:36.871 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:36:36.871 04:31:45 compress_compdev -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:36.871 04:31:45 compress_compdev -- nvmf/common.sh@433 -- # return 0 00:36:37.131 04:31:45 compress_compdev -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:37.131 04:31:45 compress_compdev -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:37.131 04:31:45 compress_compdev -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:37.131 04:31:45 compress_compdev -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:37.131 04:31:45 compress_compdev -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:37.131 04:31:45 compress_compdev -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:37.131 04:31:45 compress_compdev -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:37.131 04:31:45 compress_compdev -- compress/compress.sh@97 -- # nvmfappstart -m 0x7 00:36:37.131 04:31:45 compress_compdev -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:37.131 04:31:45 compress_compdev -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:37.131 04:31:45 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:36:37.131 04:31:45 compress_compdev -- nvmf/common.sh@481 -- # nvmfpid=2861371 00:36:37.131 04:31:45 compress_compdev -- nvmf/common.sh@482 -- # waitforlisten 2861371 00:36:37.131 04:31:45 compress_compdev -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:36:37.131 04:31:45 compress_compdev -- common/autotest_common.sh@829 -- # '[' -z 2861371 ']' 00:36:37.131 04:31:45 compress_compdev -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:36:37.131 04:31:45 compress_compdev -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:37.131 04:31:45 compress_compdev -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:37.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:37.131 04:31:45 compress_compdev -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:37.131 04:31:45 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:36:37.131 [2024-07-23 04:31:45.819409] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:36:37.131 [2024-07-23 04:31:45.819523] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:37.391 [2024-07-23 04:31:46.058879] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:37.651 [2024-07-23 04:31:46.357321] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:37.651 [2024-07-23 04:31:46.357379] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:36:37.651 [2024-07-23 04:31:46.357400] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:37.651 [2024-07-23 04:31:46.357416] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:37.651 [2024-07-23 04:31:46.357432] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:37.651 [2024-07-23 04:31:46.357570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:37.651 [2024-07-23 04:31:46.357637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:37.651 [2024-07-23 04:31:46.357643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:36:38.220 04:31:46 compress_compdev -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:38.220 04:31:46 compress_compdev -- common/autotest_common.sh@862 -- # return 0 00:36:38.220 04:31:46 compress_compdev -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:38.220 04:31:46 compress_compdev -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:38.220 04:31:46 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:36:38.220 04:31:46 compress_compdev -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:38.220 04:31:46 compress_compdev -- compress/compress.sh@98 -- # trap 'nvmftestfini; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:36:38.220 04:31:46 compress_compdev -- compress/compress.sh@101 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -u 8192 00:36:38.479 [2024-07-23 04:31:47.094044] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:38.479 04:31:47 compress_compdev -- compress/compress.sh@102 -- # create_vols 00:36:38.479 04:31:47 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:36:38.479 04:31:47 compress_compdev -- 
compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:36:41.768 04:31:50 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:36:41.768 04:31:50 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:36:41.768 04:31:50 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:36:41.768 04:31:50 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:36:41.768 04:31:50 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:36:41.768 04:31:50 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:36:41.768 04:31:50 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:36:41.768 04:31:50 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:36:42.028 [ 00:36:42.028 { 00:36:42.028 "name": "Nvme0n1", 00:36:42.028 "aliases": [ 00:36:42.028 "6082d959-2379-4eab-9ac1-2b649afa03a4" 00:36:42.028 ], 00:36:42.028 "product_name": "NVMe disk", 00:36:42.028 "block_size": 512, 00:36:42.028 "num_blocks": 3907029168, 00:36:42.028 "uuid": "6082d959-2379-4eab-9ac1-2b649afa03a4", 00:36:42.028 "assigned_rate_limits": { 00:36:42.028 "rw_ios_per_sec": 0, 00:36:42.028 "rw_mbytes_per_sec": 0, 00:36:42.028 "r_mbytes_per_sec": 0, 00:36:42.028 "w_mbytes_per_sec": 0 00:36:42.028 }, 00:36:42.028 "claimed": false, 00:36:42.028 "zoned": false, 00:36:42.028 "supported_io_types": { 00:36:42.028 "read": true, 00:36:42.028 "write": true, 00:36:42.028 "unmap": true, 00:36:42.028 "flush": true, 00:36:42.028 "reset": true, 00:36:42.028 "nvme_admin": true, 00:36:42.028 "nvme_io": true, 00:36:42.028 "nvme_io_md": false, 00:36:42.028 "write_zeroes": true, 00:36:42.028 "zcopy": false, 00:36:42.028 "get_zone_info": false, 00:36:42.028 
"zone_management": false, 00:36:42.028 "zone_append": false, 00:36:42.028 "compare": false, 00:36:42.028 "compare_and_write": false, 00:36:42.028 "abort": true, 00:36:42.028 "seek_hole": false, 00:36:42.028 "seek_data": false, 00:36:42.028 "copy": false, 00:36:42.028 "nvme_iov_md": false 00:36:42.028 }, 00:36:42.028 "driver_specific": { 00:36:42.028 "nvme": [ 00:36:42.028 { 00:36:42.028 "pci_address": "0000:d8:00.0", 00:36:42.028 "trid": { 00:36:42.028 "trtype": "PCIe", 00:36:42.028 "traddr": "0000:d8:00.0" 00:36:42.028 }, 00:36:42.028 "ctrlr_data": { 00:36:42.028 "cntlid": 0, 00:36:42.028 "vendor_id": "0x8086", 00:36:42.028 "model_number": "INTEL SSDPE2KX020T8", 00:36:42.028 "serial_number": "BTLJ125505KA2P0BGN", 00:36:42.028 "firmware_revision": "VDV10170", 00:36:42.028 "oacs": { 00:36:42.028 "security": 0, 00:36:42.028 "format": 1, 00:36:42.028 "firmware": 1, 00:36:42.028 "ns_manage": 1 00:36:42.028 }, 00:36:42.028 "multi_ctrlr": false, 00:36:42.028 "ana_reporting": false 00:36:42.028 }, 00:36:42.028 "vs": { 00:36:42.028 "nvme_version": "1.2" 00:36:42.028 }, 00:36:42.028 "ns_data": { 00:36:42.028 "id": 1, 00:36:42.028 "can_share": false 00:36:42.028 } 00:36:42.028 } 00:36:42.028 ], 00:36:42.028 "mp_policy": "active_passive" 00:36:42.028 } 00:36:42.028 } 00:36:42.028 ] 00:36:42.028 04:31:50 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:36:42.028 04:31:50 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:36:43.407 858c3288-76bc-4576-bb06-c85674ab4d83 00:36:43.407 04:31:52 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:36:43.666 7bcc6793-0491-4843-a3be-4c1256f29a74 00:36:43.666 04:31:52 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:36:43.666 04:31:52 compress_compdev -- 
common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:36:43.666 04:31:52 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:36:43.666 04:31:52 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:36:43.666 04:31:52 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:36:43.666 04:31:52 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:36:43.666 04:31:52 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:36:43.925 04:31:52 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:36:43.925 [ 00:36:43.925 { 00:36:43.925 "name": "7bcc6793-0491-4843-a3be-4c1256f29a74", 00:36:43.925 "aliases": [ 00:36:43.925 "lvs0/lv0" 00:36:43.925 ], 00:36:43.925 "product_name": "Logical Volume", 00:36:43.925 "block_size": 512, 00:36:43.925 "num_blocks": 204800, 00:36:43.925 "uuid": "7bcc6793-0491-4843-a3be-4c1256f29a74", 00:36:43.925 "assigned_rate_limits": { 00:36:43.925 "rw_ios_per_sec": 0, 00:36:43.925 "rw_mbytes_per_sec": 0, 00:36:43.925 "r_mbytes_per_sec": 0, 00:36:43.925 "w_mbytes_per_sec": 0 00:36:43.925 }, 00:36:43.925 "claimed": false, 00:36:43.925 "zoned": false, 00:36:43.925 "supported_io_types": { 00:36:43.925 "read": true, 00:36:43.925 "write": true, 00:36:43.925 "unmap": true, 00:36:43.925 "flush": false, 00:36:43.925 "reset": true, 00:36:43.925 "nvme_admin": false, 00:36:43.925 "nvme_io": false, 00:36:43.925 "nvme_io_md": false, 00:36:43.925 "write_zeroes": true, 00:36:43.925 "zcopy": false, 00:36:43.925 "get_zone_info": false, 00:36:43.925 "zone_management": false, 00:36:43.925 "zone_append": false, 00:36:43.925 "compare": false, 00:36:43.925 "compare_and_write": false, 00:36:43.925 "abort": false, 00:36:43.925 "seek_hole": true, 00:36:43.925 "seek_data": true, 00:36:43.925 "copy": false, 
00:36:43.925 "nvme_iov_md": false 00:36:43.925 }, 00:36:43.925 "driver_specific": { 00:36:43.925 "lvol": { 00:36:43.925 "lvol_store_uuid": "858c3288-76bc-4576-bb06-c85674ab4d83", 00:36:43.925 "base_bdev": "Nvme0n1", 00:36:43.925 "thin_provision": true, 00:36:43.925 "num_allocated_clusters": 0, 00:36:43.925 "snapshot": false, 00:36:43.925 "clone": false, 00:36:43.925 "esnap_clone": false 00:36:43.925 } 00:36:43.925 } 00:36:43.925 } 00:36:43.925 ] 00:36:44.184 04:31:52 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:36:44.184 04:31:52 compress_compdev -- compress/compress.sh@41 -- # '[' -z '' ']' 00:36:44.184 04:31:52 compress_compdev -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:36:44.184 [2024-07-23 04:31:52.940786] vbdev_compress.c: 999:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:36:44.184 COMP_lvs0/lv0 00:36:44.184 04:31:52 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:36:44.184 04:31:52 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:36:44.184 04:31:52 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:36:44.184 04:31:52 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:36:44.184 04:31:52 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:36:44.184 04:31:52 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:36:44.184 04:31:52 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:36:44.444 04:31:53 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:36:44.702 [ 00:36:44.702 { 00:36:44.702 "name": "COMP_lvs0/lv0", 00:36:44.702 "aliases": [ 00:36:44.702 
"a65e2ce4-5f7a-53cc-8ef5-6ba60cdcc97f" 00:36:44.702 ], 00:36:44.702 "product_name": "compress", 00:36:44.702 "block_size": 512, 00:36:44.702 "num_blocks": 200704, 00:36:44.702 "uuid": "a65e2ce4-5f7a-53cc-8ef5-6ba60cdcc97f", 00:36:44.702 "assigned_rate_limits": { 00:36:44.702 "rw_ios_per_sec": 0, 00:36:44.702 "rw_mbytes_per_sec": 0, 00:36:44.702 "r_mbytes_per_sec": 0, 00:36:44.702 "w_mbytes_per_sec": 0 00:36:44.702 }, 00:36:44.702 "claimed": false, 00:36:44.702 "zoned": false, 00:36:44.702 "supported_io_types": { 00:36:44.702 "read": true, 00:36:44.702 "write": true, 00:36:44.702 "unmap": false, 00:36:44.702 "flush": false, 00:36:44.702 "reset": false, 00:36:44.702 "nvme_admin": false, 00:36:44.702 "nvme_io": false, 00:36:44.702 "nvme_io_md": false, 00:36:44.702 "write_zeroes": true, 00:36:44.702 "zcopy": false, 00:36:44.702 "get_zone_info": false, 00:36:44.702 "zone_management": false, 00:36:44.702 "zone_append": false, 00:36:44.702 "compare": false, 00:36:44.702 "compare_and_write": false, 00:36:44.702 "abort": false, 00:36:44.702 "seek_hole": false, 00:36:44.702 "seek_data": false, 00:36:44.702 "copy": false, 00:36:44.702 "nvme_iov_md": false 00:36:44.702 }, 00:36:44.702 "driver_specific": { 00:36:44.702 "compress": { 00:36:44.702 "name": "COMP_lvs0/lv0", 00:36:44.702 "base_bdev_name": "7bcc6793-0491-4843-a3be-4c1256f29a74", 00:36:44.702 "pm_path": "/tmp/pmem/f8ff0c4c-f8fc-4484-983c-7ced7c3dca20" 00:36:44.702 } 00:36:44.702 } 00:36:44.702 } 00:36:44.702 ] 00:36:44.702 04:31:53 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:36:44.702 04:31:53 compress_compdev -- compress/compress.sh@103 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:36:44.961 04:31:53 compress_compdev -- compress/compress.sh@104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 COMP_lvs0/lv0 00:36:45.220 04:31:53 
compress_compdev -- compress/compress.sh@105 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:45.220 [2024-07-23 04:31:53.993389] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:45.479 04:31:54 compress_compdev -- compress/compress.sh@108 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 64 -s 512 -w randrw -t 30 -c 0x18 -M 50 00:36:45.479 04:31:54 compress_compdev -- compress/compress.sh@109 -- # perf_pid=2862743 00:36:45.479 04:31:54 compress_compdev -- compress/compress.sh@112 -- # trap 'killprocess $perf_pid; compress_err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:36:45.479 04:31:54 compress_compdev -- compress/compress.sh@113 -- # wait 2862743 00:36:45.738 [2024-07-23 04:31:54.306781] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:37:17.862 Initializing NVMe Controllers 00:37:17.862 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:37:17.862 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:37:17.862 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:37:17.862 Initialization complete. Launching workers. 
00:37:17.862 ========================================================
00:37:17.862 Latency(us)
00:37:17.862 Device Information : IOPS MiB/s Average min max
00:37:17.862 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 4525.83 17.68 14143.11 1919.83 34049.28
00:37:17.862 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 2831.67 11.06 22604.87 3155.70 45596.65
00:37:17.862 ========================================================
00:37:17.862 Total : 7357.50 28.74 17399.77 1919.83 45596.65
00:37:17.862
00:37:17.862 04:32:24 compress_compdev -- compress/compress.sh@114 -- # destroy_vols
00:37:17.862 04:32:24 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0
00:37:17.862 04:32:24 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0
00:37:17.862 04:32:24 compress_compdev -- compress/compress.sh@116 -- # trap - SIGINT SIGTERM EXIT
00:37:17.862 04:32:24 compress_compdev -- compress/compress.sh@117 -- # nvmftestfini
00:37:17.862 04:32:24 compress_compdev -- nvmf/common.sh@488 -- # nvmfcleanup
00:37:17.862 04:32:24 compress_compdev -- nvmf/common.sh@117 -- # sync
00:37:17.862 04:32:24 compress_compdev -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:37:17.862 04:32:24 compress_compdev -- nvmf/common.sh@120 -- # set +e
00:37:17.862 04:32:24 compress_compdev -- nvmf/common.sh@121 -- # for i in {1..20}
00:37:17.862 04:32:24 compress_compdev -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:37:17.862 rmmod nvme_tcp
00:37:17.862 rmmod nvme_fabrics
00:37:17.862 rmmod nvme_keyring
00:37:17.862 04:32:25 compress_compdev -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:37:17.862 04:32:25 compress_compdev -- nvmf/common.sh@124 -- # set -e
00:37:17.862 04:32:25 compress_compdev -- nvmf/common.sh@125 -- # return 0
00:37:17.862 04:32:25
compress_compdev -- nvmf/common.sh@489 -- # '[' -n 2861371 ']' 00:37:17.862 04:32:25 compress_compdev -- nvmf/common.sh@490 -- # killprocess 2861371 00:37:17.862 04:32:25 compress_compdev -- common/autotest_common.sh@948 -- # '[' -z 2861371 ']' 00:37:17.862 04:32:25 compress_compdev -- common/autotest_common.sh@952 -- # kill -0 2861371 00:37:17.862 04:32:25 compress_compdev -- common/autotest_common.sh@953 -- # uname 00:37:17.862 04:32:25 compress_compdev -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:17.862 04:32:25 compress_compdev -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2861371 00:37:17.862 04:32:25 compress_compdev -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:17.862 04:32:25 compress_compdev -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:17.862 04:32:25 compress_compdev -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2861371' 00:37:17.862 killing process with pid 2861371 00:37:17.862 04:32:25 compress_compdev -- common/autotest_common.sh@967 -- # kill 2861371 00:37:17.862 04:32:25 compress_compdev -- common/autotest_common.sh@972 -- # wait 2861371 00:37:20.397 04:32:28 compress_compdev -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:20.397 04:32:28 compress_compdev -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:20.397 04:32:28 compress_compdev -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:20.397 04:32:28 compress_compdev -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:20.397 04:32:28 compress_compdev -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:20.397 04:32:28 compress_compdev -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:20.397 04:32:28 compress_compdev -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:20.397 04:32:28 compress_compdev -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:20.397 04:32:28 compress_compdev -- nvmf/common.sh@279 -- # ip 
-4 addr flush nvmf_init_if 00:37:20.397 04:32:28 compress_compdev -- compress/compress.sh@120 -- # rm -rf /tmp/pmem 00:37:20.397 00:37:20.397 real 2m29.260s 00:37:20.397 user 6m36.554s 00:37:20.397 sys 0m22.012s 00:37:20.397 04:32:28 compress_compdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:20.397 04:32:28 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:37:20.397 ************************************ 00:37:20.397 END TEST compress_compdev 00:37:20.397 ************************************ 00:37:20.397 04:32:29 -- common/autotest_common.sh@1142 -- # return 0 00:37:20.397 04:32:29 -- spdk/autotest.sh@349 -- # run_test compress_isal /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh isal 00:37:20.397 04:32:29 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:37:20.397 04:32:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:20.397 04:32:29 -- common/autotest_common.sh@10 -- # set +x 00:37:20.397 ************************************ 00:37:20.397 START TEST compress_isal 00:37:20.397 ************************************ 00:37:20.397 04:32:29 compress_isal -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh isal 00:37:20.657 * Looking for test storage... 
00:37:20.657 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress 00:37:20.657 04:32:29 compress_isal -- compress/compress.sh@13 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:37:20.657 04:32:29 compress_isal -- nvmf/common.sh@7 -- # uname -s 00:37:20.657 04:32:29 compress_isal -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:20.657 04:32:29 compress_isal -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:20.657 04:32:29 compress_isal -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:20.657 04:32:29 compress_isal -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:20.657 04:32:29 compress_isal -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:20.657 04:32:29 compress_isal -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:20.657 04:32:29 compress_isal -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:20.657 04:32:29 compress_isal -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:20.657 04:32:29 compress_isal -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:20.657 04:32:29 compress_isal -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:20.657 04:32:29 compress_isal -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:37:20.657 04:32:29 compress_isal -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:37:20.657 04:32:29 compress_isal -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:20.657 04:32:29 compress_isal -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:20.657 04:32:29 compress_isal -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:37:20.657 04:32:29 compress_isal -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:20.657 04:32:29 compress_isal -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:37:20.657 04:32:29 compress_isal -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:20.657 04:32:29 compress_isal -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:20.657 04:32:29 compress_isal -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:20.657 04:32:29 compress_isal -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.657 04:32:29 compress_isal -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.657 04:32:29 compress_isal -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.657 04:32:29 compress_isal -- paths/export.sh@5 -- # export PATH 00:37:20.657 04:32:29 compress_isal -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.657 04:32:29 compress_isal -- nvmf/common.sh@47 -- # : 0 00:37:20.657 04:32:29 compress_isal -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:20.657 04:32:29 compress_isal -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:20.657 04:32:29 compress_isal -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:20.657 04:32:29 compress_isal -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:20.657 04:32:29 compress_isal -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:20.657 04:32:29 compress_isal -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:20.657 04:32:29 compress_isal -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:20.657 04:32:29 compress_isal -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:20.657 04:32:29 compress_isal -- compress/compress.sh@17 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:37:20.657 04:32:29 compress_isal -- compress/compress.sh@81 -- # mkdir -p /tmp/pmem 00:37:20.658 04:32:29 compress_isal -- compress/compress.sh@82 -- # test_type=isal 00:37:20.658 04:32:29 compress_isal -- compress/compress.sh@86 -- # run_bdevperf 32 4096 3 00:37:20.658 04:32:29 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:37:20.658 04:32:29 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=2868337 00:37:20.658 04:32:29 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:20.658 04:32:29 compress_isal -- 
compress/compress.sh@73 -- # waitforlisten 2868337 00:37:20.658 04:32:29 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 00:37:20.658 04:32:29 compress_isal -- common/autotest_common.sh@829 -- # '[' -z 2868337 ']' 00:37:20.658 04:32:29 compress_isal -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:20.658 04:32:29 compress_isal -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:20.658 04:32:29 compress_isal -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:20.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:20.658 04:32:29 compress_isal -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:20.658 04:32:29 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:37:20.658 [2024-07-23 04:32:29.362866] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:37:20.658 [2024-07-23 04:32:29.362990] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2868337 ] 00:37:20.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.917 EAL: Requested device 0000:3d:01.0 cannot be used 00:37:20.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.917 EAL: Requested device 0000:3d:01.1 cannot be used 00:37:20.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.917 EAL: Requested device 0000:3d:01.2 cannot be used 00:37:20.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.917 EAL: Requested device 0000:3d:01.3 cannot be used 00:37:20.918 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.918 EAL: Requested device 0000:3d:01.4 cannot be used 00:37:20.918 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.918 EAL: Requested device 0000:3d:01.5 cannot be used 00:37:20.918 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.918 EAL: Requested device 0000:3d:01.6 cannot be used 00:37:20.918 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.918 EAL: Requested device 0000:3d:01.7 cannot be used 00:37:20.918 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.918 EAL: Requested device 0000:3d:02.0 cannot be used 00:37:20.918 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.918 EAL: Requested device 0000:3d:02.1 cannot be used 00:37:20.918 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.918 EAL: Requested device 0000:3d:02.2 cannot be used 00:37:20.918 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.918 EAL: Requested device 0000:3d:02.3 cannot be used 
00:37:20.918 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.918 EAL: Requested device 0000:3d:02.4 cannot be used 00:37:20.918 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.918 EAL: Requested device 0000:3d:02.5 cannot be used 00:37:20.918 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.918 EAL: Requested device 0000:3d:02.6 cannot be used 00:37:20.918 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.918 EAL: Requested device 0000:3d:02.7 cannot be used 00:37:20.918 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.918 EAL: Requested device 0000:3f:01.0 cannot be used 00:37:20.918 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.918 EAL: Requested device 0000:3f:01.1 cannot be used 00:37:20.918 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.918 EAL: Requested device 0000:3f:01.2 cannot be used 00:37:20.918 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.918 EAL: Requested device 0000:3f:01.3 cannot be used 00:37:20.918 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.918 EAL: Requested device 0000:3f:01.4 cannot be used 00:37:20.918 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.918 EAL: Requested device 0000:3f:01.5 cannot be used 00:37:20.918 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.918 EAL: Requested device 0000:3f:01.6 cannot be used 00:37:20.918 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.918 EAL: Requested device 0000:3f:01.7 cannot be used 00:37:20.918 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.918 EAL: Requested device 0000:3f:02.0 cannot be used 00:37:20.918 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.918 EAL: Requested device 0000:3f:02.1 cannot be used 00:37:20.918 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.918 EAL: Requested device 0000:3f:02.2 cannot be used 00:37:20.918 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.918 EAL: Requested device 0000:3f:02.3 cannot be used 00:37:20.918 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.918 EAL: Requested device 0000:3f:02.4 cannot be used 00:37:20.918 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.918 EAL: Requested device 0000:3f:02.5 cannot be used 00:37:20.918 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.918 EAL: Requested device 0000:3f:02.6 cannot be used 00:37:20.918 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.918 EAL: Requested device 0000:3f:02.7 cannot be used 00:37:20.918 [2024-07-23 04:32:29.576587] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:21.177 [2024-07-23 04:32:29.850956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:21.177 [2024-07-23 04:32:29.850959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:37:21.746 04:32:30 compress_isal -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:21.746 04:32:30 compress_isal -- common/autotest_common.sh@862 -- # return 0 00:37:21.746 04:32:30 compress_isal -- compress/compress.sh@74 -- # create_vols 00:37:21.746 04:32:30 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:37:21.746 04:32:30 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:37:25.036 04:32:33 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:37:25.036 04:32:33 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:37:25.036 04:32:33 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:37:25.036 04:32:33 
compress_isal -- common/autotest_common.sh@899 -- # local i 00:37:25.036 04:32:33 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:37:25.036 04:32:33 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:37:25.036 04:32:33 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:25.036 04:32:33 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:37:25.294 [ 00:37:25.294 { 00:37:25.294 "name": "Nvme0n1", 00:37:25.294 "aliases": [ 00:37:25.294 "539c7ab1-bf60-47e1-ad6d-9bcd3ce51297" 00:37:25.294 ], 00:37:25.294 "product_name": "NVMe disk", 00:37:25.294 "block_size": 512, 00:37:25.294 "num_blocks": 3907029168, 00:37:25.294 "uuid": "539c7ab1-bf60-47e1-ad6d-9bcd3ce51297", 00:37:25.294 "assigned_rate_limits": { 00:37:25.294 "rw_ios_per_sec": 0, 00:37:25.294 "rw_mbytes_per_sec": 0, 00:37:25.294 "r_mbytes_per_sec": 0, 00:37:25.294 "w_mbytes_per_sec": 0 00:37:25.294 }, 00:37:25.294 "claimed": false, 00:37:25.294 "zoned": false, 00:37:25.294 "supported_io_types": { 00:37:25.294 "read": true, 00:37:25.294 "write": true, 00:37:25.294 "unmap": true, 00:37:25.294 "flush": true, 00:37:25.294 "reset": true, 00:37:25.294 "nvme_admin": true, 00:37:25.294 "nvme_io": true, 00:37:25.294 "nvme_io_md": false, 00:37:25.294 "write_zeroes": true, 00:37:25.294 "zcopy": false, 00:37:25.294 "get_zone_info": false, 00:37:25.294 "zone_management": false, 00:37:25.294 "zone_append": false, 00:37:25.295 "compare": false, 00:37:25.295 "compare_and_write": false, 00:37:25.295 "abort": true, 00:37:25.295 "seek_hole": false, 00:37:25.295 "seek_data": false, 00:37:25.295 "copy": false, 00:37:25.295 "nvme_iov_md": false 00:37:25.295 }, 00:37:25.295 "driver_specific": { 00:37:25.295 "nvme": [ 00:37:25.295 { 00:37:25.295 "pci_address": "0000:d8:00.0", 00:37:25.295 "trid": { 00:37:25.295 
"trtype": "PCIe", 00:37:25.295 "traddr": "0000:d8:00.0" 00:37:25.295 }, 00:37:25.295 "ctrlr_data": { 00:37:25.295 "cntlid": 0, 00:37:25.295 "vendor_id": "0x8086", 00:37:25.295 "model_number": "INTEL SSDPE2KX020T8", 00:37:25.295 "serial_number": "BTLJ125505KA2P0BGN", 00:37:25.295 "firmware_revision": "VDV10170", 00:37:25.295 "oacs": { 00:37:25.295 "security": 0, 00:37:25.295 "format": 1, 00:37:25.295 "firmware": 1, 00:37:25.295 "ns_manage": 1 00:37:25.295 }, 00:37:25.295 "multi_ctrlr": false, 00:37:25.295 "ana_reporting": false 00:37:25.295 }, 00:37:25.295 "vs": { 00:37:25.295 "nvme_version": "1.2" 00:37:25.295 }, 00:37:25.295 "ns_data": { 00:37:25.295 "id": 1, 00:37:25.295 "can_share": false 00:37:25.295 } 00:37:25.295 } 00:37:25.295 ], 00:37:25.295 "mp_policy": "active_passive" 00:37:25.295 } 00:37:25.295 } 00:37:25.295 ] 00:37:25.295 04:32:33 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:37:25.295 04:32:33 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:37:26.672 1fa59015-d5d3-4575-905e-059a18e227aa 00:37:26.672 04:32:35 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:37:26.931 61179dbe-1346-4d47-b14a-763675bcc87e 00:37:26.931 04:32:35 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:37:26.931 04:32:35 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:37:26.931 04:32:35 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:37:26.931 04:32:35 compress_isal -- common/autotest_common.sh@899 -- # local i 00:37:26.931 04:32:35 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:37:26.931 04:32:35 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:37:26.931 04:32:35 compress_isal -- common/autotest_common.sh@902 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:26.931 04:32:35 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:37:27.191 [ 00:37:27.191 { 00:37:27.191 "name": "61179dbe-1346-4d47-b14a-763675bcc87e", 00:37:27.191 "aliases": [ 00:37:27.191 "lvs0/lv0" 00:37:27.191 ], 00:37:27.191 "product_name": "Logical Volume", 00:37:27.191 "block_size": 512, 00:37:27.191 "num_blocks": 204800, 00:37:27.191 "uuid": "61179dbe-1346-4d47-b14a-763675bcc87e", 00:37:27.191 "assigned_rate_limits": { 00:37:27.191 "rw_ios_per_sec": 0, 00:37:27.191 "rw_mbytes_per_sec": 0, 00:37:27.191 "r_mbytes_per_sec": 0, 00:37:27.191 "w_mbytes_per_sec": 0 00:37:27.191 }, 00:37:27.191 "claimed": false, 00:37:27.191 "zoned": false, 00:37:27.191 "supported_io_types": { 00:37:27.191 "read": true, 00:37:27.191 "write": true, 00:37:27.191 "unmap": true, 00:37:27.191 "flush": false, 00:37:27.191 "reset": true, 00:37:27.191 "nvme_admin": false, 00:37:27.191 "nvme_io": false, 00:37:27.191 "nvme_io_md": false, 00:37:27.191 "write_zeroes": true, 00:37:27.191 "zcopy": false, 00:37:27.191 "get_zone_info": false, 00:37:27.191 "zone_management": false, 00:37:27.191 "zone_append": false, 00:37:27.191 "compare": false, 00:37:27.191 "compare_and_write": false, 00:37:27.191 "abort": false, 00:37:27.191 "seek_hole": true, 00:37:27.191 "seek_data": true, 00:37:27.191 "copy": false, 00:37:27.191 "nvme_iov_md": false 00:37:27.191 }, 00:37:27.191 "driver_specific": { 00:37:27.191 "lvol": { 00:37:27.191 "lvol_store_uuid": "1fa59015-d5d3-4575-905e-059a18e227aa", 00:37:27.191 "base_bdev": "Nvme0n1", 00:37:27.191 "thin_provision": true, 00:37:27.191 "num_allocated_clusters": 0, 00:37:27.191 "snapshot": false, 00:37:27.191 "clone": false, 00:37:27.191 "esnap_clone": false 00:37:27.191 } 00:37:27.191 } 00:37:27.191 } 00:37:27.191 ] 00:37:27.191 04:32:35 compress_isal -- 
common/autotest_common.sh@905 -- # return 0 00:37:27.191 04:32:35 compress_isal -- compress/compress.sh@41 -- # '[' -z '' ']' 00:37:27.191 04:32:35 compress_isal -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:37:27.450 [2024-07-23 04:32:36.174232] vbdev_compress.c: 999:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:37:27.450 COMP_lvs0/lv0 00:37:27.450 04:32:36 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:37:27.450 04:32:36 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:37:27.450 04:32:36 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:37:27.450 04:32:36 compress_isal -- common/autotest_common.sh@899 -- # local i 00:37:27.450 04:32:36 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:37:27.450 04:32:36 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:37:27.450 04:32:36 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:27.709 04:32:36 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:37:27.969 [ 00:37:27.969 { 00:37:27.969 "name": "COMP_lvs0/lv0", 00:37:27.969 "aliases": [ 00:37:27.969 "612cdbbf-0878-5824-8305-c6c6cd472d14" 00:37:27.969 ], 00:37:27.969 "product_name": "compress", 00:37:27.969 "block_size": 512, 00:37:27.969 "num_blocks": 200704, 00:37:27.969 "uuid": "612cdbbf-0878-5824-8305-c6c6cd472d14", 00:37:27.969 "assigned_rate_limits": { 00:37:27.969 "rw_ios_per_sec": 0, 00:37:27.969 "rw_mbytes_per_sec": 0, 00:37:27.969 "r_mbytes_per_sec": 0, 00:37:27.969 "w_mbytes_per_sec": 0 00:37:27.969 }, 00:37:27.969 "claimed": false, 00:37:27.969 "zoned": false, 00:37:27.969 "supported_io_types": { 
00:37:27.969 "read": true, 00:37:27.969 "write": true, 00:37:27.969 "unmap": false, 00:37:27.969 "flush": false, 00:37:27.969 "reset": false, 00:37:27.969 "nvme_admin": false, 00:37:27.969 "nvme_io": false, 00:37:27.969 "nvme_io_md": false, 00:37:27.969 "write_zeroes": true, 00:37:27.969 "zcopy": false, 00:37:27.969 "get_zone_info": false, 00:37:27.969 "zone_management": false, 00:37:27.969 "zone_append": false, 00:37:27.969 "compare": false, 00:37:27.969 "compare_and_write": false, 00:37:27.969 "abort": false, 00:37:27.969 "seek_hole": false, 00:37:27.969 "seek_data": false, 00:37:27.969 "copy": false, 00:37:27.969 "nvme_iov_md": false 00:37:27.969 }, 00:37:27.969 "driver_specific": { 00:37:27.969 "compress": { 00:37:27.969 "name": "COMP_lvs0/lv0", 00:37:27.969 "base_bdev_name": "61179dbe-1346-4d47-b14a-763675bcc87e", 00:37:27.969 "pm_path": "/tmp/pmem/0771b499-b079-45d2-9708-1f68641c0a86" 00:37:27.969 } 00:37:27.969 } 00:37:27.969 } 00:37:27.969 ] 00:37:27.969 04:32:36 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:37:27.969 04:32:36 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:37:28.228 Running I/O for 3 seconds... 
00:37:31.519 00:37:31.519 Latency(us) 00:37:31.519 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:31.519 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:37:31.519 Verification LBA range: start 0x0 length 0x3100 00:37:31.519 COMP_lvs0/lv0 : 3.01 3232.03 12.63 0.00 0.00 9843.15 63.08 15309.21 00:37:31.519 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:37:31.519 Verification LBA range: start 0x3100 length 0x3100 00:37:31.519 COMP_lvs0/lv0 : 3.01 3254.64 12.71 0.00 0.00 9782.06 61.44 14784.92 00:37:31.519 =================================================================================================================== 00:37:31.519 Total : 6486.67 25.34 0.00 0.00 9812.50 61.44 15309.21 00:37:31.519 0 00:37:31.519 04:32:39 compress_isal -- compress/compress.sh@76 -- # destroy_vols 00:37:31.519 04:32:39 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:37:31.519 04:32:40 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:37:31.519 04:32:40 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:37:31.519 04:32:40 compress_isal -- compress/compress.sh@78 -- # killprocess 2868337 00:37:31.519 04:32:40 compress_isal -- common/autotest_common.sh@948 -- # '[' -z 2868337 ']' 00:37:31.519 04:32:40 compress_isal -- common/autotest_common.sh@952 -- # kill -0 2868337 00:37:31.519 04:32:40 compress_isal -- common/autotest_common.sh@953 -- # uname 00:37:31.519 04:32:40 compress_isal -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:31.519 04:32:40 compress_isal -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2868337 00:37:31.778 04:32:40 compress_isal -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:31.778 04:32:40 compress_isal 
-- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:31.778 04:32:40 compress_isal -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2868337' 00:37:31.778 killing process with pid 2868337 00:37:31.778 04:32:40 compress_isal -- common/autotest_common.sh@967 -- # kill 2868337 00:37:31.778 Received shutdown signal, test time was about 3.000000 seconds 00:37:31.778 00:37:31.778 Latency(us) 00:37:31.778 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:31.778 =================================================================================================================== 00:37:31.778 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:31.778 04:32:40 compress_isal -- common/autotest_common.sh@972 -- # wait 2868337 00:37:36.010 04:32:44 compress_isal -- compress/compress.sh@87 -- # run_bdevperf 32 4096 3 512 00:37:36.010 04:32:44 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:37:36.010 04:32:44 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=2870816 00:37:36.010 04:32:44 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:36.010 04:32:44 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 00:37:36.010 04:32:44 compress_isal -- compress/compress.sh@73 -- # waitforlisten 2870816 00:37:36.010 04:32:44 compress_isal -- common/autotest_common.sh@829 -- # '[' -z 2870816 ']' 00:37:36.010 04:32:44 compress_isal -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:36.010 04:32:44 compress_isal -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:36.010 04:32:44 compress_isal -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:37:36.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:36.010 04:32:44 compress_isal -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:36.010 04:32:44 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:37:36.010 [2024-07-23 04:32:44.290586] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:37:36.010 [2024-07-23 04:32:44.290711] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2870816 ] 00:37:36.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:36.010 EAL: Requested device 0000:3d:01.0 cannot be used 00:37:36.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:36.010 EAL: Requested device 0000:3d:01.1 cannot be used 00:37:36.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:36.010 EAL: Requested device 0000:3d:01.2 cannot be used 00:37:36.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:36.010 EAL: Requested device 0000:3d:01.3 cannot be used 00:37:36.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:36.010 EAL: Requested device 0000:3d:01.4 cannot be used 00:37:36.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:36.010 EAL: Requested device 0000:3d:01.5 cannot be used 00:37:36.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:36.010 EAL: Requested device 0000:3d:01.6 cannot be used 00:37:36.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:36.010 EAL: Requested device 0000:3d:01.7 cannot be used 00:37:36.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:36.010 EAL: Requested device 0000:3d:02.0 cannot be used 00:37:36.010 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:37:36.010 EAL: Requested device 0000:3d:02.1 cannot be used 00:37:36.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:36.010 EAL: Requested device 0000:3d:02.2 cannot be used 00:37:36.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:36.010 EAL: Requested device 0000:3d:02.3 cannot be used 00:37:36.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:36.010 EAL: Requested device 0000:3d:02.4 cannot be used 00:37:36.011 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:36.011 EAL: Requested device 0000:3d:02.5 cannot be used 00:37:36.011 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:36.011 EAL: Requested device 0000:3d:02.6 cannot be used 00:37:36.011 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:36.011 EAL: Requested device 0000:3d:02.7 cannot be used 00:37:36.011 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:36.011 EAL: Requested device 0000:3f:01.0 cannot be used 00:37:36.011 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:36.011 EAL: Requested device 0000:3f:01.1 cannot be used 00:37:36.011 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:36.011 EAL: Requested device 0000:3f:01.2 cannot be used 00:37:36.011 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:36.011 EAL: Requested device 0000:3f:01.3 cannot be used 00:37:36.011 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:36.011 EAL: Requested device 0000:3f:01.4 cannot be used 00:37:36.011 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:36.011 EAL: Requested device 0000:3f:01.5 cannot be used 00:37:36.011 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:36.011 EAL: Requested device 0000:3f:01.6 cannot be used 00:37:36.011 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:37:36.011 EAL: Requested device 0000:3f:01.7 cannot be used 00:37:36.011 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:36.011 EAL: Requested device 0000:3f:02.0 cannot be used 00:37:36.011 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:36.011 EAL: Requested device 0000:3f:02.1 cannot be used 00:37:36.011 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:36.011 EAL: Requested device 0000:3f:02.2 cannot be used 00:37:36.011 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:36.011 EAL: Requested device 0000:3f:02.3 cannot be used 00:37:36.011 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:36.011 EAL: Requested device 0000:3f:02.4 cannot be used 00:37:36.011 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:36.011 EAL: Requested device 0000:3f:02.5 cannot be used 00:37:36.011 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:36.011 EAL: Requested device 0000:3f:02.6 cannot be used 00:37:36.011 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:36.011 EAL: Requested device 0000:3f:02.7 cannot be used 00:37:36.011 [2024-07-23 04:32:44.505519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:36.011 [2024-07-23 04:32:44.783948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:36.011 [2024-07-23 04:32:44.783951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:37:36.580 04:32:45 compress_isal -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:36.580 04:32:45 compress_isal -- common/autotest_common.sh@862 -- # return 0 00:37:36.580 04:32:45 compress_isal -- compress/compress.sh@74 -- # create_vols 512 00:37:36.580 04:32:45 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:37:36.580 04:32:45 compress_isal -- compress/compress.sh@34 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:37:39.870 04:32:48 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:37:39.870 04:32:48 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:37:39.870 04:32:48 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:37:39.870 04:32:48 compress_isal -- common/autotest_common.sh@899 -- # local i 00:37:39.870 04:32:48 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:37:39.870 04:32:48 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:37:39.870 04:32:48 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:40.129 04:32:48 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:37:40.388 [ 00:37:40.388 { 00:37:40.388 "name": "Nvme0n1", 00:37:40.388 "aliases": [ 00:37:40.388 "928e10ee-a739-4ee9-9ce9-bdb47f70d423" 00:37:40.388 ], 00:37:40.388 "product_name": "NVMe disk", 00:37:40.388 "block_size": 512, 00:37:40.388 "num_blocks": 3907029168, 00:37:40.388 "uuid": "928e10ee-a739-4ee9-9ce9-bdb47f70d423", 00:37:40.388 "assigned_rate_limits": { 00:37:40.388 "rw_ios_per_sec": 0, 00:37:40.388 "rw_mbytes_per_sec": 0, 00:37:40.388 "r_mbytes_per_sec": 0, 00:37:40.388 "w_mbytes_per_sec": 0 00:37:40.388 }, 00:37:40.388 "claimed": false, 00:37:40.388 "zoned": false, 00:37:40.388 "supported_io_types": { 00:37:40.388 "read": true, 00:37:40.388 "write": true, 00:37:40.388 "unmap": true, 00:37:40.388 "flush": true, 00:37:40.388 "reset": true, 00:37:40.388 "nvme_admin": true, 00:37:40.388 "nvme_io": true, 00:37:40.388 "nvme_io_md": false, 00:37:40.388 "write_zeroes": true, 00:37:40.388 "zcopy": false, 00:37:40.388 "get_zone_info": false, 00:37:40.388 "zone_management": false, 00:37:40.388 "zone_append": false, 00:37:40.388 "compare": 
false, 00:37:40.388 "compare_and_write": false, 00:37:40.388 "abort": true, 00:37:40.388 "seek_hole": false, 00:37:40.388 "seek_data": false, 00:37:40.388 "copy": false, 00:37:40.388 "nvme_iov_md": false 00:37:40.388 }, 00:37:40.388 "driver_specific": { 00:37:40.388 "nvme": [ 00:37:40.388 { 00:37:40.388 "pci_address": "0000:d8:00.0", 00:37:40.388 "trid": { 00:37:40.388 "trtype": "PCIe", 00:37:40.388 "traddr": "0000:d8:00.0" 00:37:40.388 }, 00:37:40.388 "ctrlr_data": { 00:37:40.388 "cntlid": 0, 00:37:40.388 "vendor_id": "0x8086", 00:37:40.388 "model_number": "INTEL SSDPE2KX020T8", 00:37:40.388 "serial_number": "BTLJ125505KA2P0BGN", 00:37:40.388 "firmware_revision": "VDV10170", 00:37:40.388 "oacs": { 00:37:40.388 "security": 0, 00:37:40.388 "format": 1, 00:37:40.388 "firmware": 1, 00:37:40.388 "ns_manage": 1 00:37:40.388 }, 00:37:40.388 "multi_ctrlr": false, 00:37:40.388 "ana_reporting": false 00:37:40.388 }, 00:37:40.388 "vs": { 00:37:40.388 "nvme_version": "1.2" 00:37:40.388 }, 00:37:40.388 "ns_data": { 00:37:40.388 "id": 1, 00:37:40.388 "can_share": false 00:37:40.388 } 00:37:40.388 } 00:37:40.388 ], 00:37:40.388 "mp_policy": "active_passive" 00:37:40.388 } 00:37:40.388 } 00:37:40.388 ] 00:37:40.388 04:32:48 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:37:40.388 04:32:48 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:37:41.766 deefe727-21fe-4888-90d4-518b47c1251c 00:37:41.766 04:32:50 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:37:41.766 528ba041-8b84-4936-9d47-ead823180238 00:37:41.766 04:32:50 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:37:41.766 04:32:50 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:37:41.766 04:32:50 compress_isal -- 
common/autotest_common.sh@898 -- # local bdev_timeout= 00:37:41.766 04:32:50 compress_isal -- common/autotest_common.sh@899 -- # local i 00:37:41.766 04:32:50 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:37:41.766 04:32:50 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:37:41.766 04:32:50 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:42.026 04:32:50 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:37:42.285 [ 00:37:42.285 { 00:37:42.285 "name": "528ba041-8b84-4936-9d47-ead823180238", 00:37:42.285 "aliases": [ 00:37:42.285 "lvs0/lv0" 00:37:42.285 ], 00:37:42.285 "product_name": "Logical Volume", 00:37:42.285 "block_size": 512, 00:37:42.285 "num_blocks": 204800, 00:37:42.285 "uuid": "528ba041-8b84-4936-9d47-ead823180238", 00:37:42.285 "assigned_rate_limits": { 00:37:42.285 "rw_ios_per_sec": 0, 00:37:42.285 "rw_mbytes_per_sec": 0, 00:37:42.285 "r_mbytes_per_sec": 0, 00:37:42.285 "w_mbytes_per_sec": 0 00:37:42.285 }, 00:37:42.285 "claimed": false, 00:37:42.285 "zoned": false, 00:37:42.285 "supported_io_types": { 00:37:42.285 "read": true, 00:37:42.285 "write": true, 00:37:42.285 "unmap": true, 00:37:42.285 "flush": false, 00:37:42.285 "reset": true, 00:37:42.285 "nvme_admin": false, 00:37:42.285 "nvme_io": false, 00:37:42.285 "nvme_io_md": false, 00:37:42.285 "write_zeroes": true, 00:37:42.285 "zcopy": false, 00:37:42.285 "get_zone_info": false, 00:37:42.285 "zone_management": false, 00:37:42.285 "zone_append": false, 00:37:42.285 "compare": false, 00:37:42.285 "compare_and_write": false, 00:37:42.285 "abort": false, 00:37:42.285 "seek_hole": true, 00:37:42.285 "seek_data": true, 00:37:42.285 "copy": false, 00:37:42.285 "nvme_iov_md": false 00:37:42.285 }, 00:37:42.285 "driver_specific": { 00:37:42.285 "lvol": { 00:37:42.285 
"lvol_store_uuid": "deefe727-21fe-4888-90d4-518b47c1251c", 00:37:42.285 "base_bdev": "Nvme0n1", 00:37:42.285 "thin_provision": true, 00:37:42.285 "num_allocated_clusters": 0, 00:37:42.285 "snapshot": false, 00:37:42.285 "clone": false, 00:37:42.285 "esnap_clone": false 00:37:42.285 } 00:37:42.285 } 00:37:42.285 } 00:37:42.285 ] 00:37:42.285 04:32:50 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:37:42.285 04:32:50 compress_isal -- compress/compress.sh@41 -- # '[' -z 512 ']' 00:37:42.285 04:32:50 compress_isal -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 512 00:37:42.545 [2024-07-23 04:32:51.114264] vbdev_compress.c: 999:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:37:42.545 COMP_lvs0/lv0 00:37:42.545 04:32:51 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:37:42.545 04:32:51 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:37:42.545 04:32:51 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:37:42.545 04:32:51 compress_isal -- common/autotest_common.sh@899 -- # local i 00:37:42.545 04:32:51 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:37:42.545 04:32:51 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:37:42.545 04:32:51 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:42.804 04:32:51 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:37:42.804 [ 00:37:42.804 { 00:37:42.804 "name": "COMP_lvs0/lv0", 00:37:42.804 "aliases": [ 00:37:42.804 "24044c7e-011b-5e31-bc97-a6e3b49aa834" 00:37:42.804 ], 00:37:42.804 "product_name": "compress", 00:37:42.804 "block_size": 512, 00:37:42.804 
"num_blocks": 200704, 00:37:42.804 "uuid": "24044c7e-011b-5e31-bc97-a6e3b49aa834", 00:37:42.804 "assigned_rate_limits": { 00:37:42.804 "rw_ios_per_sec": 0, 00:37:42.804 "rw_mbytes_per_sec": 0, 00:37:42.804 "r_mbytes_per_sec": 0, 00:37:42.804 "w_mbytes_per_sec": 0 00:37:42.804 }, 00:37:42.804 "claimed": false, 00:37:42.804 "zoned": false, 00:37:42.804 "supported_io_types": { 00:37:42.804 "read": true, 00:37:42.804 "write": true, 00:37:42.804 "unmap": false, 00:37:42.804 "flush": false, 00:37:42.804 "reset": false, 00:37:42.804 "nvme_admin": false, 00:37:42.804 "nvme_io": false, 00:37:42.804 "nvme_io_md": false, 00:37:42.804 "write_zeroes": true, 00:37:42.804 "zcopy": false, 00:37:42.804 "get_zone_info": false, 00:37:42.804 "zone_management": false, 00:37:42.804 "zone_append": false, 00:37:42.804 "compare": false, 00:37:42.804 "compare_and_write": false, 00:37:42.804 "abort": false, 00:37:42.804 "seek_hole": false, 00:37:42.804 "seek_data": false, 00:37:42.804 "copy": false, 00:37:42.804 "nvme_iov_md": false 00:37:42.804 }, 00:37:42.804 "driver_specific": { 00:37:42.804 "compress": { 00:37:42.804 "name": "COMP_lvs0/lv0", 00:37:42.804 "base_bdev_name": "528ba041-8b84-4936-9d47-ead823180238", 00:37:42.804 "pm_path": "/tmp/pmem/5659ded6-6433-4883-ad26-611b0a3ae4d3" 00:37:42.804 } 00:37:42.804 } 00:37:42.804 } 00:37:42.804 ] 00:37:43.063 04:32:51 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:37:43.063 04:32:51 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:37:43.063 Running I/O for 3 seconds... 
00:37:46.352 00:37:46.352 Latency(us) 00:37:46.352 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:46.352 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:37:46.352 Verification LBA range: start 0x0 length 0x3100 00:37:46.352 COMP_lvs0/lv0 : 3.00 3259.12 12.73 0.00 0.00 9764.34 63.49 17196.65 00:37:46.352 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:37:46.352 Verification LBA range: start 0x3100 length 0x3100 00:37:46.352 COMP_lvs0/lv0 : 3.01 3254.87 12.71 0.00 0.00 9787.80 61.44 16777.22 00:37:46.352 =================================================================================================================== 00:37:46.352 Total : 6513.99 25.45 0.00 0.00 9776.06 61.44 17196.65 00:37:46.352 0 00:37:46.352 04:32:54 compress_isal -- compress/compress.sh@76 -- # destroy_vols 00:37:46.352 04:32:54 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:37:46.352 04:32:55 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:37:46.612 04:32:55 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:37:46.612 04:32:55 compress_isal -- compress/compress.sh@78 -- # killprocess 2870816 00:37:46.612 04:32:55 compress_isal -- common/autotest_common.sh@948 -- # '[' -z 2870816 ']' 00:37:46.612 04:32:55 compress_isal -- common/autotest_common.sh@952 -- # kill -0 2870816 00:37:46.612 04:32:55 compress_isal -- common/autotest_common.sh@953 -- # uname 00:37:46.612 04:32:55 compress_isal -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:46.612 04:32:55 compress_isal -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2870816 00:37:46.612 04:32:55 compress_isal -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:46.612 04:32:55 compress_isal 
-- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:46.612 04:32:55 compress_isal -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2870816' 00:37:46.612 killing process with pid 2870816 00:37:46.612 04:32:55 compress_isal -- common/autotest_common.sh@967 -- # kill 2870816 00:37:46.612 Received shutdown signal, test time was about 3.000000 seconds 00:37:46.612 00:37:46.612 Latency(us) 00:37:46.612 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:46.612 =================================================================================================================== 00:37:46.612 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:46.612 04:32:55 compress_isal -- common/autotest_common.sh@972 -- # wait 2870816 00:37:50.806 04:32:59 compress_isal -- compress/compress.sh@88 -- # run_bdevperf 32 4096 3 4096 00:37:50.806 04:32:59 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:37:50.806 04:32:59 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=2873364 00:37:50.806 04:32:59 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:50.806 04:32:59 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 00:37:50.806 04:32:59 compress_isal -- compress/compress.sh@73 -- # waitforlisten 2873364 00:37:50.806 04:32:59 compress_isal -- common/autotest_common.sh@829 -- # '[' -z 2873364 ']' 00:37:50.806 04:32:59 compress_isal -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:50.806 04:32:59 compress_isal -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:50.806 04:32:59 compress_isal -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:37:50.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:50.806 04:32:59 compress_isal -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:50.806 04:32:59 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:37:50.806 [2024-07-23 04:32:59.447657] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:37:50.806 [2024-07-23 04:32:59.447785] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2873364 ] 00:37:50.806 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:50.806 EAL: Requested device 0000:3d:01.0 cannot be used 00:37:50.806 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:50.806 EAL: Requested device 0000:3d:01.1 cannot be used 00:37:50.806 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:50.806 EAL: Requested device 0000:3d:01.2 cannot be used 00:37:50.806 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:50.806 EAL: Requested device 0000:3d:01.3 cannot be used 00:37:50.806 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:50.806 EAL: Requested device 0000:3d:01.4 cannot be used 00:37:50.806 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:50.806 EAL: Requested device 0000:3d:01.5 cannot be used 00:37:50.806 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:50.806 EAL: Requested device 0000:3d:01.6 cannot be used 00:37:50.806 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:50.806 EAL: Requested device 0000:3d:01.7 cannot be used 00:37:50.806 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:50.806 EAL: Requested device 0000:3d:02.0 cannot be used 00:37:50.806 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:37:50.806 EAL: Requested device 0000:3d:02.1 cannot be used 00:37:50.806 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:50.806 EAL: Requested device 0000:3d:02.2 cannot be used 00:37:50.806 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:50.806 EAL: Requested device 0000:3d:02.3 cannot be used 00:37:50.806 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:50.806 EAL: Requested device 0000:3d:02.4 cannot be used 00:37:50.806 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:50.806 EAL: Requested device 0000:3d:02.5 cannot be used 00:37:50.806 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:50.806 EAL: Requested device 0000:3d:02.6 cannot be used 00:37:50.806 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:50.806 EAL: Requested device 0000:3d:02.7 cannot be used 00:37:50.806 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:50.806 EAL: Requested device 0000:3f:01.0 cannot be used 00:37:50.806 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:50.806 EAL: Requested device 0000:3f:01.1 cannot be used 00:37:50.806 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:50.806 EAL: Requested device 0000:3f:01.2 cannot be used 00:37:50.806 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:50.806 EAL: Requested device 0000:3f:01.3 cannot be used 00:37:50.806 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:50.806 EAL: Requested device 0000:3f:01.4 cannot be used 00:37:50.806 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:50.806 EAL: Requested device 0000:3f:01.5 cannot be used 00:37:50.806 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:50.806 EAL: Requested device 0000:3f:01.6 cannot be used 00:37:50.806 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:37:50.806 EAL: Requested device 0000:3f:01.7 cannot be used 00:37:50.806 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:50.806 EAL: Requested device 0000:3f:02.0 cannot be used 00:37:50.806 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:50.806 EAL: Requested device 0000:3f:02.1 cannot be used 00:37:50.806 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:50.806 EAL: Requested device 0000:3f:02.2 cannot be used 00:37:50.806 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:50.806 EAL: Requested device 0000:3f:02.3 cannot be used 00:37:50.806 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:50.806 EAL: Requested device 0000:3f:02.4 cannot be used 00:37:50.806 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:50.806 EAL: Requested device 0000:3f:02.5 cannot be used 00:37:50.806 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:50.806 EAL: Requested device 0000:3f:02.6 cannot be used 00:37:50.806 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:50.806 EAL: Requested device 0000:3f:02.7 cannot be used 00:37:51.065 [2024-07-23 04:32:59.660550] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:51.324 [2024-07-23 04:32:59.929087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:51.324 [2024-07-23 04:32:59.929095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:37:51.892 04:33:00 compress_isal -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:51.892 04:33:00 compress_isal -- common/autotest_common.sh@862 -- # return 0 00:37:51.892 04:33:00 compress_isal -- compress/compress.sh@74 -- # create_vols 4096 00:37:51.892 04:33:00 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:37:51.892 04:33:00 compress_isal -- compress/compress.sh@34 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:37:55.243 04:33:03 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:37:55.243 04:33:03 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:37:55.243 04:33:03 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:37:55.243 04:33:03 compress_isal -- common/autotest_common.sh@899 -- # local i 00:37:55.243 04:33:03 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:37:55.243 04:33:03 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:37:55.244 04:33:03 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:55.244 04:33:03 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:37:55.502 [ 00:37:55.502 { 00:37:55.502 "name": "Nvme0n1", 00:37:55.502 "aliases": [ 00:37:55.502 "050b1266-326e-4134-bba8-5fe243d69d14" 00:37:55.502 ], 00:37:55.502 "product_name": "NVMe disk", 00:37:55.502 "block_size": 512, 00:37:55.502 "num_blocks": 3907029168, 00:37:55.502 "uuid": "050b1266-326e-4134-bba8-5fe243d69d14", 00:37:55.502 "assigned_rate_limits": { 00:37:55.502 "rw_ios_per_sec": 0, 00:37:55.502 "rw_mbytes_per_sec": 0, 00:37:55.502 "r_mbytes_per_sec": 0, 00:37:55.502 "w_mbytes_per_sec": 0 00:37:55.502 }, 00:37:55.502 "claimed": false, 00:37:55.502 "zoned": false, 00:37:55.502 "supported_io_types": { 00:37:55.502 "read": true, 00:37:55.502 "write": true, 00:37:55.502 "unmap": true, 00:37:55.502 "flush": true, 00:37:55.502 "reset": true, 00:37:55.502 "nvme_admin": true, 00:37:55.502 "nvme_io": true, 00:37:55.502 "nvme_io_md": false, 00:37:55.502 "write_zeroes": true, 00:37:55.502 "zcopy": false, 00:37:55.502 "get_zone_info": false, 00:37:55.502 "zone_management": false, 00:37:55.502 "zone_append": false, 
00:37:55.502 "compare": false, 00:37:55.502 "compare_and_write": false, 00:37:55.502 "abort": true, 00:37:55.502 "seek_hole": false, 00:37:55.502 "seek_data": false, 00:37:55.502 "copy": false, 00:37:55.502 "nvme_iov_md": false 00:37:55.502 }, 00:37:55.502 "driver_specific": { 00:37:55.502 "nvme": [ 00:37:55.502 { 00:37:55.502 "pci_address": "0000:d8:00.0", 00:37:55.502 "trid": { 00:37:55.502 "trtype": "PCIe", 00:37:55.502 "traddr": "0000:d8:00.0" 00:37:55.502 }, 00:37:55.502 "ctrlr_data": { 00:37:55.502 "cntlid": 0, 00:37:55.502 "vendor_id": "0x8086", 00:37:55.502 "model_number": "INTEL SSDPE2KX020T8", 00:37:55.502 "serial_number": "BTLJ125505KA2P0BGN", 00:37:55.502 "firmware_revision": "VDV10170", 00:37:55.502 "oacs": { 00:37:55.502 "security": 0, 00:37:55.502 "format": 1, 00:37:55.502 "firmware": 1, 00:37:55.502 "ns_manage": 1 00:37:55.502 }, 00:37:55.502 "multi_ctrlr": false, 00:37:55.502 "ana_reporting": false 00:37:55.502 }, 00:37:55.502 "vs": { 00:37:55.502 "nvme_version": "1.2" 00:37:55.502 }, 00:37:55.502 "ns_data": { 00:37:55.502 "id": 1, 00:37:55.502 "can_share": false 00:37:55.502 } 00:37:55.502 } 00:37:55.502 ], 00:37:55.502 "mp_policy": "active_passive" 00:37:55.502 } 00:37:55.502 } 00:37:55.502 ] 00:37:55.502 04:33:04 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:37:55.502 04:33:04 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:37:56.880 60f14ca4-8de7-4b84-948a-0916429939cc 00:37:56.880 04:33:05 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:37:56.880 e47bdf37-c993-4c7b-82fb-3422a4707211 00:37:56.880 04:33:05 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:37:56.880 04:33:05 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:37:56.880 04:33:05 compress_isal -- 
common/autotest_common.sh@898 -- # local bdev_timeout= 00:37:56.880 04:33:05 compress_isal -- common/autotest_common.sh@899 -- # local i 00:37:56.880 04:33:05 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:37:56.880 04:33:05 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:37:56.880 04:33:05 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:57.138 04:33:05 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:37:57.397 [ 00:37:57.397 { 00:37:57.397 "name": "e47bdf37-c993-4c7b-82fb-3422a4707211", 00:37:57.397 "aliases": [ 00:37:57.397 "lvs0/lv0" 00:37:57.397 ], 00:37:57.397 "product_name": "Logical Volume", 00:37:57.397 "block_size": 512, 00:37:57.397 "num_blocks": 204800, 00:37:57.397 "uuid": "e47bdf37-c993-4c7b-82fb-3422a4707211", 00:37:57.397 "assigned_rate_limits": { 00:37:57.397 "rw_ios_per_sec": 0, 00:37:57.397 "rw_mbytes_per_sec": 0, 00:37:57.397 "r_mbytes_per_sec": 0, 00:37:57.397 "w_mbytes_per_sec": 0 00:37:57.397 }, 00:37:57.397 "claimed": false, 00:37:57.397 "zoned": false, 00:37:57.397 "supported_io_types": { 00:37:57.397 "read": true, 00:37:57.397 "write": true, 00:37:57.397 "unmap": true, 00:37:57.397 "flush": false, 00:37:57.397 "reset": true, 00:37:57.397 "nvme_admin": false, 00:37:57.397 "nvme_io": false, 00:37:57.397 "nvme_io_md": false, 00:37:57.397 "write_zeroes": true, 00:37:57.397 "zcopy": false, 00:37:57.397 "get_zone_info": false, 00:37:57.397 "zone_management": false, 00:37:57.397 "zone_append": false, 00:37:57.397 "compare": false, 00:37:57.397 "compare_and_write": false, 00:37:57.397 "abort": false, 00:37:57.397 "seek_hole": true, 00:37:57.397 "seek_data": true, 00:37:57.397 "copy": false, 00:37:57.397 "nvme_iov_md": false 00:37:57.397 }, 00:37:57.397 "driver_specific": { 00:37:57.397 "lvol": { 00:37:57.397 
"lvol_store_uuid": "60f14ca4-8de7-4b84-948a-0916429939cc", 00:37:57.397 "base_bdev": "Nvme0n1", 00:37:57.397 "thin_provision": true, 00:37:57.397 "num_allocated_clusters": 0, 00:37:57.397 "snapshot": false, 00:37:57.397 "clone": false, 00:37:57.397 "esnap_clone": false 00:37:57.397 } 00:37:57.397 } 00:37:57.397 } 00:37:57.397 ] 00:37:57.397 04:33:06 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:37:57.397 04:33:06 compress_isal -- compress/compress.sh@41 -- # '[' -z 4096 ']' 00:37:57.397 04:33:06 compress_isal -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 4096 00:37:57.655 [2024-07-23 04:33:06.253963] vbdev_compress.c: 999:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:37:57.655 COMP_lvs0/lv0 00:37:57.655 04:33:06 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:37:57.655 04:33:06 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:37:57.655 04:33:06 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:37:57.655 04:33:06 compress_isal -- common/autotest_common.sh@899 -- # local i 00:37:57.655 04:33:06 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:37:57.655 04:33:06 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:37:57.655 04:33:06 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:57.914 04:33:06 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:37:58.173 [ 00:37:58.173 { 00:37:58.173 "name": "COMP_lvs0/lv0", 00:37:58.173 "aliases": [ 00:37:58.173 "051e10a9-bf1c-5bd0-8230-879e5832b008" 00:37:58.173 ], 00:37:58.173 "product_name": "compress", 00:37:58.173 "block_size": 4096, 00:37:58.173 
"num_blocks": 25088, 00:37:58.173 "uuid": "051e10a9-bf1c-5bd0-8230-879e5832b008", 00:37:58.173 "assigned_rate_limits": { 00:37:58.173 "rw_ios_per_sec": 0, 00:37:58.173 "rw_mbytes_per_sec": 0, 00:37:58.173 "r_mbytes_per_sec": 0, 00:37:58.173 "w_mbytes_per_sec": 0 00:37:58.173 }, 00:37:58.173 "claimed": false, 00:37:58.173 "zoned": false, 00:37:58.173 "supported_io_types": { 00:37:58.173 "read": true, 00:37:58.173 "write": true, 00:37:58.173 "unmap": false, 00:37:58.173 "flush": false, 00:37:58.173 "reset": false, 00:37:58.173 "nvme_admin": false, 00:37:58.173 "nvme_io": false, 00:37:58.173 "nvme_io_md": false, 00:37:58.173 "write_zeroes": true, 00:37:58.173 "zcopy": false, 00:37:58.173 "get_zone_info": false, 00:37:58.173 "zone_management": false, 00:37:58.173 "zone_append": false, 00:37:58.173 "compare": false, 00:37:58.173 "compare_and_write": false, 00:37:58.173 "abort": false, 00:37:58.173 "seek_hole": false, 00:37:58.173 "seek_data": false, 00:37:58.173 "copy": false, 00:37:58.173 "nvme_iov_md": false 00:37:58.173 }, 00:37:58.173 "driver_specific": { 00:37:58.173 "compress": { 00:37:58.173 "name": "COMP_lvs0/lv0", 00:37:58.173 "base_bdev_name": "e47bdf37-c993-4c7b-82fb-3422a4707211", 00:37:58.173 "pm_path": "/tmp/pmem/a5c2104a-ad8e-4d70-960e-34fb4af257f6" 00:37:58.173 } 00:37:58.173 } 00:37:58.173 } 00:37:58.173 ] 00:37:58.173 04:33:06 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:37:58.173 04:33:06 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:37:58.173 Running I/O for 3 seconds... 
00:38:01.458 00:38:01.458 Latency(us) 00:38:01.458 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:01.458 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:38:01.458 Verification LBA range: start 0x0 length 0x3100 00:38:01.458 COMP_lvs0/lv0 : 3.01 3280.56 12.81 0.00 0.00 9694.22 62.26 14889.78 00:38:01.458 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:38:01.458 Verification LBA range: start 0x3100 length 0x3100 00:38:01.458 COMP_lvs0/lv0 : 3.01 3288.50 12.85 0.00 0.00 9668.76 61.44 15099.49 00:38:01.458 =================================================================================================================== 00:38:01.458 Total : 6569.05 25.66 0.00 0.00 9681.47 61.44 15099.49 00:38:01.458 0 00:38:01.458 04:33:09 compress_isal -- compress/compress.sh@76 -- # destroy_vols 00:38:01.458 04:33:09 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:38:01.458 04:33:10 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:38:01.716 04:33:10 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:38:01.716 04:33:10 compress_isal -- compress/compress.sh@78 -- # killprocess 2873364 00:38:01.716 04:33:10 compress_isal -- common/autotest_common.sh@948 -- # '[' -z 2873364 ']' 00:38:01.716 04:33:10 compress_isal -- common/autotest_common.sh@952 -- # kill -0 2873364 00:38:01.716 04:33:10 compress_isal -- common/autotest_common.sh@953 -- # uname 00:38:01.716 04:33:10 compress_isal -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:01.716 04:33:10 compress_isal -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2873364 00:38:01.716 04:33:10 compress_isal -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:38:01.716 04:33:10 compress_isal 
-- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:38:01.716 04:33:10 compress_isal -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2873364' 00:38:01.716 killing process with pid 2873364 00:38:01.716 04:33:10 compress_isal -- common/autotest_common.sh@967 -- # kill 2873364 00:38:01.716 Received shutdown signal, test time was about 3.000000 seconds 00:38:01.716 00:38:01.717 Latency(us) 00:38:01.717 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:01.717 =================================================================================================================== 00:38:01.717 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:01.717 04:33:10 compress_isal -- common/autotest_common.sh@972 -- # wait 2873364 00:38:05.907 04:33:14 compress_isal -- compress/compress.sh@89 -- # run_bdevio 00:38:05.907 04:33:14 compress_isal -- compress/compress.sh@50 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:38:05.907 04:33:14 compress_isal -- compress/compress.sh@55 -- # bdevio_pid=2876306 00:38:05.907 04:33:14 compress_isal -- compress/compress.sh@56 -- # trap 'killprocess $bdevio_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:38:05.907 04:33:14 compress_isal -- compress/compress.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w 00:38:05.907 04:33:14 compress_isal -- compress/compress.sh@57 -- # waitforlisten 2876306 00:38:05.907 04:33:14 compress_isal -- common/autotest_common.sh@829 -- # '[' -z 2876306 ']' 00:38:05.907 04:33:14 compress_isal -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:05.907 04:33:14 compress_isal -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:05.907 04:33:14 compress_isal -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:05.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:05.907 04:33:14 compress_isal -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:05.907 04:33:14 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:38:05.907 [2024-07-23 04:33:14.542271] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:38:05.907 [2024-07-23 04:33:14.542394] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2876306 ] 00:38:05.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:05.907 EAL: Requested device 0000:3d:01.0 cannot be used 00:38:05.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:05.907 EAL: Requested device 0000:3d:01.1 cannot be used 00:38:05.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:05.907 EAL: Requested device 0000:3d:01.2 cannot be used 00:38:05.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:05.907 EAL: Requested device 0000:3d:01.3 cannot be used 00:38:05.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:05.907 EAL: Requested device 0000:3d:01.4 cannot be used 00:38:05.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:05.907 EAL: Requested device 0000:3d:01.5 cannot be used 00:38:05.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:05.907 EAL: Requested device 0000:3d:01.6 cannot be used 00:38:05.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:05.907 EAL: Requested device 0000:3d:01.7 cannot be used 00:38:05.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:05.907 EAL: Requested device 0000:3d:02.0 cannot be used 00:38:05.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:05.907 EAL: Requested device 0000:3d:02.1 cannot be used 
00:38:05.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:05.907 EAL: Requested device 0000:3d:02.2 cannot be used 00:38:05.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:05.907 EAL: Requested device 0000:3d:02.3 cannot be used 00:38:05.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:05.907 EAL: Requested device 0000:3d:02.4 cannot be used 00:38:05.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:05.907 EAL: Requested device 0000:3d:02.5 cannot be used 00:38:05.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:05.907 EAL: Requested device 0000:3d:02.6 cannot be used 00:38:05.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:05.907 EAL: Requested device 0000:3d:02.7 cannot be used 00:38:05.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:05.907 EAL: Requested device 0000:3f:01.0 cannot be used 00:38:05.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:05.907 EAL: Requested device 0000:3f:01.1 cannot be used 00:38:05.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:05.907 EAL: Requested device 0000:3f:01.2 cannot be used 00:38:05.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:05.907 EAL: Requested device 0000:3f:01.3 cannot be used 00:38:05.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:05.907 EAL: Requested device 0000:3f:01.4 cannot be used 00:38:05.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:05.907 EAL: Requested device 0000:3f:01.5 cannot be used 00:38:05.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:05.907 EAL: Requested device 0000:3f:01.6 cannot be used 00:38:05.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:05.907 EAL: Requested device 0000:3f:01.7 cannot be used 00:38:05.907 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:05.907 EAL: Requested device 0000:3f:02.0 cannot be used 00:38:05.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:05.907 EAL: Requested device 0000:3f:02.1 cannot be used 00:38:05.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:05.907 EAL: Requested device 0000:3f:02.2 cannot be used 00:38:05.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:05.907 EAL: Requested device 0000:3f:02.3 cannot be used 00:38:05.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:05.907 EAL: Requested device 0000:3f:02.4 cannot be used 00:38:05.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:05.907 EAL: Requested device 0000:3f:02.5 cannot be used 00:38:05.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:05.907 EAL: Requested device 0000:3f:02.6 cannot be used 00:38:05.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:05.907 EAL: Requested device 0000:3f:02.7 cannot be used 00:38:06.166 [2024-07-23 04:33:14.767942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:06.425 [2024-07-23 04:33:15.063477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:06.425 [2024-07-23 04:33:15.063544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:06.425 [2024-07-23 04:33:15.063549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:38:06.993 04:33:15 compress_isal -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:06.993 04:33:15 compress_isal -- common/autotest_common.sh@862 -- # return 0 00:38:06.993 04:33:15 compress_isal -- compress/compress.sh@58 -- # create_vols 00:38:06.993 04:33:15 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:38:06.993 04:33:15 compress_isal -- 
compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:38:10.282 04:33:18 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:38:10.282 04:33:18 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:38:10.282 04:33:18 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:38:10.282 04:33:18 compress_isal -- common/autotest_common.sh@899 -- # local i 00:38:10.282 04:33:18 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:38:10.282 04:33:18 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:38:10.282 04:33:18 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:10.282 04:33:19 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:38:10.539 [ 00:38:10.540 { 00:38:10.540 "name": "Nvme0n1", 00:38:10.540 "aliases": [ 00:38:10.540 "2172421f-5ae6-4ea8-8b3b-78928741ff30" 00:38:10.540 ], 00:38:10.540 "product_name": "NVMe disk", 00:38:10.540 "block_size": 512, 00:38:10.540 "num_blocks": 3907029168, 00:38:10.540 "uuid": "2172421f-5ae6-4ea8-8b3b-78928741ff30", 00:38:10.540 "assigned_rate_limits": { 00:38:10.540 "rw_ios_per_sec": 0, 00:38:10.540 "rw_mbytes_per_sec": 0, 00:38:10.540 "r_mbytes_per_sec": 0, 00:38:10.540 "w_mbytes_per_sec": 0 00:38:10.540 }, 00:38:10.540 "claimed": false, 00:38:10.540 "zoned": false, 00:38:10.540 "supported_io_types": { 00:38:10.540 "read": true, 00:38:10.540 "write": true, 00:38:10.540 "unmap": true, 00:38:10.540 "flush": true, 00:38:10.540 "reset": true, 00:38:10.540 "nvme_admin": true, 00:38:10.540 "nvme_io": true, 00:38:10.540 "nvme_io_md": false, 00:38:10.540 "write_zeroes": true, 00:38:10.540 "zcopy": false, 00:38:10.540 "get_zone_info": false, 00:38:10.540 "zone_management": false, 00:38:10.540 "zone_append": 
false, 00:38:10.540 "compare": false, 00:38:10.540 "compare_and_write": false, 00:38:10.540 "abort": true, 00:38:10.540 "seek_hole": false, 00:38:10.540 "seek_data": false, 00:38:10.540 "copy": false, 00:38:10.540 "nvme_iov_md": false 00:38:10.540 }, 00:38:10.540 "driver_specific": { 00:38:10.540 "nvme": [ 00:38:10.540 { 00:38:10.540 "pci_address": "0000:d8:00.0", 00:38:10.540 "trid": { 00:38:10.540 "trtype": "PCIe", 00:38:10.540 "traddr": "0000:d8:00.0" 00:38:10.540 }, 00:38:10.540 "ctrlr_data": { 00:38:10.540 "cntlid": 0, 00:38:10.540 "vendor_id": "0x8086", 00:38:10.540 "model_number": "INTEL SSDPE2KX020T8", 00:38:10.540 "serial_number": "BTLJ125505KA2P0BGN", 00:38:10.540 "firmware_revision": "VDV10170", 00:38:10.540 "oacs": { 00:38:10.540 "security": 0, 00:38:10.540 "format": 1, 00:38:10.540 "firmware": 1, 00:38:10.540 "ns_manage": 1 00:38:10.540 }, 00:38:10.540 "multi_ctrlr": false, 00:38:10.540 "ana_reporting": false 00:38:10.540 }, 00:38:10.540 "vs": { 00:38:10.540 "nvme_version": "1.2" 00:38:10.540 }, 00:38:10.540 "ns_data": { 00:38:10.540 "id": 1, 00:38:10.540 "can_share": false 00:38:10.540 } 00:38:10.540 } 00:38:10.540 ], 00:38:10.540 "mp_policy": "active_passive" 00:38:10.540 } 00:38:10.540 } 00:38:10.540 ] 00:38:10.540 04:33:19 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:38:10.540 04:33:19 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:38:11.912 1e0672a9-f8f8-408b-b63c-ca30bc6a5d63 00:38:11.912 04:33:20 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:38:12.169 40033336-19af-49a4-94a5-98a4bc999b38 00:38:12.169 04:33:20 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:38:12.169 04:33:20 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:38:12.169 04:33:20 
compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:38:12.169 04:33:20 compress_isal -- common/autotest_common.sh@899 -- # local i 00:38:12.169 04:33:20 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:38:12.169 04:33:20 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:38:12.169 04:33:20 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:12.427 04:33:20 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:38:12.427 [ 00:38:12.427 { 00:38:12.427 "name": "40033336-19af-49a4-94a5-98a4bc999b38", 00:38:12.427 "aliases": [ 00:38:12.427 "lvs0/lv0" 00:38:12.427 ], 00:38:12.427 "product_name": "Logical Volume", 00:38:12.427 "block_size": 512, 00:38:12.427 "num_blocks": 204800, 00:38:12.427 "uuid": "40033336-19af-49a4-94a5-98a4bc999b38", 00:38:12.427 "assigned_rate_limits": { 00:38:12.427 "rw_ios_per_sec": 0, 00:38:12.427 "rw_mbytes_per_sec": 0, 00:38:12.427 "r_mbytes_per_sec": 0, 00:38:12.427 "w_mbytes_per_sec": 0 00:38:12.427 }, 00:38:12.427 "claimed": false, 00:38:12.427 "zoned": false, 00:38:12.427 "supported_io_types": { 00:38:12.427 "read": true, 00:38:12.427 "write": true, 00:38:12.427 "unmap": true, 00:38:12.427 "flush": false, 00:38:12.427 "reset": true, 00:38:12.427 "nvme_admin": false, 00:38:12.427 "nvme_io": false, 00:38:12.427 "nvme_io_md": false, 00:38:12.427 "write_zeroes": true, 00:38:12.427 "zcopy": false, 00:38:12.427 "get_zone_info": false, 00:38:12.427 "zone_management": false, 00:38:12.427 "zone_append": false, 00:38:12.427 "compare": false, 00:38:12.427 "compare_and_write": false, 00:38:12.427 "abort": false, 00:38:12.427 "seek_hole": true, 00:38:12.427 "seek_data": true, 00:38:12.427 "copy": false, 00:38:12.427 "nvme_iov_md": false 00:38:12.427 }, 00:38:12.427 "driver_specific": { 00:38:12.427 
"lvol": { 00:38:12.427 "lvol_store_uuid": "1e0672a9-f8f8-408b-b63c-ca30bc6a5d63", 00:38:12.427 "base_bdev": "Nvme0n1", 00:38:12.427 "thin_provision": true, 00:38:12.427 "num_allocated_clusters": 0, 00:38:12.427 "snapshot": false, 00:38:12.427 "clone": false, 00:38:12.427 "esnap_clone": false 00:38:12.427 } 00:38:12.427 } 00:38:12.427 } 00:38:12.427 ] 00:38:12.686 04:33:21 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:38:12.686 04:33:21 compress_isal -- compress/compress.sh@41 -- # '[' -z '' ']' 00:38:12.686 04:33:21 compress_isal -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:38:12.686 [2024-07-23 04:33:21.433524] vbdev_compress.c: 999:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:38:12.686 COMP_lvs0/lv0 00:38:12.686 04:33:21 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:38:12.686 04:33:21 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:38:12.686 04:33:21 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:38:12.686 04:33:21 compress_isal -- common/autotest_common.sh@899 -- # local i 00:38:12.686 04:33:21 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:38:12.686 04:33:21 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:38:12.686 04:33:21 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:12.944 04:33:21 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:38:13.203 [ 00:38:13.203 { 00:38:13.203 "name": "COMP_lvs0/lv0", 00:38:13.203 "aliases": [ 00:38:13.203 "e3aaf6c3-94fd-5c6d-b318-c8000c241f36" 00:38:13.203 ], 00:38:13.203 "product_name": "compress", 00:38:13.203 "block_size": 512, 
00:38:13.203 "num_blocks": 200704, 00:38:13.203 "uuid": "e3aaf6c3-94fd-5c6d-b318-c8000c241f36", 00:38:13.203 "assigned_rate_limits": { 00:38:13.203 "rw_ios_per_sec": 0, 00:38:13.203 "rw_mbytes_per_sec": 0, 00:38:13.203 "r_mbytes_per_sec": 0, 00:38:13.203 "w_mbytes_per_sec": 0 00:38:13.203 }, 00:38:13.203 "claimed": false, 00:38:13.203 "zoned": false, 00:38:13.203 "supported_io_types": { 00:38:13.203 "read": true, 00:38:13.203 "write": true, 00:38:13.203 "unmap": false, 00:38:13.203 "flush": false, 00:38:13.203 "reset": false, 00:38:13.203 "nvme_admin": false, 00:38:13.203 "nvme_io": false, 00:38:13.203 "nvme_io_md": false, 00:38:13.203 "write_zeroes": true, 00:38:13.203 "zcopy": false, 00:38:13.203 "get_zone_info": false, 00:38:13.203 "zone_management": false, 00:38:13.203 "zone_append": false, 00:38:13.203 "compare": false, 00:38:13.203 "compare_and_write": false, 00:38:13.203 "abort": false, 00:38:13.203 "seek_hole": false, 00:38:13.203 "seek_data": false, 00:38:13.203 "copy": false, 00:38:13.203 "nvme_iov_md": false 00:38:13.203 }, 00:38:13.203 "driver_specific": { 00:38:13.203 "compress": { 00:38:13.203 "name": "COMP_lvs0/lv0", 00:38:13.203 "base_bdev_name": "40033336-19af-49a4-94a5-98a4bc999b38", 00:38:13.203 "pm_path": "/tmp/pmem/81c1ebd2-192a-4509-85af-4e5e80c04f4d" 00:38:13.203 } 00:38:13.203 } 00:38:13.203 } 00:38:13.203 ] 00:38:13.203 04:33:21 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:38:13.203 04:33:21 compress_isal -- compress/compress.sh@59 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:38:13.493 I/O targets: 00:38:13.493 COMP_lvs0/lv0: 200704 blocks of 512 bytes (98 MiB) 00:38:13.493 00:38:13.493 00:38:13.493 CUnit - A unit testing framework for C - Version 2.1-3 00:38:13.493 http://cunit.sourceforge.net/ 00:38:13.493 00:38:13.493 00:38:13.493 Suite: bdevio tests on: COMP_lvs0/lv0 00:38:13.493 Test: blockdev write read block ...passed 00:38:13.493 Test: blockdev write zeroes read 
block ...passed 00:38:13.493 Test: blockdev write zeroes read no split ...passed 00:38:13.493 Test: blockdev write zeroes read split ...passed 00:38:13.493 Test: blockdev write zeroes read split partial ...passed 00:38:13.493 Test: blockdev reset ...[2024-07-23 04:33:22.142605] vbdev_compress.c: 252:vbdev_compress_submit_request: *ERROR*: Unknown I/O type 5 00:38:13.493 passed 00:38:13.493 Test: blockdev write read 8 blocks ...passed 00:38:13.493 Test: blockdev write read size > 128k ...passed 00:38:13.493 Test: blockdev write read invalid size ...passed 00:38:13.493 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:38:13.493 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:38:13.493 Test: blockdev write read max offset ...passed 00:38:13.493 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:38:13.493 Test: blockdev writev readv 8 blocks ...passed 00:38:13.493 Test: blockdev writev readv 30 x 1block ...passed 00:38:13.493 Test: blockdev writev readv block ...passed 00:38:13.493 Test: blockdev writev readv size > 128k ...passed 00:38:13.493 Test: blockdev writev readv size > 128k in two iovs ...passed 00:38:13.493 Test: blockdev comparev and writev ...passed 00:38:13.493 Test: blockdev nvme passthru rw ...passed 00:38:13.493 Test: blockdev nvme passthru vendor specific ...passed 00:38:13.493 Test: blockdev nvme admin passthru ...passed 00:38:13.493 Test: blockdev copy ...passed 00:38:13.493 00:38:13.493 Run Summary: Type Total Ran Passed Failed Inactive 00:38:13.493 suites 1 1 n/a 0 0 00:38:13.493 tests 23 23 23 0 0 00:38:13.493 asserts 130 130 130 0 n/a 00:38:13.493 00:38:13.493 Elapsed time = 0.420 seconds 00:38:13.493 0 00:38:13.493 04:33:22 compress_isal -- compress/compress.sh@60 -- # destroy_vols 00:38:13.493 04:33:22 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:38:13.774 04:33:22 
compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:38:14.033 04:33:22 compress_isal -- compress/compress.sh@61 -- # trap - SIGINT SIGTERM EXIT 00:38:14.033 04:33:22 compress_isal -- compress/compress.sh@62 -- # killprocess 2876306 00:38:14.033 04:33:22 compress_isal -- common/autotest_common.sh@948 -- # '[' -z 2876306 ']' 00:38:14.033 04:33:22 compress_isal -- common/autotest_common.sh@952 -- # kill -0 2876306 00:38:14.033 04:33:22 compress_isal -- common/autotest_common.sh@953 -- # uname 00:38:14.033 04:33:22 compress_isal -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:14.033 04:33:22 compress_isal -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2876306 00:38:14.033 04:33:22 compress_isal -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:38:14.033 04:33:22 compress_isal -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:38:14.033 04:33:22 compress_isal -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2876306' 00:38:14.033 killing process with pid 2876306 00:38:14.033 04:33:22 compress_isal -- common/autotest_common.sh@967 -- # kill 2876306 00:38:14.033 04:33:22 compress_isal -- common/autotest_common.sh@972 -- # wait 2876306 00:38:18.223 04:33:26 compress_isal -- compress/compress.sh@91 -- # '[' 1 -eq 1 ']' 00:38:18.223 04:33:26 compress_isal -- compress/compress.sh@92 -- # run_bdevperf 64 16384 30 00:38:18.223 04:33:26 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:38:18.223 04:33:26 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=2878322 00:38:18.223 04:33:26 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:38:18.223 04:33:26 compress_isal -- compress/compress.sh@73 -- # waitforlisten 2878322 00:38:18.223 04:33:26 compress_isal -- compress/compress.sh@69 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 64 -o 16384 -w verify -t 30 -C -m 0x6 00:38:18.223 04:33:26 compress_isal -- common/autotest_common.sh@829 -- # '[' -z 2878322 ']' 00:38:18.223 04:33:26 compress_isal -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:18.223 04:33:26 compress_isal -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:18.223 04:33:26 compress_isal -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:18.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:18.223 04:33:26 compress_isal -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:18.223 04:33:26 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:38:18.223 [2024-07-23 04:33:26.872946] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:38:18.223 [2024-07-23 04:33:26.873070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2878322 ] 00:38:18.223 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:18.223 EAL: Requested device 0000:3d:01.0 cannot be used 00:38:18.223 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:18.223 EAL: Requested device 0000:3d:01.1 cannot be used 00:38:18.223 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:18.223 EAL: Requested device 0000:3d:01.2 cannot be used 00:38:18.223 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:18.223 EAL: Requested device 0000:3d:01.3 cannot be used 00:38:18.223 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:18.223 EAL: Requested device 0000:3d:01.4 cannot be used 00:38:18.223 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:18.223 EAL: Requested device 0000:3d:01.5 cannot be used 00:38:18.223 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:18.223 EAL: Requested device 0000:3d:01.6 cannot be used 00:38:18.223 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:18.223 EAL: Requested device 0000:3d:01.7 cannot be used 00:38:18.223 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:18.223 EAL: Requested device 0000:3d:02.0 cannot be used 00:38:18.223 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:18.223 EAL: Requested device 0000:3d:02.1 cannot be used 00:38:18.223 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:18.223 EAL: Requested device 0000:3d:02.2 cannot be used 00:38:18.223 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:18.223 EAL: Requested device 0000:3d:02.3 cannot be used 00:38:18.223 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:18.223 EAL: Requested device 0000:3d:02.4 cannot be used 00:38:18.223 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:18.223 EAL: Requested device 0000:3d:02.5 cannot be used 00:38:18.223 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:18.223 EAL: Requested device 0000:3d:02.6 cannot be used 00:38:18.223 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:18.223 EAL: Requested device 0000:3d:02.7 cannot be used 00:38:18.223 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:18.223 EAL: Requested device 0000:3f:01.0 cannot be used 00:38:18.223 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:18.223 EAL: Requested device 0000:3f:01.1 cannot be used 00:38:18.223 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:18.223 EAL: Requested device 0000:3f:01.2 cannot be used 00:38:18.223 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:18.223 EAL: Requested device 0000:3f:01.3 cannot be used 00:38:18.223 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:18.223 EAL: Requested device 0000:3f:01.4 cannot be used 00:38:18.223 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:18.223 EAL: Requested device 0000:3f:01.5 cannot be used 00:38:18.223 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:18.223 EAL: Requested device 0000:3f:01.6 cannot be used 00:38:18.223 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:18.223 EAL: Requested device 0000:3f:01.7 cannot be used 00:38:18.223 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:18.223 EAL: Requested device 0000:3f:02.0 cannot be used 00:38:18.223 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:18.223 EAL: Requested device 0000:3f:02.1 cannot be used 00:38:18.223 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:18.223 EAL: Requested device 0000:3f:02.2 cannot be used 00:38:18.223 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:18.223 EAL: Requested device 0000:3f:02.3 cannot be used 00:38:18.223 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:18.223 EAL: Requested device 0000:3f:02.4 cannot be used 00:38:18.223 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:18.223 EAL: Requested device 0000:3f:02.5 cannot be used 00:38:18.223 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:18.223 EAL: Requested device 0000:3f:02.6 cannot be used 00:38:18.223 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:18.223 EAL: Requested device 0000:3f:02.7 cannot be used 00:38:18.482 [2024-07-23 04:33:27.086859] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:18.741 [2024-07-23 04:33:27.357360] reactor.c: 941:reactor_run: 
*NOTICE*: Reactor started on core 1 00:38:18.741 [2024-07-23 04:33:27.357361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:38:19.309 04:33:27 compress_isal -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:19.309 04:33:27 compress_isal -- common/autotest_common.sh@862 -- # return 0 00:38:19.309 04:33:27 compress_isal -- compress/compress.sh@74 -- # create_vols 00:38:19.309 04:33:27 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:38:19.309 04:33:27 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:38:22.598 04:33:31 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:38:22.598 04:33:31 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:38:22.598 04:33:31 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:38:22.598 04:33:31 compress_isal -- common/autotest_common.sh@899 -- # local i 00:38:22.598 04:33:31 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:38:22.598 04:33:31 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:38:22.598 04:33:31 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:22.598 04:33:31 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:38:22.857 [ 00:38:22.857 { 00:38:22.857 "name": "Nvme0n1", 00:38:22.857 "aliases": [ 00:38:22.857 "99237711-582a-4899-9e32-9bfeb95bff17" 00:38:22.857 ], 00:38:22.857 "product_name": "NVMe disk", 00:38:22.857 "block_size": 512, 00:38:22.857 "num_blocks": 3907029168, 00:38:22.857 "uuid": "99237711-582a-4899-9e32-9bfeb95bff17", 00:38:22.857 "assigned_rate_limits": { 00:38:22.857 "rw_ios_per_sec": 0, 00:38:22.857 "rw_mbytes_per_sec": 0, 
00:38:22.857 "r_mbytes_per_sec": 0, 00:38:22.857 "w_mbytes_per_sec": 0 00:38:22.857 }, 00:38:22.857 "claimed": false, 00:38:22.857 "zoned": false, 00:38:22.857 "supported_io_types": { 00:38:22.857 "read": true, 00:38:22.857 "write": true, 00:38:22.857 "unmap": true, 00:38:22.857 "flush": true, 00:38:22.857 "reset": true, 00:38:22.857 "nvme_admin": true, 00:38:22.857 "nvme_io": true, 00:38:22.857 "nvme_io_md": false, 00:38:22.857 "write_zeroes": true, 00:38:22.857 "zcopy": false, 00:38:22.857 "get_zone_info": false, 00:38:22.857 "zone_management": false, 00:38:22.857 "zone_append": false, 00:38:22.857 "compare": false, 00:38:22.857 "compare_and_write": false, 00:38:22.857 "abort": true, 00:38:22.857 "seek_hole": false, 00:38:22.857 "seek_data": false, 00:38:22.857 "copy": false, 00:38:22.857 "nvme_iov_md": false 00:38:22.857 }, 00:38:22.857 "driver_specific": { 00:38:22.857 "nvme": [ 00:38:22.857 { 00:38:22.857 "pci_address": "0000:d8:00.0", 00:38:22.857 "trid": { 00:38:22.857 "trtype": "PCIe", 00:38:22.857 "traddr": "0000:d8:00.0" 00:38:22.857 }, 00:38:22.857 "ctrlr_data": { 00:38:22.857 "cntlid": 0, 00:38:22.857 "vendor_id": "0x8086", 00:38:22.857 "model_number": "INTEL SSDPE2KX020T8", 00:38:22.857 "serial_number": "BTLJ125505KA2P0BGN", 00:38:22.857 "firmware_revision": "VDV10170", 00:38:22.857 "oacs": { 00:38:22.857 "security": 0, 00:38:22.857 "format": 1, 00:38:22.857 "firmware": 1, 00:38:22.857 "ns_manage": 1 00:38:22.857 }, 00:38:22.857 "multi_ctrlr": false, 00:38:22.857 "ana_reporting": false 00:38:22.857 }, 00:38:22.857 "vs": { 00:38:22.857 "nvme_version": "1.2" 00:38:22.857 }, 00:38:22.857 "ns_data": { 00:38:22.857 "id": 1, 00:38:22.857 "can_share": false 00:38:22.857 } 00:38:22.857 } 00:38:22.857 ], 00:38:22.857 "mp_policy": "active_passive" 00:38:22.857 } 00:38:22.857 } 00:38:22.857 ] 00:38:22.857 04:33:31 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:38:22.857 04:33:31 compress_isal -- compress/compress.sh@37 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:38:24.235 13e65bb4-3128-441b-9587-c344f760d0fb 00:38:24.235 04:33:32 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:38:24.235 7eb46cd1-fb35-453e-9e7f-32e589f20963 00:38:24.235 04:33:32 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:38:24.235 04:33:32 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:38:24.235 04:33:32 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:38:24.235 04:33:32 compress_isal -- common/autotest_common.sh@899 -- # local i 00:38:24.235 04:33:32 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:38:24.235 04:33:32 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:38:24.235 04:33:32 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:24.493 04:33:33 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:38:24.752 [ 00:38:24.752 { 00:38:24.752 "name": "7eb46cd1-fb35-453e-9e7f-32e589f20963", 00:38:24.752 "aliases": [ 00:38:24.752 "lvs0/lv0" 00:38:24.752 ], 00:38:24.752 "product_name": "Logical Volume", 00:38:24.752 "block_size": 512, 00:38:24.752 "num_blocks": 204800, 00:38:24.752 "uuid": "7eb46cd1-fb35-453e-9e7f-32e589f20963", 00:38:24.752 "assigned_rate_limits": { 00:38:24.752 "rw_ios_per_sec": 0, 00:38:24.752 "rw_mbytes_per_sec": 0, 00:38:24.752 "r_mbytes_per_sec": 0, 00:38:24.752 "w_mbytes_per_sec": 0 00:38:24.752 }, 00:38:24.752 "claimed": false, 00:38:24.752 "zoned": false, 00:38:24.752 "supported_io_types": { 00:38:24.752 "read": true, 00:38:24.752 "write": true, 00:38:24.752 "unmap": true, 00:38:24.752 "flush": false, 
00:38:24.752 "reset": true, 00:38:24.752 "nvme_admin": false, 00:38:24.752 "nvme_io": false, 00:38:24.752 "nvme_io_md": false, 00:38:24.752 "write_zeroes": true, 00:38:24.752 "zcopy": false, 00:38:24.752 "get_zone_info": false, 00:38:24.752 "zone_management": false, 00:38:24.752 "zone_append": false, 00:38:24.752 "compare": false, 00:38:24.752 "compare_and_write": false, 00:38:24.752 "abort": false, 00:38:24.752 "seek_hole": true, 00:38:24.752 "seek_data": true, 00:38:24.752 "copy": false, 00:38:24.752 "nvme_iov_md": false 00:38:24.752 }, 00:38:24.752 "driver_specific": { 00:38:24.752 "lvol": { 00:38:24.752 "lvol_store_uuid": "13e65bb4-3128-441b-9587-c344f760d0fb", 00:38:24.752 "base_bdev": "Nvme0n1", 00:38:24.752 "thin_provision": true, 00:38:24.752 "num_allocated_clusters": 0, 00:38:24.752 "snapshot": false, 00:38:24.752 "clone": false, 00:38:24.752 "esnap_clone": false 00:38:24.752 } 00:38:24.752 } 00:38:24.752 } 00:38:24.752 ] 00:38:24.752 04:33:33 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:38:24.752 04:33:33 compress_isal -- compress/compress.sh@41 -- # '[' -z '' ']' 00:38:24.752 04:33:33 compress_isal -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:38:25.011 [2024-07-23 04:33:33.667271] vbdev_compress.c: 999:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:38:25.011 COMP_lvs0/lv0 00:38:25.011 04:33:33 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:38:25.011 04:33:33 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:38:25.011 04:33:33 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:38:25.011 04:33:33 compress_isal -- common/autotest_common.sh@899 -- # local i 00:38:25.011 04:33:33 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:38:25.011 04:33:33 compress_isal -- common/autotest_common.sh@900 -- 
# bdev_timeout=2000 00:38:25.011 04:33:33 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:25.270 04:33:33 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:38:25.529 [ 00:38:25.529 { 00:38:25.529 "name": "COMP_lvs0/lv0", 00:38:25.529 "aliases": [ 00:38:25.529 "f6eedb2e-6ffc-550d-a8fc-02dae41f033a" 00:38:25.529 ], 00:38:25.529 "product_name": "compress", 00:38:25.529 "block_size": 512, 00:38:25.529 "num_blocks": 200704, 00:38:25.529 "uuid": "f6eedb2e-6ffc-550d-a8fc-02dae41f033a", 00:38:25.529 "assigned_rate_limits": { 00:38:25.529 "rw_ios_per_sec": 0, 00:38:25.529 "rw_mbytes_per_sec": 0, 00:38:25.529 "r_mbytes_per_sec": 0, 00:38:25.529 "w_mbytes_per_sec": 0 00:38:25.529 }, 00:38:25.529 "claimed": false, 00:38:25.529 "zoned": false, 00:38:25.529 "supported_io_types": { 00:38:25.529 "read": true, 00:38:25.529 "write": true, 00:38:25.529 "unmap": false, 00:38:25.529 "flush": false, 00:38:25.529 "reset": false, 00:38:25.529 "nvme_admin": false, 00:38:25.529 "nvme_io": false, 00:38:25.529 "nvme_io_md": false, 00:38:25.529 "write_zeroes": true, 00:38:25.529 "zcopy": false, 00:38:25.529 "get_zone_info": false, 00:38:25.529 "zone_management": false, 00:38:25.529 "zone_append": false, 00:38:25.529 "compare": false, 00:38:25.529 "compare_and_write": false, 00:38:25.529 "abort": false, 00:38:25.529 "seek_hole": false, 00:38:25.529 "seek_data": false, 00:38:25.529 "copy": false, 00:38:25.529 "nvme_iov_md": false 00:38:25.529 }, 00:38:25.529 "driver_specific": { 00:38:25.529 "compress": { 00:38:25.529 "name": "COMP_lvs0/lv0", 00:38:25.529 "base_bdev_name": "7eb46cd1-fb35-453e-9e7f-32e589f20963", 00:38:25.529 "pm_path": "/tmp/pmem/3cf94086-00dd-408a-9844-dd7bbb74ecbe" 00:38:25.529 } 00:38:25.529 } 00:38:25.529 } 00:38:25.529 ] 00:38:25.529 04:33:34 compress_isal -- 
common/autotest_common.sh@905 -- # return 0 00:38:25.529 04:33:34 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:38:25.529 Running I/O for 30 seconds... 00:38:57.613 00:38:57.613 Latency(us) 00:38:57.613 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:57.613 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 64, IO size: 16384) 00:38:57.613 Verification LBA range: start 0x0 length 0xc40 00:38:57.613 COMP_lvs0/lv0 : 30.01 1402.12 21.91 0.00 0.00 45420.04 838.86 39636.17 00:38:57.613 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 64, IO size: 16384) 00:38:57.613 Verification LBA range: start 0xc40 length 0xc40 00:38:57.613 COMP_lvs0/lv0 : 30.01 4628.17 72.32 0.00 0.00 13718.22 606.21 27262.98 00:38:57.613 =================================================================================================================== 00:38:57.613 Total : 6030.29 94.22 0.00 0.00 21089.95 606.21 39636.17 00:38:57.613 0 00:38:57.613 04:34:04 compress_isal -- compress/compress.sh@76 -- # destroy_vols 00:38:57.613 04:34:04 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:38:57.613 04:34:04 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:38:57.613 04:34:04 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:38:57.613 04:34:04 compress_isal -- compress/compress.sh@78 -- # killprocess 2878322 00:38:57.613 04:34:04 compress_isal -- common/autotest_common.sh@948 -- # '[' -z 2878322 ']' 00:38:57.613 04:34:04 compress_isal -- common/autotest_common.sh@952 -- # kill -0 2878322 00:38:57.613 04:34:04 compress_isal -- common/autotest_common.sh@953 -- # uname 00:38:57.613 04:34:04 compress_isal -- common/autotest_common.sh@953 -- # '[' 
Linux = Linux ']' 00:38:57.613 04:34:04 compress_isal -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2878322 00:38:57.613 04:34:04 compress_isal -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:38:57.613 04:34:04 compress_isal -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:38:57.613 04:34:04 compress_isal -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2878322' 00:38:57.613 killing process with pid 2878322 00:38:57.613 04:34:04 compress_isal -- common/autotest_common.sh@967 -- # kill 2878322 00:38:57.613 Received shutdown signal, test time was about 30.000000 seconds 00:38:57.613 00:38:57.613 Latency(us) 00:38:57.613 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:57.613 =================================================================================================================== 00:38:57.613 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:57.613 04:34:04 compress_isal -- common/autotest_common.sh@972 -- # wait 2878322 00:39:00.158 04:34:08 compress_isal -- compress/compress.sh@95 -- # export TEST_TRANSPORT=tcp 00:39:00.158 04:34:08 compress_isal -- compress/compress.sh@95 -- # TEST_TRANSPORT=tcp 00:39:00.158 04:34:08 compress_isal -- compress/compress.sh@96 -- # NET_TYPE=virt 00:39:00.159 04:34:08 compress_isal -- compress/compress.sh@96 -- # nvmftestinit 00:39:00.159 04:34:08 compress_isal -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:39:00.159 04:34:08 compress_isal -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:00.159 04:34:08 compress_isal -- nvmf/common.sh@448 -- # prepare_net_devs 00:39:00.159 04:34:08 compress_isal -- nvmf/common.sh@410 -- # local -g is_hw=no 00:39:00.159 04:34:08 compress_isal -- nvmf/common.sh@412 -- # remove_spdk_ns 00:39:00.159 04:34:08 compress_isal -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:00.159 04:34:08 compress_isal -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 
13> /dev/null' 00:39:00.159 04:34:08 compress_isal -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:00.159 04:34:08 compress_isal -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:39:00.159 04:34:08 compress_isal -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:39:00.159 04:34:08 compress_isal -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:39:00.159 04:34:08 compress_isal -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:39:00.159 04:34:08 compress_isal -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:39:00.159 04:34:08 compress_isal -- nvmf/common.sh@432 -- # nvmf_veth_init 00:39:00.159 04:34:08 compress_isal -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:00.159 04:34:08 compress_isal -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:00.159 04:34:08 compress_isal -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:39:00.159 04:34:08 compress_isal -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:39:00.159 04:34:08 compress_isal -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:39:00.159 04:34:08 compress_isal -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:39:00.159 04:34:08 compress_isal -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:39:00.159 04:34:08 compress_isal -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:00.159 04:34:08 compress_isal -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:39:00.159 04:34:08 compress_isal -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:39:00.159 04:34:08 compress_isal -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:39:00.159 04:34:08 compress_isal -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:39:00.159 04:34:08 compress_isal -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:39:00.419 04:34:08 compress_isal -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:39:00.419 Cannot find device 
"nvmf_tgt_br" 00:39:00.419 04:34:08 compress_isal -- nvmf/common.sh@155 -- # true 00:39:00.419 04:34:08 compress_isal -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:39:00.419 Cannot find device "nvmf_tgt_br2" 00:39:00.419 04:34:08 compress_isal -- nvmf/common.sh@156 -- # true 00:39:00.419 04:34:08 compress_isal -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:39:00.419 04:34:08 compress_isal -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:39:00.419 Cannot find device "nvmf_tgt_br" 00:39:00.419 04:34:09 compress_isal -- nvmf/common.sh@158 -- # true 00:39:00.419 04:34:09 compress_isal -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:39:00.419 Cannot find device "nvmf_tgt_br2" 00:39:00.419 04:34:09 compress_isal -- nvmf/common.sh@159 -- # true 00:39:00.419 04:34:09 compress_isal -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:39:00.419 04:34:09 compress_isal -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:39:00.419 04:34:09 compress_isal -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:39:00.419 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:39:00.419 04:34:09 compress_isal -- nvmf/common.sh@162 -- # true 00:39:00.419 04:34:09 compress_isal -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:39:00.419 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:39:00.419 04:34:09 compress_isal -- nvmf/common.sh@163 -- # true 00:39:00.419 04:34:09 compress_isal -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:39:00.419 04:34:09 compress_isal -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:39:00.419 04:34:09 compress_isal -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:39:00.419 04:34:09 compress_isal -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name 
nvmf_tgt_br2 00:39:00.419 04:34:09 compress_isal -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:39:00.419 04:34:09 compress_isal -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:39:00.419 04:34:09 compress_isal -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:39:00.419 04:34:09 compress_isal -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:39:00.419 04:34:09 compress_isal -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:39:00.678 04:34:09 compress_isal -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:39:00.678 04:34:09 compress_isal -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:39:00.678 04:34:09 compress_isal -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:39:00.678 04:34:09 compress_isal -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:39:00.678 04:34:09 compress_isal -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:39:00.678 04:34:09 compress_isal -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:39:00.678 04:34:09 compress_isal -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:39:00.678 04:34:09 compress_isal -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:39:00.678 04:34:09 compress_isal -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:39:00.678 04:34:09 compress_isal -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:39:00.678 04:34:09 compress_isal -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:39:00.678 04:34:09 compress_isal -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:39:00.678 04:34:09 compress_isal -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:39:00.678 04:34:09 compress_isal -- 
nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:39:00.678 04:34:09 compress_isal -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:39:00.678 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:00.678 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 00:39:00.678 00:39:00.678 --- 10.0.0.2 ping statistics --- 00:39:00.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:00.678 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:39:00.678 04:34:09 compress_isal -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:39:00.678 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:39:00.678 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:39:00.678 00:39:00.678 --- 10.0.0.3 ping statistics --- 00:39:00.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:00.678 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:39:00.678 04:34:09 compress_isal -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:39:00.678 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:00.678 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:39:00.678 00:39:00.678 --- 10.0.0.1 ping statistics --- 00:39:00.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:00.678 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:39:00.678 04:34:09 compress_isal -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:00.678 04:34:09 compress_isal -- nvmf/common.sh@433 -- # return 0 00:39:00.678 04:34:09 compress_isal -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:39:00.678 04:34:09 compress_isal -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:00.678 04:34:09 compress_isal -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:39:00.678 04:34:09 compress_isal -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:39:00.678 04:34:09 compress_isal -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:00.678 04:34:09 compress_isal -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:39:00.678 04:34:09 compress_isal -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:39:00.936 04:34:09 compress_isal -- compress/compress.sh@97 -- # nvmfappstart -m 0x7 00:39:00.936 04:34:09 compress_isal -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:39:00.936 04:34:09 compress_isal -- common/autotest_common.sh@722 -- # xtrace_disable 00:39:00.936 04:34:09 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:39:00.936 04:34:09 compress_isal -- nvmf/common.sh@481 -- # nvmfpid=2885520 00:39:00.936 04:34:09 compress_isal -- nvmf/common.sh@482 -- # waitforlisten 2885520 00:39:00.936 04:34:09 compress_isal -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:39:00.936 04:34:09 compress_isal -- common/autotest_common.sh@829 -- # '[' -z 2885520 ']' 00:39:00.936 04:34:09 compress_isal -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:00.936 04:34:09 compress_isal -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:39:00.936 04:34:09 compress_isal -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:00.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:00.936 04:34:09 compress_isal -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:00.936 04:34:09 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:39:00.936 [2024-07-23 04:34:09.615542] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:39:00.936 [2024-07-23 04:34:09.615661] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:01.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:01.195 EAL: Requested device 0000:3d:01.0 cannot be used 00:39:01.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:01.195 EAL: Requested device 0000:3d:01.1 cannot be used 00:39:01.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:01.195 EAL: Requested device 0000:3d:01.2 cannot be used 00:39:01.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:01.195 EAL: Requested device 0000:3d:01.3 cannot be used 00:39:01.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:01.195 EAL: Requested device 0000:3d:01.4 cannot be used 00:39:01.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:01.195 EAL: Requested device 0000:3d:01.5 cannot be used 00:39:01.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:01.195 EAL: Requested device 0000:3d:01.6 cannot be used 00:39:01.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:01.195 EAL: Requested device 
0000:3d:01.7 cannot be used 00:39:01.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:01.195 EAL: Requested device 0000:3d:02.0 cannot be used 00:39:01.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:01.195 EAL: Requested device 0000:3d:02.1 cannot be used 00:39:01.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:01.195 EAL: Requested device 0000:3d:02.2 cannot be used 00:39:01.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:01.195 EAL: Requested device 0000:3d:02.3 cannot be used 00:39:01.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:01.195 EAL: Requested device 0000:3d:02.4 cannot be used 00:39:01.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:01.195 EAL: Requested device 0000:3d:02.5 cannot be used 00:39:01.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:01.195 EAL: Requested device 0000:3d:02.6 cannot be used 00:39:01.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:01.195 EAL: Requested device 0000:3d:02.7 cannot be used 00:39:01.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:01.195 EAL: Requested device 0000:3f:01.0 cannot be used 00:39:01.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:01.195 EAL: Requested device 0000:3f:01.1 cannot be used 00:39:01.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:01.195 EAL: Requested device 0000:3f:01.2 cannot be used 00:39:01.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:01.195 EAL: Requested device 0000:3f:01.3 cannot be used 00:39:01.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:01.195 EAL: Requested device 0000:3f:01.4 cannot be used 00:39:01.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:01.195 EAL: Requested device 0000:3f:01.5 cannot be 
used 00:39:01.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:01.195 EAL: Requested device 0000:3f:01.6 cannot be used 00:39:01.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:01.195 EAL: Requested device 0000:3f:01.7 cannot be used 00:39:01.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:01.195 EAL: Requested device 0000:3f:02.0 cannot be used 00:39:01.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:01.195 EAL: Requested device 0000:3f:02.1 cannot be used 00:39:01.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:01.195 EAL: Requested device 0000:3f:02.2 cannot be used 00:39:01.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:01.195 EAL: Requested device 0000:3f:02.3 cannot be used 00:39:01.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:01.195 EAL: Requested device 0000:3f:02.4 cannot be used 00:39:01.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:01.195 EAL: Requested device 0000:3f:02.5 cannot be used 00:39:01.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:01.195 EAL: Requested device 0000:3f:02.6 cannot be used 00:39:01.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:01.195 EAL: Requested device 0000:3f:02.7 cannot be used 00:39:01.195 [2024-07-23 04:34:09.853120] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:01.454 [2024-07-23 04:34:10.149125] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:01.454 [2024-07-23 04:34:10.149196] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:39:01.454 [2024-07-23 04:34:10.149217] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:01.454 [2024-07-23 04:34:10.149233] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:01.454 [2024-07-23 04:34:10.149250] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:01.454 [2024-07-23 04:34:10.149343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:01.454 [2024-07-23 04:34:10.149409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:01.454 [2024-07-23 04:34:10.149419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:39:02.023 04:34:10 compress_isal -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:02.023 04:34:10 compress_isal -- common/autotest_common.sh@862 -- # return 0 00:39:02.023 04:34:10 compress_isal -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:39:02.023 04:34:10 compress_isal -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:02.023 04:34:10 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:39:02.023 04:34:10 compress_isal -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:02.023 04:34:10 compress_isal -- compress/compress.sh@98 -- # trap 'nvmftestfini; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:39:02.023 04:34:10 compress_isal -- compress/compress.sh@101 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -u 8192 00:39:02.282 [2024-07-23 04:34:10.942386] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:02.283 04:34:10 compress_isal -- compress/compress.sh@102 -- # create_vols 00:39:02.283 04:34:10 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:39:02.283 04:34:10 compress_isal -- compress/compress.sh@34 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:39:05.572 04:34:14 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:39:05.572 04:34:14 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:39:05.572 04:34:14 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:39:05.572 04:34:14 compress_isal -- common/autotest_common.sh@899 -- # local i 00:39:05.572 04:34:14 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:39:05.572 04:34:14 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:39:05.572 04:34:14 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:39:05.831 04:34:14 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:39:05.831 [ 00:39:05.831 { 00:39:05.831 "name": "Nvme0n1", 00:39:05.831 "aliases": [ 00:39:05.831 "26c8c2ac-ab43-45a8-aa9b-31a4a217bae0" 00:39:05.831 ], 00:39:05.831 "product_name": "NVMe disk", 00:39:05.831 "block_size": 512, 00:39:05.831 "num_blocks": 3907029168, 00:39:05.831 "uuid": "26c8c2ac-ab43-45a8-aa9b-31a4a217bae0", 00:39:05.831 "assigned_rate_limits": { 00:39:05.831 "rw_ios_per_sec": 0, 00:39:05.831 "rw_mbytes_per_sec": 0, 00:39:05.831 "r_mbytes_per_sec": 0, 00:39:05.831 "w_mbytes_per_sec": 0 00:39:05.831 }, 00:39:05.831 "claimed": false, 00:39:05.831 "zoned": false, 00:39:05.831 "supported_io_types": { 00:39:05.831 "read": true, 00:39:05.831 "write": true, 00:39:05.831 "unmap": true, 00:39:05.831 "flush": true, 00:39:05.831 "reset": true, 00:39:05.831 "nvme_admin": true, 00:39:05.831 "nvme_io": true, 00:39:05.831 "nvme_io_md": false, 00:39:05.831 "write_zeroes": true, 00:39:05.831 "zcopy": false, 00:39:05.831 "get_zone_info": false, 00:39:05.831 "zone_management": false, 00:39:05.831 "zone_append": false, 
00:39:05.831 "compare": false, 00:39:05.831 "compare_and_write": false, 00:39:05.831 "abort": true, 00:39:05.831 "seek_hole": false, 00:39:05.831 "seek_data": false, 00:39:05.831 "copy": false, 00:39:05.831 "nvme_iov_md": false 00:39:05.831 }, 00:39:05.831 "driver_specific": { 00:39:05.831 "nvme": [ 00:39:05.831 { 00:39:05.831 "pci_address": "0000:d8:00.0", 00:39:05.831 "trid": { 00:39:05.831 "trtype": "PCIe", 00:39:05.831 "traddr": "0000:d8:00.0" 00:39:05.831 }, 00:39:05.831 "ctrlr_data": { 00:39:05.831 "cntlid": 0, 00:39:05.831 "vendor_id": "0x8086", 00:39:05.831 "model_number": "INTEL SSDPE2KX020T8", 00:39:05.831 "serial_number": "BTLJ125505KA2P0BGN", 00:39:05.831 "firmware_revision": "VDV10170", 00:39:05.831 "oacs": { 00:39:05.831 "security": 0, 00:39:05.831 "format": 1, 00:39:05.831 "firmware": 1, 00:39:05.831 "ns_manage": 1 00:39:05.831 }, 00:39:05.831 "multi_ctrlr": false, 00:39:05.831 "ana_reporting": false 00:39:05.831 }, 00:39:05.831 "vs": { 00:39:05.831 "nvme_version": "1.2" 00:39:05.831 }, 00:39:05.831 "ns_data": { 00:39:05.831 "id": 1, 00:39:05.831 "can_share": false 00:39:05.831 } 00:39:05.831 } 00:39:05.831 ], 00:39:05.831 "mp_policy": "active_passive" 00:39:05.831 } 00:39:05.831 } 00:39:05.831 ] 00:39:06.091 04:34:14 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:39:06.091 04:34:14 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:39:07.469 f6d56d16-faf2-4b06-906a-b1beedb565e6 00:39:07.469 04:34:15 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:39:07.469 ba7a9ae1-84c4-4646-b8d8-0f05d717b202 00:39:07.469 04:34:16 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:39:07.469 04:34:16 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:39:07.469 04:34:16 compress_isal -- 
common/autotest_common.sh@898 -- # local bdev_timeout= 00:39:07.469 04:34:16 compress_isal -- common/autotest_common.sh@899 -- # local i 00:39:07.469 04:34:16 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:39:07.469 04:34:16 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:39:07.469 04:34:16 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:39:07.728 04:34:16 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:39:07.987 [ 00:39:07.987 { 00:39:07.987 "name": "ba7a9ae1-84c4-4646-b8d8-0f05d717b202", 00:39:07.987 "aliases": [ 00:39:07.987 "lvs0/lv0" 00:39:07.987 ], 00:39:07.987 "product_name": "Logical Volume", 00:39:07.987 "block_size": 512, 00:39:07.987 "num_blocks": 204800, 00:39:07.987 "uuid": "ba7a9ae1-84c4-4646-b8d8-0f05d717b202", 00:39:07.987 "assigned_rate_limits": { 00:39:07.987 "rw_ios_per_sec": 0, 00:39:07.987 "rw_mbytes_per_sec": 0, 00:39:07.987 "r_mbytes_per_sec": 0, 00:39:07.987 "w_mbytes_per_sec": 0 00:39:07.987 }, 00:39:07.987 "claimed": false, 00:39:07.987 "zoned": false, 00:39:07.987 "supported_io_types": { 00:39:07.987 "read": true, 00:39:07.987 "write": true, 00:39:07.987 "unmap": true, 00:39:07.987 "flush": false, 00:39:07.987 "reset": true, 00:39:07.987 "nvme_admin": false, 00:39:07.987 "nvme_io": false, 00:39:07.987 "nvme_io_md": false, 00:39:07.987 "write_zeroes": true, 00:39:07.987 "zcopy": false, 00:39:07.987 "get_zone_info": false, 00:39:07.987 "zone_management": false, 00:39:07.987 "zone_append": false, 00:39:07.987 "compare": false, 00:39:07.987 "compare_and_write": false, 00:39:07.987 "abort": false, 00:39:07.987 "seek_hole": true, 00:39:07.987 "seek_data": true, 00:39:07.987 "copy": false, 00:39:07.987 "nvme_iov_md": false 00:39:07.987 }, 00:39:07.987 "driver_specific": { 00:39:07.987 "lvol": { 00:39:07.987 
"lvol_store_uuid": "f6d56d16-faf2-4b06-906a-b1beedb565e6", 00:39:07.987 "base_bdev": "Nvme0n1", 00:39:07.987 "thin_provision": true, 00:39:07.987 "num_allocated_clusters": 0, 00:39:07.987 "snapshot": false, 00:39:07.987 "clone": false, 00:39:07.987 "esnap_clone": false 00:39:07.987 } 00:39:07.987 } 00:39:07.987 } 00:39:07.987 ] 00:39:07.987 04:34:16 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:39:07.987 04:34:16 compress_isal -- compress/compress.sh@41 -- # '[' -z '' ']' 00:39:07.987 04:34:16 compress_isal -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:39:08.246 [2024-07-23 04:34:16.787355] vbdev_compress.c: 999:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:39:08.246 COMP_lvs0/lv0 00:39:08.246 04:34:16 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:39:08.246 04:34:16 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:39:08.246 04:34:16 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:39:08.246 04:34:16 compress_isal -- common/autotest_common.sh@899 -- # local i 00:39:08.247 04:34:16 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:39:08.247 04:34:16 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:39:08.247 04:34:16 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:39:08.247 04:34:17 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:39:08.505 [ 00:39:08.506 { 00:39:08.506 "name": "COMP_lvs0/lv0", 00:39:08.506 "aliases": [ 00:39:08.506 "400b4127-3ec4-5a08-96cb-ad0f0cf433bc" 00:39:08.506 ], 00:39:08.506 "product_name": "compress", 00:39:08.506 "block_size": 512, 00:39:08.506 
"num_blocks": 200704, 00:39:08.506 "uuid": "400b4127-3ec4-5a08-96cb-ad0f0cf433bc", 00:39:08.506 "assigned_rate_limits": { 00:39:08.506 "rw_ios_per_sec": 0, 00:39:08.506 "rw_mbytes_per_sec": 0, 00:39:08.506 "r_mbytes_per_sec": 0, 00:39:08.506 "w_mbytes_per_sec": 0 00:39:08.506 }, 00:39:08.506 "claimed": false, 00:39:08.506 "zoned": false, 00:39:08.506 "supported_io_types": { 00:39:08.506 "read": true, 00:39:08.506 "write": true, 00:39:08.506 "unmap": false, 00:39:08.506 "flush": false, 00:39:08.506 "reset": false, 00:39:08.506 "nvme_admin": false, 00:39:08.506 "nvme_io": false, 00:39:08.506 "nvme_io_md": false, 00:39:08.506 "write_zeroes": true, 00:39:08.506 "zcopy": false, 00:39:08.506 "get_zone_info": false, 00:39:08.506 "zone_management": false, 00:39:08.506 "zone_append": false, 00:39:08.506 "compare": false, 00:39:08.506 "compare_and_write": false, 00:39:08.506 "abort": false, 00:39:08.506 "seek_hole": false, 00:39:08.506 "seek_data": false, 00:39:08.506 "copy": false, 00:39:08.506 "nvme_iov_md": false 00:39:08.506 }, 00:39:08.506 "driver_specific": { 00:39:08.506 "compress": { 00:39:08.506 "name": "COMP_lvs0/lv0", 00:39:08.506 "base_bdev_name": "ba7a9ae1-84c4-4646-b8d8-0f05d717b202", 00:39:08.506 "pm_path": "/tmp/pmem/107367c7-e366-47f5-8264-f84189ce185c" 00:39:08.506 } 00:39:08.506 } 00:39:08.506 } 00:39:08.506 ] 00:39:08.506 04:34:17 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:39:08.506 04:34:17 compress_isal -- compress/compress.sh@103 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:08.765 04:34:17 compress_isal -- compress/compress.sh@104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 COMP_lvs0/lv0 00:39:09.024 04:34:17 compress_isal -- compress/compress.sh@105 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 
-t tcp -a 10.0.0.2 -s 4420 00:39:09.283 [2024-07-23 04:34:17.924385] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:09.283 04:34:17 compress_isal -- compress/compress.sh@109 -- # perf_pid=2886975 00:39:09.283 04:34:17 compress_isal -- compress/compress.sh@112 -- # trap 'killprocess $perf_pid; compress_err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:39:09.283 04:34:17 compress_isal -- compress/compress.sh@108 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 64 -s 512 -w randrw -t 30 -c 0x18 -M 50 00:39:09.283 04:34:17 compress_isal -- compress/compress.sh@113 -- # wait 2886975 00:39:09.542 [2024-07-23 04:34:18.243682] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:39:41.690 Initializing NVMe Controllers 00:39:41.690 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:39:41.690 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:39:41.690 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:39:41.690 Initialization complete. Launching workers. 
00:39:41.690 ======================================================== 00:39:41.690 Latency(us) 00:39:41.690 Device Information : IOPS MiB/s Average min max 00:39:41.690 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 4519.37 17.65 14163.99 1871.42 39996.97 00:39:41.690 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 2830.80 11.06 22612.27 3105.47 43252.17 00:39:41.690 ======================================================== 00:39:41.690 Total : 7350.17 28.71 17417.71 1871.42 43252.17 00:39:41.690 00:39:41.690 04:34:48 compress_isal -- compress/compress.sh@114 -- # destroy_vols 00:39:41.690 04:34:48 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:39:41.690 04:34:48 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:39:41.690 04:34:48 compress_isal -- compress/compress.sh@116 -- # trap - SIGINT SIGTERM EXIT 00:39:41.690 04:34:48 compress_isal -- compress/compress.sh@117 -- # nvmftestfini 00:39:41.690 04:34:48 compress_isal -- nvmf/common.sh@488 -- # nvmfcleanup 00:39:41.690 04:34:48 compress_isal -- nvmf/common.sh@117 -- # sync 00:39:41.690 04:34:48 compress_isal -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:39:41.690 04:34:48 compress_isal -- nvmf/common.sh@120 -- # set +e 00:39:41.690 04:34:48 compress_isal -- nvmf/common.sh@121 -- # for i in {1..20} 00:39:41.690 04:34:48 compress_isal -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:39:41.690 rmmod nvme_tcp 00:39:41.690 rmmod nvme_fabrics 00:39:41.690 rmmod nvme_keyring 00:39:41.690 04:34:48 compress_isal -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:39:41.690 04:34:48 compress_isal -- nvmf/common.sh@124 -- # set -e 00:39:41.690 04:34:48 compress_isal -- nvmf/common.sh@125 -- # return 0 00:39:41.690 04:34:48 compress_isal -- nvmf/common.sh@489 -- # '[' 
-n 2885520 ']' 00:39:41.690 04:34:48 compress_isal -- nvmf/common.sh@490 -- # killprocess 2885520 00:39:41.690 04:34:48 compress_isal -- common/autotest_common.sh@948 -- # '[' -z 2885520 ']' 00:39:41.690 04:34:48 compress_isal -- common/autotest_common.sh@952 -- # kill -0 2885520 00:39:41.690 04:34:48 compress_isal -- common/autotest_common.sh@953 -- # uname 00:39:41.690 04:34:48 compress_isal -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:41.690 04:34:48 compress_isal -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2885520 00:39:41.690 04:34:49 compress_isal -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:39:41.690 04:34:49 compress_isal -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:39:41.690 04:34:49 compress_isal -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2885520' 00:39:41.690 killing process with pid 2885520 00:39:41.690 04:34:49 compress_isal -- common/autotest_common.sh@967 -- # kill 2885520 00:39:41.690 04:34:49 compress_isal -- common/autotest_common.sh@972 -- # wait 2885520 00:39:44.227 04:34:52 compress_isal -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:39:44.227 04:34:52 compress_isal -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:39:44.227 04:34:52 compress_isal -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:44.227 04:34:52 compress_isal -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:44.227 04:34:52 compress_isal -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:44.227 04:34:52 compress_isal -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:44.227 04:34:52 compress_isal -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:44.227 04:34:52 compress_isal -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:44.227 04:34:52 compress_isal -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:39:44.227 04:34:52 compress_isal -- compress/compress.sh@120 -- # rm -rf 
/tmp/pmem 00:39:44.227 00:39:44.227 real 2m23.822s 00:39:44.227 user 6m25.767s 00:39:44.227 sys 0m19.958s 00:39:44.227 04:34:52 compress_isal -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:44.227 04:34:52 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:39:44.227 ************************************ 00:39:44.227 END TEST compress_isal 00:39:44.227 ************************************ 00:39:44.227 04:34:52 -- common/autotest_common.sh@1142 -- # return 0 00:39:44.227 04:34:52 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:39:44.227 04:34:52 -- spdk/autotest.sh@356 -- # '[' 1 -eq 1 ']' 00:39:44.227 04:34:52 -- spdk/autotest.sh@357 -- # run_test blockdev_crypto_aesni /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_aesni 00:39:44.227 04:34:52 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:39:44.227 04:34:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:44.227 04:34:52 -- common/autotest_common.sh@10 -- # set +x 00:39:44.487 ************************************ 00:39:44.487 START TEST blockdev_crypto_aesni 00:39:44.487 ************************************ 00:39:44.487 04:34:53 blockdev_crypto_aesni -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_aesni 00:39:44.487 * Looking for test storage... 
00:39:44.487 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:39:44.487 04:34:53 blockdev_crypto_aesni -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:39:44.487 04:34:53 blockdev_crypto_aesni -- bdev/nbd_common.sh@6 -- # set -e 00:39:44.487 04:34:53 blockdev_crypto_aesni -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:39:44.487 04:34:53 blockdev_crypto_aesni -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:39:44.487 04:34:53 blockdev_crypto_aesni -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:39:44.487 04:34:53 blockdev_crypto_aesni -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:39:44.487 04:34:53 blockdev_crypto_aesni -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:39:44.487 04:34:53 blockdev_crypto_aesni -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:39:44.487 04:34:53 blockdev_crypto_aesni -- bdev/blockdev.sh@20 -- # : 00:39:44.487 04:34:53 blockdev_crypto_aesni -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:39:44.487 04:34:53 blockdev_crypto_aesni -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:39:44.487 04:34:53 blockdev_crypto_aesni -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:39:44.487 04:34:53 blockdev_crypto_aesni -- bdev/blockdev.sh@673 -- # uname -s 00:39:44.487 04:34:53 blockdev_crypto_aesni -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:39:44.487 04:34:53 blockdev_crypto_aesni -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:39:44.487 04:34:53 blockdev_crypto_aesni -- bdev/blockdev.sh@681 -- # test_type=crypto_aesni 00:39:44.487 04:34:53 blockdev_crypto_aesni -- bdev/blockdev.sh@682 -- # crypto_device= 00:39:44.487 04:34:53 blockdev_crypto_aesni -- bdev/blockdev.sh@683 -- # dek= 00:39:44.487 04:34:53 
blockdev_crypto_aesni -- bdev/blockdev.sh@684 -- # env_ctx= 00:39:44.487 04:34:53 blockdev_crypto_aesni -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:39:44.487 04:34:53 blockdev_crypto_aesni -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:39:44.487 04:34:53 blockdev_crypto_aesni -- bdev/blockdev.sh@689 -- # [[ crypto_aesni == bdev ]] 00:39:44.487 04:34:53 blockdev_crypto_aesni -- bdev/blockdev.sh@689 -- # [[ crypto_aesni == crypto_* ]] 00:39:44.487 04:34:53 blockdev_crypto_aesni -- bdev/blockdev.sh@690 -- # wait_for_rpc=--wait-for-rpc 00:39:44.487 04:34:53 blockdev_crypto_aesni -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:39:44.487 04:34:53 blockdev_crypto_aesni -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=2892554 00:39:44.487 04:34:53 blockdev_crypto_aesni -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:39:44.487 04:34:53 blockdev_crypto_aesni -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:39:44.487 04:34:53 blockdev_crypto_aesni -- bdev/blockdev.sh@49 -- # waitforlisten 2892554 00:39:44.487 04:34:53 blockdev_crypto_aesni -- common/autotest_common.sh@829 -- # '[' -z 2892554 ']' 00:39:44.487 04:34:53 blockdev_crypto_aesni -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:44.487 04:34:53 blockdev_crypto_aesni -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:44.487 04:34:53 blockdev_crypto_aesni -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:44.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:39:44.487 04:34:53 blockdev_crypto_aesni -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:44.487 04:34:53 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:39:44.746 [2024-07-23 04:34:53.293317] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:39:44.746 [2024-07-23 04:34:53.293440] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2892554 ] 00:39:44.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:44.746 EAL: Requested device 0000:3d:01.0 cannot be used 00:39:44.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:44.746 EAL: Requested device 0000:3d:01.1 cannot be used 00:39:44.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:44.746 EAL: Requested device 0000:3d:01.2 cannot be used 00:39:44.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:44.746 EAL: Requested device 0000:3d:01.3 cannot be used 00:39:44.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:44.746 EAL: Requested device 0000:3d:01.4 cannot be used 00:39:44.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:44.746 EAL: Requested device 0000:3d:01.5 cannot be used 00:39:44.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:44.746 EAL: Requested device 0000:3d:01.6 cannot be used 00:39:44.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:44.746 EAL: Requested device 0000:3d:01.7 cannot be used 00:39:44.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:44.746 EAL: Requested device 0000:3d:02.0 cannot be used 00:39:44.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:44.746 EAL: Requested device 0000:3d:02.1 
cannot be used 00:39:44.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:44.746 EAL: Requested device 0000:3d:02.2 cannot be used 00:39:44.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:44.746 EAL: Requested device 0000:3d:02.3 cannot be used 00:39:44.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:44.746 EAL: Requested device 0000:3d:02.4 cannot be used 00:39:44.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:44.746 EAL: Requested device 0000:3d:02.5 cannot be used 00:39:44.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:44.746 EAL: Requested device 0000:3d:02.6 cannot be used 00:39:44.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:44.746 EAL: Requested device 0000:3d:02.7 cannot be used 00:39:44.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:44.746 EAL: Requested device 0000:3f:01.0 cannot be used 00:39:44.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:44.746 EAL: Requested device 0000:3f:01.1 cannot be used 00:39:44.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:44.746 EAL: Requested device 0000:3f:01.2 cannot be used 00:39:44.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:44.746 EAL: Requested device 0000:3f:01.3 cannot be used 00:39:44.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:44.746 EAL: Requested device 0000:3f:01.4 cannot be used 00:39:44.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:44.746 EAL: Requested device 0000:3f:01.5 cannot be used 00:39:44.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:44.746 EAL: Requested device 0000:3f:01.6 cannot be used 00:39:44.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:44.746 EAL: Requested device 0000:3f:01.7 cannot be used 
00:39:44.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:44.746 EAL: Requested device 0000:3f:02.0 cannot be used 00:39:44.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:44.746 EAL: Requested device 0000:3f:02.1 cannot be used 00:39:44.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:44.746 EAL: Requested device 0000:3f:02.2 cannot be used 00:39:44.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:44.746 EAL: Requested device 0000:3f:02.3 cannot be used 00:39:44.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:44.746 EAL: Requested device 0000:3f:02.4 cannot be used 00:39:44.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:44.746 EAL: Requested device 0000:3f:02.5 cannot be used 00:39:44.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:44.746 EAL: Requested device 0000:3f:02.6 cannot be used 00:39:44.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:44.746 EAL: Requested device 0000:3f:02.7 cannot be used 00:39:44.746 [2024-07-23 04:34:53.518560] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:45.314 [2024-07-23 04:34:53.803783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:45.573 04:34:54 blockdev_crypto_aesni -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:45.573 04:34:54 blockdev_crypto_aesni -- common/autotest_common.sh@862 -- # return 0 00:39:45.573 04:34:54 blockdev_crypto_aesni -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:39:45.573 04:34:54 blockdev_crypto_aesni -- bdev/blockdev.sh@704 -- # setup_crypto_aesni_conf 00:39:45.573 04:34:54 blockdev_crypto_aesni -- bdev/blockdev.sh@145 -- # rpc_cmd 00:39:45.573 04:34:54 blockdev_crypto_aesni -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:45.573 04:34:54 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:39:45.573 
[2024-07-23 04:34:54.133405] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:39:45.573 [2024-07-23 04:34:54.141459] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:39:45.573 [2024-07-23 04:34:54.149479] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:39:45.832 [2024-07-23 04:34:54.507254] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:39:50.023 true 00:39:50.024 true 00:39:50.024 true 00:39:50.024 true 00:39:50.024 Malloc0 00:39:50.024 Malloc1 00:39:50.024 Malloc2 00:39:50.024 Malloc3 00:39:50.024 [2024-07-23 04:34:58.254172] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:39:50.024 crypto_ram 00:39:50.024 [2024-07-23 04:34:58.262292] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:39:50.024 crypto_ram2 00:39:50.024 [2024-07-23 04:34:58.270439] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:39:50.024 crypto_ram3 00:39:50.024 [2024-07-23 04:34:58.278471] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:39:50.024 crypto_ram4 00:39:50.024 04:34:58 blockdev_crypto_aesni -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:50.024 04:34:58 blockdev_crypto_aesni -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:39:50.024 04:34:58 blockdev_crypto_aesni -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:50.024 04:34:58 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:39:50.024 04:34:58 blockdev_crypto_aesni -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:50.024 04:34:58 blockdev_crypto_aesni -- bdev/blockdev.sh@739 -- # cat 00:39:50.024 04:34:58 blockdev_crypto_aesni -- bdev/blockdev.sh@739 -- # rpc_cmd 
save_subsystem_config -n accel 00:39:50.024 04:34:58 blockdev_crypto_aesni -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:50.024 04:34:58 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:39:50.024 04:34:58 blockdev_crypto_aesni -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:50.024 04:34:58 blockdev_crypto_aesni -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:39:50.024 04:34:58 blockdev_crypto_aesni -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:50.024 04:34:58 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:39:50.024 04:34:58 blockdev_crypto_aesni -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:50.024 04:34:58 blockdev_crypto_aesni -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:39:50.024 04:34:58 blockdev_crypto_aesni -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:50.024 04:34:58 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:39:50.024 04:34:58 blockdev_crypto_aesni -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:50.024 04:34:58 blockdev_crypto_aesni -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:39:50.024 04:34:58 blockdev_crypto_aesni -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:39:50.024 04:34:58 blockdev_crypto_aesni -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:39:50.024 04:34:58 blockdev_crypto_aesni -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:50.024 04:34:58 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:39:50.024 04:34:58 blockdev_crypto_aesni -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:50.024 04:34:58 blockdev_crypto_aesni -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:39:50.024 04:34:58 blockdev_crypto_aesni -- bdev/blockdev.sh@748 -- # jq -r .name 00:39:50.024 04:34:58 blockdev_crypto_aesni -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' 
"bb78b5ce-d493-534b-bcc8-5160a6f4f0aa"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "bb78b5ce-d493-534b-bcc8-5160a6f4f0aa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_aesni_cbc_1"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "9dadcafd-dec6-52d1-8244-8c60852f8d04"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "9dadcafd-dec6-52d1-8244-8c60852f8d04",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_aesni_cbc_2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "ede54529-69de-5e5a-beb3-61dab80d7e85"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "ede54529-69de-5e5a-beb3-61dab80d7e85",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_aesni_cbc_3"' ' }' ' }' '}' '{' ' "name": "crypto_ram4",' ' "aliases": [' ' "a7ff7627-4840-59eb-a927-004ccdfe5be9"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "a7ff7627-4840-59eb-a927-004ccdfe5be9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' 
"zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram4",' ' "key_name": "test_dek_aesni_cbc_4"' ' }' ' }' '}' 00:39:50.024 04:34:58 blockdev_crypto_aesni -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:39:50.024 04:34:58 blockdev_crypto_aesni -- bdev/blockdev.sh@751 -- # hello_world_bdev=crypto_ram 00:39:50.024 04:34:58 blockdev_crypto_aesni -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:39:50.024 04:34:58 blockdev_crypto_aesni -- bdev/blockdev.sh@753 -- # killprocess 2892554 00:39:50.024 04:34:58 blockdev_crypto_aesni -- common/autotest_common.sh@948 -- # '[' -z 2892554 ']' 00:39:50.024 04:34:58 blockdev_crypto_aesni -- common/autotest_common.sh@952 -- # kill -0 2892554 00:39:50.024 04:34:58 blockdev_crypto_aesni -- common/autotest_common.sh@953 -- # uname 00:39:50.024 04:34:58 blockdev_crypto_aesni -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:50.024 04:34:58 blockdev_crypto_aesni -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2892554 00:39:50.024 04:34:58 blockdev_crypto_aesni -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:39:50.024 04:34:58 blockdev_crypto_aesni -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:39:50.024 04:34:58 blockdev_crypto_aesni -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2892554' 00:39:50.024 killing process with pid 2892554 00:39:50.024 04:34:58 blockdev_crypto_aesni -- common/autotest_common.sh@967 -- # kill 2892554 00:39:50.024 04:34:58 blockdev_crypto_aesni -- common/autotest_common.sh@972 -- # wait 2892554 00:39:54.219 04:35:02 blockdev_crypto_aesni -- 
bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:39:54.219 04:35:02 blockdev_crypto_aesni -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:39:54.219 04:35:02 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:39:54.219 04:35:02 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:54.219 04:35:02 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:39:54.219 ************************************ 00:39:54.219 START TEST bdev_hello_world 00:39:54.219 ************************************ 00:39:54.219 04:35:02 blockdev_crypto_aesni.bdev_hello_world -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:39:54.219 [2024-07-23 04:35:02.800082] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:39:54.219 [2024-07-23 04:35:02.800201] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2894138 ] 00:39:54.219 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:54.219 EAL: Requested device 0000:3d:01.0 cannot be used 00:39:54.219 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:54.219 EAL: Requested device 0000:3d:01.1 cannot be used 00:39:54.219 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:54.219 EAL: Requested device 0000:3d:01.2 cannot be used 00:39:54.219 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:54.219 EAL: Requested device 0000:3d:01.3 cannot be used 00:39:54.219 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:54.219 EAL: Requested device 0000:3d:01.4 cannot be used 00:39:54.219 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:54.219 EAL: Requested device 0000:3d:01.5 cannot be used 00:39:54.219 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:54.219 EAL: Requested device 0000:3d:01.6 cannot be used 00:39:54.219 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:54.219 EAL: Requested device 0000:3d:01.7 cannot be used 00:39:54.219 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:54.219 EAL: Requested device 0000:3d:02.0 cannot be used 00:39:54.219 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:54.219 EAL: Requested device 0000:3d:02.1 cannot be used 00:39:54.219 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:54.219 EAL: Requested device 0000:3d:02.2 cannot be used 00:39:54.219 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:54.219 EAL: Requested device 0000:3d:02.3 cannot be used 
00:39:54.219 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:54.219 EAL: Requested device 0000:3d:02.4 cannot be used 00:39:54.219 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:54.219 EAL: Requested device 0000:3d:02.5 cannot be used 00:39:54.219 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:54.219 EAL: Requested device 0000:3d:02.6 cannot be used 00:39:54.219 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:54.219 EAL: Requested device 0000:3d:02.7 cannot be used 00:39:54.219 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:54.219 EAL: Requested device 0000:3f:01.0 cannot be used 00:39:54.219 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:54.219 EAL: Requested device 0000:3f:01.1 cannot be used 00:39:54.219 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:54.219 EAL: Requested device 0000:3f:01.2 cannot be used 00:39:54.219 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:54.219 EAL: Requested device 0000:3f:01.3 cannot be used 00:39:54.219 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:54.219 EAL: Requested device 0000:3f:01.4 cannot be used 00:39:54.219 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:54.219 EAL: Requested device 0000:3f:01.5 cannot be used 00:39:54.219 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:54.219 EAL: Requested device 0000:3f:01.6 cannot be used 00:39:54.219 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:54.219 EAL: Requested device 0000:3f:01.7 cannot be used 00:39:54.219 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:54.219 EAL: Requested device 0000:3f:02.0 cannot be used 00:39:54.219 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:54.219 EAL: Requested device 0000:3f:02.1 cannot be used 00:39:54.219 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:54.219 EAL: Requested device 0000:3f:02.2 cannot be used 00:39:54.219 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:54.219 EAL: Requested device 0000:3f:02.3 cannot be used 00:39:54.219 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:54.219 EAL: Requested device 0000:3f:02.4 cannot be used 00:39:54.219 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:54.219 EAL: Requested device 0000:3f:02.5 cannot be used 00:39:54.219 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:54.219 EAL: Requested device 0000:3f:02.6 cannot be used 00:39:54.219 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:54.219 EAL: Requested device 0000:3f:02.7 cannot be used 00:39:54.479 [2024-07-23 04:35:03.023935] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:54.737 [2024-07-23 04:35:03.289787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:54.737 [2024-07-23 04:35:03.311555] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:39:54.737 [2024-07-23 04:35:03.319576] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:39:54.737 [2024-07-23 04:35:03.327585] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:39:54.995 [2024-07-23 04:35:03.699672] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:39:58.283 [2024-07-23 04:35:06.585911] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:39:58.283 [2024-07-23 04:35:06.585982] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:39:58.283 [2024-07-23 04:35:06.586004] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred 
pending base bdev arrival 00:39:58.283 [2024-07-23 04:35:06.593924] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:39:58.283 [2024-07-23 04:35:06.593962] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:39:58.283 [2024-07-23 04:35:06.593978] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:39:58.283 [2024-07-23 04:35:06.601957] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:39:58.283 [2024-07-23 04:35:06.601996] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:39:58.283 [2024-07-23 04:35:06.602011] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:39:58.283 [2024-07-23 04:35:06.609958] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:39:58.283 [2024-07-23 04:35:06.609990] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:39:58.283 [2024-07-23 04:35:06.610004] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:39:58.283 [2024-07-23 04:35:06.848358] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:39:58.283 [2024-07-23 04:35:06.848407] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev crypto_ram 00:39:58.283 [2024-07-23 04:35:06.848431] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:39:58.283 [2024-07-23 04:35:06.850671] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:39:58.283 [2024-07-23 04:35:06.850778] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:39:58.283 [2024-07-23 04:35:06.850802] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:39:58.283 [2024-07-23 04:35:06.850874] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello 
World! 00:39:58.283 00:39:58.283 [2024-07-23 04:35:06.850904] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:40:00.819 00:40:00.819 real 0m6.642s 00:40:00.819 user 0m6.077s 00:40:00.819 sys 0m0.513s 00:40:00.819 04:35:09 blockdev_crypto_aesni.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:00.819 04:35:09 blockdev_crypto_aesni.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:40:00.819 ************************************ 00:40:00.819 END TEST bdev_hello_world 00:40:00.819 ************************************ 00:40:00.819 04:35:09 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 0 00:40:00.819 04:35:09 blockdev_crypto_aesni -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:40:00.819 04:35:09 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:40:00.819 04:35:09 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:00.819 04:35:09 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:40:00.819 ************************************ 00:40:00.819 START TEST bdev_bounds 00:40:00.819 ************************************ 00:40:00.819 04:35:09 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:40:00.819 04:35:09 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=2895215 00:40:00.819 04:35:09 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:40:00.819 04:35:09 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 2895215' 00:40:00.819 Process bdevio pid: 2895215 00:40:00.819 04:35:09 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 2895215 00:40:00.819 04:35:09 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 2895215 ']' 00:40:00.819 04:35:09 blockdev_crypto_aesni.bdev_bounds -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:00.819 04:35:09 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:00.819 04:35:09 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:00.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:00.819 04:35:09 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:00.819 04:35:09 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:40:00.819 04:35:09 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@288 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:40:00.819 [2024-07-23 04:35:09.520201] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:40:00.819 [2024-07-23 04:35:09.520319] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2895215 ] 00:40:01.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:01.078 EAL: Requested device 0000:3d:01.0 cannot be used 00:40:01.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:01.078 EAL: Requested device 0000:3d:01.1 cannot be used 00:40:01.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:01.078 EAL: Requested device 0000:3d:01.2 cannot be used 00:40:01.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:01.078 EAL: Requested device 0000:3d:01.3 cannot be used 00:40:01.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:01.078 EAL: Requested device 0000:3d:01.4 cannot be used 00:40:01.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:01.078 EAL: Requested device 0000:3d:01.5 cannot be used 00:40:01.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:01.078 EAL: Requested device 0000:3d:01.6 cannot be used 00:40:01.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:01.078 EAL: Requested device 0000:3d:01.7 cannot be used 00:40:01.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:01.078 EAL: Requested device 0000:3d:02.0 cannot be used 00:40:01.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:01.078 EAL: Requested device 0000:3d:02.1 cannot be used 00:40:01.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:01.078 EAL: Requested device 0000:3d:02.2 cannot be used 00:40:01.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:01.078 EAL: Requested device 0000:3d:02.3 cannot be used 
00:40:01.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:01.078 EAL: Requested device 0000:3d:02.4 cannot be used 00:40:01.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:01.078 EAL: Requested device 0000:3d:02.5 cannot be used 00:40:01.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:01.078 EAL: Requested device 0000:3d:02.6 cannot be used 00:40:01.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:01.078 EAL: Requested device 0000:3d:02.7 cannot be used 00:40:01.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:01.078 EAL: Requested device 0000:3f:01.0 cannot be used 00:40:01.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:01.078 EAL: Requested device 0000:3f:01.1 cannot be used 00:40:01.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:01.078 EAL: Requested device 0000:3f:01.2 cannot be used 00:40:01.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:01.078 EAL: Requested device 0000:3f:01.3 cannot be used 00:40:01.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:01.078 EAL: Requested device 0000:3f:01.4 cannot be used 00:40:01.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:01.078 EAL: Requested device 0000:3f:01.5 cannot be used 00:40:01.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:01.078 EAL: Requested device 0000:3f:01.6 cannot be used 00:40:01.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:01.078 EAL: Requested device 0000:3f:01.7 cannot be used 00:40:01.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:01.078 EAL: Requested device 0000:3f:02.0 cannot be used 00:40:01.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:01.078 EAL: Requested device 0000:3f:02.1 cannot be used 00:40:01.078 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:01.078 EAL: Requested device 0000:3f:02.2 cannot be used 00:40:01.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:01.078 EAL: Requested device 0000:3f:02.3 cannot be used 00:40:01.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:01.078 EAL: Requested device 0000:3f:02.4 cannot be used 00:40:01.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:01.078 EAL: Requested device 0000:3f:02.5 cannot be used 00:40:01.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:01.078 EAL: Requested device 0000:3f:02.6 cannot be used 00:40:01.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:01.078 EAL: Requested device 0000:3f:02.7 cannot be used 00:40:01.078 [2024-07-23 04:35:09.744267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:01.372 [2024-07-23 04:35:10.038333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:40:01.372 [2024-07-23 04:35:10.038418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:01.372 [2024-07-23 04:35:10.038418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:40:01.372 [2024-07-23 04:35:10.060216] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:40:01.372 [2024-07-23 04:35:10.068230] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:40:01.372 [2024-07-23 04:35:10.076262] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:40:01.960 [2024-07-23 04:35:10.458688] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:40:05.252 [2024-07-23 04:35:13.353846] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:40:05.252 [2024-07-23 
04:35:13.353917] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:40:05.252 [2024-07-23 04:35:13.353937] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:40:05.252 [2024-07-23 04:35:13.361865] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:40:05.252 [2024-07-23 04:35:13.361901] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:40:05.252 [2024-07-23 04:35:13.361917] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:40:05.252 [2024-07-23 04:35:13.369913] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:40:05.252 [2024-07-23 04:35:13.369965] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:40:05.252 [2024-07-23 04:35:13.369981] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:40:05.252 [2024-07-23 04:35:13.377902] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:40:05.252 [2024-07-23 04:35:13.377934] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:40:05.252 [2024-07-23 04:35:13.377948] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:40:05.252 04:35:13 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:05.252 04:35:13 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:40:05.252 04:35:13 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:40:05.252 I/O targets: 00:40:05.252 crypto_ram: 65536 blocks of 512 bytes (32 MiB) 00:40:05.252 crypto_ram2: 65536 blocks of 512 bytes (32 MiB) 00:40:05.252 
crypto_ram3: 8192 blocks of 4096 bytes (32 MiB) 00:40:05.252 crypto_ram4: 8192 blocks of 4096 bytes (32 MiB) 00:40:05.252 00:40:05.252 00:40:05.252 CUnit - A unit testing framework for C - Version 2.1-3 00:40:05.252 http://cunit.sourceforge.net/ 00:40:05.252 00:40:05.252 00:40:05.252 Suite: bdevio tests on: crypto_ram4 00:40:05.252 Test: blockdev write read block ...passed 00:40:05.252 Test: blockdev write zeroes read block ...passed 00:40:05.252 Test: blockdev write zeroes read no split ...passed 00:40:05.252 Test: blockdev write zeroes read split ...passed 00:40:05.512 Test: blockdev write zeroes read split partial ...passed 00:40:05.512 Test: blockdev reset ...passed 00:40:05.512 Test: blockdev write read 8 blocks ...passed 00:40:05.512 Test: blockdev write read size > 128k ...passed 00:40:05.512 Test: blockdev write read invalid size ...passed 00:40:05.512 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:40:05.512 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:40:05.512 Test: blockdev write read max offset ...passed 00:40:05.512 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:40:05.512 Test: blockdev writev readv 8 blocks ...passed 00:40:05.512 Test: blockdev writev readv 30 x 1block ...passed 00:40:05.512 Test: blockdev writev readv block ...passed 00:40:05.512 Test: blockdev writev readv size > 128k ...passed 00:40:05.512 Test: blockdev writev readv size > 128k in two iovs ...passed 00:40:05.512 Test: blockdev comparev and writev ...passed 00:40:05.512 Test: blockdev nvme passthru rw ...passed 00:40:05.512 Test: blockdev nvme passthru vendor specific ...passed 00:40:05.512 Test: blockdev nvme admin passthru ...passed 00:40:05.512 Test: blockdev copy ...passed 00:40:05.512 Suite: bdevio tests on: crypto_ram3 00:40:05.512 Test: blockdev write read block ...passed 00:40:05.512 Test: blockdev write zeroes read block ...passed 00:40:05.512 Test: blockdev write zeroes read no split 
...passed 00:40:05.512 Test: blockdev write zeroes read split ...passed 00:40:05.512 Test: blockdev write zeroes read split partial ...passed 00:40:05.512 Test: blockdev reset ...passed 00:40:05.512 Test: blockdev write read 8 blocks ...passed 00:40:05.512 Test: blockdev write read size > 128k ...passed 00:40:05.512 Test: blockdev write read invalid size ...passed 00:40:05.512 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:40:05.512 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:40:05.512 Test: blockdev write read max offset ...passed 00:40:05.512 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:40:05.512 Test: blockdev writev readv 8 blocks ...passed 00:40:05.512 Test: blockdev writev readv 30 x 1block ...passed 00:40:05.512 Test: blockdev writev readv block ...passed 00:40:05.512 Test: blockdev writev readv size > 128k ...passed 00:40:05.512 Test: blockdev writev readv size > 128k in two iovs ...passed 00:40:05.512 Test: blockdev comparev and writev ...passed 00:40:05.512 Test: blockdev nvme passthru rw ...passed 00:40:05.512 Test: blockdev nvme passthru vendor specific ...passed 00:40:05.512 Test: blockdev nvme admin passthru ...passed 00:40:05.512 Test: blockdev copy ...passed 00:40:05.512 Suite: bdevio tests on: crypto_ram2 00:40:05.512 Test: blockdev write read block ...passed 00:40:05.512 Test: blockdev write zeroes read block ...passed 00:40:05.512 Test: blockdev write zeroes read no split ...passed 00:40:05.512 Test: blockdev write zeroes read split ...passed 00:40:05.772 Test: blockdev write zeroes read split partial ...passed 00:40:05.772 Test: blockdev reset ...passed 00:40:05.772 Test: blockdev write read 8 blocks ...passed 00:40:05.772 Test: blockdev write read size > 128k ...passed 00:40:05.772 Test: blockdev write read invalid size ...passed 00:40:05.772 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:40:05.772 Test: blockdev write read 
offset + nbytes > size of blockdev ...passed 00:40:05.772 Test: blockdev write read max offset ...passed 00:40:05.772 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:40:05.772 Test: blockdev writev readv 8 blocks ...passed 00:40:05.772 Test: blockdev writev readv 30 x 1block ...passed 00:40:05.772 Test: blockdev writev readv block ...passed 00:40:05.772 Test: blockdev writev readv size > 128k ...passed 00:40:05.772 Test: blockdev writev readv size > 128k in two iovs ...passed 00:40:05.772 Test: blockdev comparev and writev ...passed 00:40:05.772 Test: blockdev nvme passthru rw ...passed 00:40:05.772 Test: blockdev nvme passthru vendor specific ...passed 00:40:05.772 Test: blockdev nvme admin passthru ...passed 00:40:05.772 Test: blockdev copy ...passed 00:40:05.772 Suite: bdevio tests on: crypto_ram 00:40:05.772 Test: blockdev write read block ...passed 00:40:05.772 Test: blockdev write zeroes read block ...passed 00:40:05.772 Test: blockdev write zeroes read no split ...passed 00:40:05.772 Test: blockdev write zeroes read split ...passed 00:40:05.772 Test: blockdev write zeroes read split partial ...passed 00:40:05.772 Test: blockdev reset ...passed 00:40:05.772 Test: blockdev write read 8 blocks ...passed 00:40:05.772 Test: blockdev write read size > 128k ...passed 00:40:05.772 Test: blockdev write read invalid size ...passed 00:40:05.772 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:40:05.772 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:40:05.772 Test: blockdev write read max offset ...passed 00:40:05.772 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:40:05.772 Test: blockdev writev readv 8 blocks ...passed 00:40:05.772 Test: blockdev writev readv 30 x 1block ...passed 00:40:05.772 Test: blockdev writev readv block ...passed 00:40:05.772 Test: blockdev writev readv size > 128k ...passed 00:40:05.772 Test: blockdev writev readv size > 128k in two 
iovs ...passed 00:40:05.772 Test: blockdev comparev and writev ...passed 00:40:05.772 Test: blockdev nvme passthru rw ...passed 00:40:05.772 Test: blockdev nvme passthru vendor specific ...passed 00:40:05.772 Test: blockdev nvme admin passthru ...passed 00:40:05.772 Test: blockdev copy ...passed 00:40:05.772 00:40:05.772
00:40:05.772 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:40:05.772              suites      4      4    n/a      0        0
00:40:05.772               tests     92     92     92      0        0
00:40:05.772             asserts    520    520    520      0      n/a
00:40:05.772
00:40:05.772 Elapsed time = 1.488 seconds
00:40:05.772 0
00:40:05.772 04:35:14 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 2895215 00:40:05.772 04:35:14 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 2895215 ']' 00:40:05.772 04:35:14 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 2895215 00:40:05.772 04:35:14 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:40:06.032 04:35:14 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:06.032 04:35:14 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2895215 00:40:06.032 04:35:14 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:40:06.032 04:35:14 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:40:06.032 04:35:14 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2895215' killing process with pid 2895215 04:35:14 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@967 -- # kill 2895215 04:35:14 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@972 -- # wait 2895215 00:40:08.569 04:35:17 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:40:08.569 00:40:08.569 real 0m7.827s 
00:40:08.569 user 0m21.196s 00:40:08.569 sys 0m0.779s 00:40:08.569 04:35:17 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:08.570 04:35:17 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:40:08.570 ************************************ 00:40:08.570 END TEST bdev_bounds 00:40:08.570 ************************************ 00:40:08.570 04:35:17 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 0 00:40:08.570 04:35:17 blockdev_crypto_aesni -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '' 00:40:08.570 04:35:17 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:40:08.570 04:35:17 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:08.570 04:35:17 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:40:08.570 ************************************ 00:40:08.570 START TEST bdev_nbd 00:40:08.570 ************************************ 00:40:08.570 04:35:17 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '' 00:40:08.570 04:35:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:40:08.570 04:35:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:40:08.570 04:35:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:08.570 04:35:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:40:08.570 04:35:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 
00:40:08.570 04:35:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:40:08.570 04:35:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=4 00:40:08.570 04:35:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:40:08.570 04:35:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:40:08.570 04:35:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:40:08.570 04:35:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=4 00:40:08.570 04:35:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:40:08.570 04:35:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:40:08.570 04:35:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:40:08.570 04:35:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:40:08.570 04:35:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=2896546 00:40:08.570 04:35:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:40:08.570 04:35:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@316 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:40:08.570 04:35:17 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 2896546 /var/tmp/spdk-nbd.sock 00:40:08.570 04:35:17 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 
2896546 ']' 00:40:08.570 04:35:17 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:40:08.570 04:35:17 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:08.570 04:35:17 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:40:08.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:40:08.570 04:35:17 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:08.570 04:35:17 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:40:08.830 [2024-07-23 04:35:17.442315] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:40:08.830 [2024-07-23 04:35:17.442431] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:40:09.089 [2024-07-23 04:35:17.668782] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:09.349 [2024-07-23 04:35:17.958611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:09.349 [2024-07-23 04:35:17.980389] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: 
*NOTICE*: Using driver crypto_aesni_mb 00:40:09.349 [2024-07-23 04:35:17.988424] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:40:09.349 [2024-07-23 04:35:17.996452] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:40:09.608 [2024-07-23 04:35:18.379514] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:40:12.898 [2024-07-23 04:35:21.301979] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:40:12.898 [2024-07-23 04:35:21.302056] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:40:12.898 [2024-07-23 04:35:21.302078] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:40:12.898 [2024-07-23 04:35:21.309996] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:40:12.898 [2024-07-23 04:35:21.310035] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:40:12.898 [2024-07-23 04:35:21.310051] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:40:12.898 [2024-07-23 04:35:21.318032] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:40:12.898 [2024-07-23 04:35:21.318074] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:40:12.898 [2024-07-23 04:35:21.318089] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:40:12.898 [2024-07-23 04:35:21.326011] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:40:12.898 [2024-07-23 04:35:21.326068] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:40:12.898 [2024-07-23 04:35:21.326084] 
vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:40:13.158 04:35:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:13.158 04:35:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:40:13.158 04:35:21 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' 00:40:13.158 04:35:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:13.158 04:35:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:40:13.158 04:35:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:40:13.158 04:35:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' 00:40:13.158 04:35:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:13.158 04:35:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:40:13.158 04:35:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:40:13.158 04:35:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:40:13.158 04:35:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:40:13.158 04:35:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:40:13.158 04:35:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:40:13.158 04:35:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram 
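Each nbd_start_disk call in the trace is followed by a `waitfornbd` helper that greps /proc/partitions until the device name shows up, retrying up to 20 times. A minimal standalone sketch of that pattern (the helper name and retry bound come from the trace; the fake partitions table is an assumption so the sketch runs without real nbd devices):

```shell
#!/usr/bin/env bash
# Sketch of the waitfornbd pattern from the trace: poll a partitions
# table until the named device appears, giving up after 20 tries.
waitfornbd() {
    local nbd_name=$1 partitions=${2:-/proc/partitions} i
    for ((i = 1; i <= 20; i++)); do
        # grep -w matches the whole name, so nbd1 does not match nbd10
        grep -q -w "$nbd_name" "$partitions" && return 0
        sleep 0.1
    done
    return 1
}

# Demo against a fake table (real runs read /proc/partitions)
fake=$(mktemp)
printf '  43        0      65536 nbd0\n' > "$fake"
waitfornbd nbd0 "$fake" && found=yes || found=no
waitfornbd nbd9 "$fake" && ghost=yes || ghost=no
echo "nbd0: $found, nbd9: $ghost"   # prints: nbd0: yes, nbd9: no
rm -f "$fake"
```

The absent-device case burns the full retry budget (about two seconds here), which is why the real helper bounds the loop instead of waiting forever.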
00:40:13.417 04:35:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:40:13.417 04:35:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:40:13.417 04:35:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:40:13.417 04:35:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:40:13.417 04:35:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:40:13.417 04:35:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:40:13.417 04:35:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:40:13.417 04:35:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:40:13.417 04:35:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:40:13.417 04:35:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:40:13.417 04:35:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:40:13.417 04:35:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:13.417 1+0 records in 00:40:13.417 1+0 records out 00:40:13.417 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336756 s, 12.2 MB/s 00:40:13.417 04:35:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:40:13.417 04:35:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:40:13.417 04:35:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:40:13.417 04:35:22 blockdev_crypto_aesni.bdev_nbd -- 
common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:40:13.417 04:35:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:40:13.417 04:35:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:40:13.417 04:35:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:40:13.417 04:35:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 00:40:13.677 04:35:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:40:13.677 04:35:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:40:13.677 04:35:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:40:13.677 04:35:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:40:13.677 04:35:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:40:13.677 04:35:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:40:13.677 04:35:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:40:13.677 04:35:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:40:13.677 04:35:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:40:13.677 04:35:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:40:13.677 04:35:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:40:13.677 04:35:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:13.677 1+0 records in 00:40:13.677 1+0 records out 00:40:13.677 4096 bytes (4.1 kB, 4.0 KiB) 
copied, 0.000335436 s, 12.2 MB/s 00:40:13.677 04:35:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:40:13.677 04:35:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:40:13.677 04:35:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:40:13.677 04:35:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:40:13.677 04:35:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:40:13.677 04:35:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:40:13.677 04:35:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:40:13.677 04:35:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 00:40:13.936 04:35:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:40:13.936 04:35:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:40:13.936 04:35:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:40:13.936 04:35:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:40:13.936 04:35:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:40:13.937 04:35:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:40:13.937 04:35:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:40:13.937 04:35:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:40:13.937 04:35:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- 
# break 00:40:13.937 04:35:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:40:13.937 04:35:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:40:13.937 04:35:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:13.937 1+0 records in 00:40:13.937 1+0 records out 00:40:13.937 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360777 s, 11.4 MB/s 00:40:13.937 04:35:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:40:13.937 04:35:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:40:13.937 04:35:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:40:13.937 04:35:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:40:13.937 04:35:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:40:13.937 04:35:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:40:13.937 04:35:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:40:13.937 04:35:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram4 00:40:14.196 04:35:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:40:14.196 04:35:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:40:14.196 04:35:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:40:14.196 04:35:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 
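Once a device is present, the trace verifies it with a one-block `dd` from /dev/nbdX into spdk/test/bdev/nbdtest, checks the copy size with `stat -c %s`, and removes the file. The same verification, sketched with a temporary file standing in for the nbd device (dropping `iflag=direct` is an assumption, since direct I/O needs a block device or an O_DIRECT-capable filesystem):

```shell
#!/usr/bin/env bash
# Mirror the dd/stat/rm read-back check from the trace: copy one
# 4 KiB block from a "device" and confirm the copy is non-empty.
dev=$(mktemp)    # stand-in for /dev/nbd0
out=$(mktemp)    # stand-in for spdk/test/bdev/nbdtest
dd if=/dev/zero of="$dev" bs=4096 count=1 2>/dev/null
dd if="$dev" of="$out" bs=4096 count=1 2>/dev/null
size=$(stat -c %s "$out")
rm -f "$dev" "$out"
if [ "$size" != 0 ]; then
    echo "read back $size bytes"   # prints: read back 4096 bytes
fi
```

A zero-byte copy would mean the device exists in /proc/partitions but is not yet serving I/O, which is exactly the failure the non-zero-size check guards against.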
00:40:14.196 04:35:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:40:14.196 04:35:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:40:14.196 04:35:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:40:14.196 04:35:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:40:14.196 04:35:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:40:14.196 04:35:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:40:14.196 04:35:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:40:14.196 04:35:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:14.196 1+0 records in 00:40:14.196 1+0 records out 00:40:14.196 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000358391 s, 11.4 MB/s 00:40:14.196 04:35:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:40:14.196 04:35:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:40:14.196 04:35:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:40:14.196 04:35:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:40:14.196 04:35:22 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:40:14.196 04:35:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:40:14.196 04:35:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:40:14.196 04:35:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@118 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:40:14.456 04:35:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:40:14.456 { 00:40:14.456 "nbd_device": "/dev/nbd0", 00:40:14.456 "bdev_name": "crypto_ram" 00:40:14.456 }, 00:40:14.456 { 00:40:14.456 "nbd_device": "/dev/nbd1", 00:40:14.456 "bdev_name": "crypto_ram2" 00:40:14.456 }, 00:40:14.456 { 00:40:14.456 "nbd_device": "/dev/nbd2", 00:40:14.456 "bdev_name": "crypto_ram3" 00:40:14.456 }, 00:40:14.456 { 00:40:14.456 "nbd_device": "/dev/nbd3", 00:40:14.456 "bdev_name": "crypto_ram4" 00:40:14.456 } 00:40:14.456 ]' 00:40:14.456 04:35:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:40:14.456 04:35:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:40:14.456 { 00:40:14.456 "nbd_device": "/dev/nbd0", 00:40:14.456 "bdev_name": "crypto_ram" 00:40:14.456 }, 00:40:14.456 { 00:40:14.456 "nbd_device": "/dev/nbd1", 00:40:14.456 "bdev_name": "crypto_ram2" 00:40:14.456 }, 00:40:14.456 { 00:40:14.456 "nbd_device": "/dev/nbd2", 00:40:14.456 "bdev_name": "crypto_ram3" 00:40:14.456 }, 00:40:14.456 { 00:40:14.456 "nbd_device": "/dev/nbd3", 00:40:14.456 "bdev_name": "crypto_ram4" 00:40:14.456 } 00:40:14.456 ]' 00:40:14.456 04:35:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:40:14.456 04:35:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3' 00:40:14.456 04:35:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:14.456 04:35:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3') 00:40:14.456 04:35:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # local 
nbd_list 00:40:14.456 04:35:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:40:14.456 04:35:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:14.456 04:35:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:40:14.714 04:35:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:40:14.715 04:35:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:40:14.715 04:35:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:40:14.715 04:35:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:14.715 04:35:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:14.715 04:35:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:40:14.715 04:35:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:40:14.715 04:35:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:40:14.715 04:35:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:14.715 04:35:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:40:14.973 04:35:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:40:14.973 04:35:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:40:14.973 04:35:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:40:14.973 04:35:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:14.973 04:35:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:40:14.973 04:35:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:40:14.973 04:35:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:40:14.973 04:35:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:40:14.973 04:35:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:14.973 04:35:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:40:15.231 04:35:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:40:15.231 04:35:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:40:15.231 04:35:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:40:15.231 04:35:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:15.231 04:35:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:15.231 04:35:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:40:15.231 04:35:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:40:15.231 04:35:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:40:15.231 04:35:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:15.231 04:35:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:40:15.490 04:35:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:40:15.490 04:35:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:40:15.490 04:35:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd3 00:40:15.490 04:35:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:15.490 04:35:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:15.490 04:35:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:40:15.490 04:35:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:40:15.490 04:35:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:40:15.490 04:35:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:40:15.490 04:35:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:15.490 04:35:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:40:15.490 04:35:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:40:15.749 04:35:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:40:15.749 04:35:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:40:15.749 04:35:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:40:15.749 04:35:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:40:15.749 04:35:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:40:15.749 04:35:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:40:15.749 04:35:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:40:15.749 04:35:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:40:15.749 04:35:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:40:15.749 04:35:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 
00:40:15.749 04:35:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:40:15.749 04:35:24 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:40:15.749 04:35:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:15.749 04:35:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:40:15.749 04:35:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:40:15.749 04:35:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:40:15.749 04:35:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:40:15.749 04:35:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:40:15.749 04:35:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:15.749 04:35:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:40:15.749 04:35:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:40:15.749 04:35:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:40:15.749 04:35:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:40:15.749 04:35:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:40:15.749 04:35:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:40:15.749 04:35:24 blockdev_crypto_aesni.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:40:15.749 04:35:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0 00:40:16.007 /dev/nbd0 00:40:16.007 04:35:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:40:16.007 04:35:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:40:16.007 04:35:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:40:16.007 04:35:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:40:16.007 04:35:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:40:16.007 04:35:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:40:16.007 04:35:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:40:16.007 04:35:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:40:16.007 04:35:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:40:16.007 04:35:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:40:16.007 04:35:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:16.008 1+0 records in 00:40:16.008 1+0 records out 00:40:16.008 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335583 s, 12.2 MB/s 00:40:16.008 04:35:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:40:16.008 04:35:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:40:16.008 04:35:24 blockdev_crypto_aesni.bdev_nbd -- 
common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:40:16.008 04:35:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:40:16.008 04:35:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:40:16.008 04:35:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:40:16.008 04:35:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:40:16.008 04:35:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 /dev/nbd1 00:40:16.266 /dev/nbd1 00:40:16.266 04:35:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:40:16.266 04:35:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:40:16.266 04:35:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:40:16.266 04:35:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:40:16.266 04:35:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:40:16.266 04:35:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:40:16.266 04:35:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:40:16.266 04:35:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:40:16.266 04:35:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:40:16.266 04:35:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:40:16.266 04:35:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:40:16.266 1+0 records in 00:40:16.266 1+0 records out 00:40:16.266 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000339389 s, 12.1 MB/s 00:40:16.266 04:35:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:40:16.266 04:35:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:40:16.266 04:35:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:40:16.266 04:35:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:40:16.266 04:35:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:40:16.266 04:35:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:40:16.266 04:35:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:40:16.266 04:35:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 /dev/nbd10 00:40:16.524 /dev/nbd10 00:40:16.524 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:40:16.524 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:40:16.524 04:35:25 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:40:16.524 04:35:25 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:40:16.524 04:35:25 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:40:16.524 04:35:25 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:40:16.524 04:35:25 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:40:16.524 04:35:25 blockdev_crypto_aesni.bdev_nbd -- 
common/autotest_common.sh@871 -- # break 00:40:16.524 04:35:25 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:40:16.524 04:35:25 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:40:16.524 04:35:25 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:16.524 1+0 records in 00:40:16.524 1+0 records out 00:40:16.524 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000354486 s, 11.6 MB/s 00:40:16.524 04:35:25 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:40:16.524 04:35:25 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:40:16.524 04:35:25 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:40:16.524 04:35:25 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:40:16.524 04:35:25 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:40:16.524 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:40:16.524 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:40:16.524 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram4 /dev/nbd11 00:40:16.783 /dev/nbd11 00:40:16.783 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:40:16.783 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:40:16.783 04:35:25 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:40:16.783 04:35:25 
blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:40:16.783 04:35:25 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:40:16.783 04:35:25 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:40:16.783 04:35:25 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:40:16.783 04:35:25 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:40:16.783 04:35:25 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:40:16.783 04:35:25 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:40:16.783 04:35:25 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:16.783 1+0 records in 00:40:16.783 1+0 records out 00:40:16.783 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000378619 s, 10.8 MB/s 00:40:16.783 04:35:25 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:40:16.783 04:35:25 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:40:16.783 04:35:25 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:40:16.783 04:35:25 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:40:16.783 04:35:25 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:40:16.783 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:40:16.783 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:40:16.783 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:40:16.783 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:16.783 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:40:17.042 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:40:17.042 { 00:40:17.042 "nbd_device": "/dev/nbd0", 00:40:17.042 "bdev_name": "crypto_ram" 00:40:17.042 }, 00:40:17.042 { 00:40:17.042 "nbd_device": "/dev/nbd1", 00:40:17.042 "bdev_name": "crypto_ram2" 00:40:17.042 }, 00:40:17.042 { 00:40:17.042 "nbd_device": "/dev/nbd10", 00:40:17.042 "bdev_name": "crypto_ram3" 00:40:17.042 }, 00:40:17.042 { 00:40:17.042 "nbd_device": "/dev/nbd11", 00:40:17.042 "bdev_name": "crypto_ram4" 00:40:17.042 } 00:40:17.042 ]' 00:40:17.042 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:40:17.042 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:40:17.042 { 00:40:17.042 "nbd_device": "/dev/nbd0", 00:40:17.042 "bdev_name": "crypto_ram" 00:40:17.042 }, 00:40:17.042 { 00:40:17.042 "nbd_device": "/dev/nbd1", 00:40:17.042 "bdev_name": "crypto_ram2" 00:40:17.042 }, 00:40:17.042 { 00:40:17.042 "nbd_device": "/dev/nbd10", 00:40:17.042 "bdev_name": "crypto_ram3" 00:40:17.042 }, 00:40:17.042 { 00:40:17.042 "nbd_device": "/dev/nbd11", 00:40:17.042 "bdev_name": "crypto_ram4" 00:40:17.042 } 00:40:17.042 ]' 00:40:17.042 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:40:17.042 /dev/nbd1 00:40:17.042 /dev/nbd10 00:40:17.042 /dev/nbd11' 00:40:17.042 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:40:17.042 /dev/nbd1 00:40:17.042 /dev/nbd10 00:40:17.042 /dev/nbd11' 00:40:17.042 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c 
/dev/nbd 00:40:17.042 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=4 00:40:17.042 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 4 00:40:17.042 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=4 00:40:17.042 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 4 -ne 4 ']' 00:40:17.042 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' write 00:40:17.042 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:40:17.042 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:40:17.042 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:40:17.042 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:40:17.042 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:40:17.043 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:40:17.043 256+0 records in 00:40:17.043 256+0 records out 00:40:17.043 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113377 s, 92.5 MB/s 00:40:17.043 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:40:17.043 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:40:17.043 256+0 records in 00:40:17.043 256+0 records out 00:40:17.043 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0483049 s, 21.7 MB/s 00:40:17.043 04:35:25 blockdev_crypto_aesni.bdev_nbd -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:40:17.043 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:40:17.043 256+0 records in 00:40:17.043 256+0 records out 00:40:17.043 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0544263 s, 19.3 MB/s 00:40:17.043 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:40:17.043 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:40:17.302 256+0 records in 00:40:17.302 256+0 records out 00:40:17.302 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0460317 s, 22.8 MB/s 00:40:17.302 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:40:17.302 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:40:17.302 256+0 records in 00:40:17.302 256+0 records out 00:40:17.302 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.04156 s, 25.2 MB/s 00:40:17.302 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' verify 00:40:17.302 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:40:17.302 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:40:17.302 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:40:17.302 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:40:17.302 04:35:25 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:40:17.302 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:40:17.302 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:40:17.302 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:40:17.302 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:40:17.302 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:40:17.302 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:40:17.302 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd10 00:40:17.302 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:40:17.302 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd11 00:40:17.302 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:40:17.302 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:40:17.303 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:17.303 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:40:17.303 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 
-- # local nbd_list 00:40:17.303 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:40:17.303 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:17.303 04:35:25 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:40:17.562 04:35:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:40:17.562 04:35:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:40:17.562 04:35:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:40:17.562 04:35:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:17.562 04:35:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:17.562 04:35:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:40:17.562 04:35:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:40:17.562 04:35:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:40:17.562 04:35:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:17.562 04:35:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:40:17.821 04:35:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:40:17.821 04:35:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:40:17.821 04:35:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:40:17.821 04:35:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:17.821 04:35:26 blockdev_crypto_aesni.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:17.821 04:35:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:40:17.821 04:35:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:40:17.821 04:35:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:40:17.821 04:35:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:17.821 04:35:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:40:18.154 04:35:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:40:18.154 04:35:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:40:18.154 04:35:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:40:18.154 04:35:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:18.154 04:35:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:18.154 04:35:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:40:18.154 04:35:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:40:18.154 04:35:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:40:18.154 04:35:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:18.154 04:35:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:40:18.154 04:35:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:40:18.154 04:35:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:40:18.154 04:35:26 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:40:18.154 04:35:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:18.154 04:35:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:18.154 04:35:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:40:18.154 04:35:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:40:18.154 04:35:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:40:18.154 04:35:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:40:18.154 04:35:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:18.154 04:35:26 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:40:18.412 04:35:27 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:40:18.412 04:35:27 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:40:18.413 04:35:27 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:40:18.671 04:35:27 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:40:18.671 04:35:27 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:40:18.671 04:35:27 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:40:18.671 04:35:27 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:40:18.671 04:35:27 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:40:18.671 04:35:27 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:40:18.671 04:35:27 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:40:18.671 04:35:27 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:40:18.671 04:35:27 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:40:18.671 04:35:27 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:40:18.672 04:35:27 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:18.672 04:35:27 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:40:18.672 04:35:27 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:40:18.672 04:35:27 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:40:18.672 04:35:27 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:40:18.672 malloc_lvol_verify 00:40:18.930 04:35:27 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:40:18.930 894d897f-ef74-495d-83ed-f11903a0802f 00:40:18.930 04:35:27 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:40:19.189 e21a349e-5238-4904-9fd5-79112c654ff6 00:40:19.189 04:35:27 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:40:19.448 /dev/nbd0 00:40:19.448 04:35:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:40:19.448 mke2fs 1.46.5 (30-Dec-2021) 00:40:19.448 Discarding device blocks: 
0/4096 done 00:40:19.448 Creating filesystem with 4096 1k blocks and 1024 inodes 00:40:19.448 00:40:19.448 Allocating group tables: 0/1 done 00:40:19.448 Writing inode tables: 0/1 done 00:40:19.448 Creating journal (1024 blocks): done 00:40:19.448 Writing superblocks and filesystem accounting information: 0/1 done 00:40:19.448 00:40:19.448 04:35:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:40:19.448 04:35:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:40:19.448 04:35:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:19.448 04:35:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:40:19.448 04:35:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:40:19.448 04:35:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:40:19.448 04:35:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:19.448 04:35:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:40:19.707 04:35:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:40:19.707 04:35:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:40:19.707 04:35:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:40:19.707 04:35:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:19.707 04:35:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:19.707 04:35:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:40:19.707 04:35:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:40:19.707 
04:35:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:40:19.707 04:35:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:40:19.707 04:35:28 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:40:19.707 04:35:28 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 2896546 00:40:19.707 04:35:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 2896546 ']' 00:40:19.707 04:35:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 2896546 00:40:19.707 04:35:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:40:19.707 04:35:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:19.707 04:35:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2896546 00:40:19.708 04:35:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:40:19.708 04:35:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:40:19.708 04:35:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2896546' 00:40:19.708 killing process with pid 2896546 00:40:19.708 04:35:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@967 -- # kill 2896546 00:40:19.708 04:35:28 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@972 -- # wait 2896546 00:40:23.033 04:35:31 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:40:23.033 00:40:23.033 real 0m13.911s 00:40:23.033 user 0m16.830s 00:40:23.033 sys 0m4.078s 00:40:23.033 04:35:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:23.033 04:35:31 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:40:23.033 ************************************ 
00:40:23.033 END TEST bdev_nbd 00:40:23.033 ************************************ 00:40:23.033 04:35:31 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 0 00:40:23.033 04:35:31 blockdev_crypto_aesni -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:40:23.033 04:35:31 blockdev_crypto_aesni -- bdev/blockdev.sh@763 -- # '[' crypto_aesni = nvme ']' 00:40:23.033 04:35:31 blockdev_crypto_aesni -- bdev/blockdev.sh@763 -- # '[' crypto_aesni = gpt ']' 00:40:23.033 04:35:31 blockdev_crypto_aesni -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:40:23.033 04:35:31 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:40:23.033 04:35:31 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:23.033 04:35:31 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:40:23.033 ************************************ 00:40:23.033 START TEST bdev_fio 00:40:23.033 ************************************ 00:40:23.033 04:35:31 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:40:23.033 04:35:31 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:40:23.033 04:35:31 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:40:23.033 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:40:23.033 04:35:31 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:40:23.033 04:35:31 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:40:23.033 04:35:31 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:40:23.033 04:35:31 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:40:23.033 04:35:31 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:40:23.033 04:35:31 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:40:23.033 04:35:31 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:40:23.033 04:35:31 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:40:23.033 04:35:31 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:40:23.033 04:35:31 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:40:23.033 04:35:31 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:40:23.033 04:35:31 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:40:23.033 04:35:31 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:40:23.033 04:35:31 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:40:23.033 04:35:31 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:40:23.033 04:35:31 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:40:23.033 04:35:31 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:40:23.033 04:35:31 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:40:23.033 04:35:31 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:40:23.033 04:35:31 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:40:23.033 04:35:31 blockdev_crypto_aesni.bdev_fio -- 
common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:40:23.033 04:35:31 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:40:23.033 04:35:31 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram]' 00:40:23.033 04:35:31 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram 00:40:23.033 04:35:31 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:40:23.033 04:35:31 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram2]' 00:40:23.033 04:35:31 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram2 00:40:23.033 04:35:31 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:40:23.033 04:35:31 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram3]' 00:40:23.033 04:35:31 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram3 00:40:23.033 04:35:31 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:40:23.033 04:35:31 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram4]' 00:40:23.033 04:35:31 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram4 00:40:23.033 04:35:31 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:40:23.033 04:35:31 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:40:23.033 04:35:31 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:40:23.033 04:35:31 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:23.033 04:35:31 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:40:23.033 ************************************ 00:40:23.033 START TEST bdev_fio_rw_verify 00:40:23.033 ************************************ 00:40:23.034 04:35:31 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:40:23.034 04:35:31 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:40:23.034 04:35:31 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:40:23.034 04:35:31 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:23.034 04:35:31 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:40:23.034 04:35:31 
blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:40:23.034 04:35:31 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:40:23.034 04:35:31 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:40:23.034 04:35:31 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:23.034 04:35:31 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:40:23.034 04:35:31 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:40:23.034 04:35:31 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:23.034 04:35:31 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:40:23.034 04:35:31 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:40:23.034 04:35:31 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:40:23.034 04:35:31 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:23.034 04:35:31 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:40:23.299 job_crypto_ram: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:40:23.300 job_crypto_ram2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:40:23.300 job_crypto_ram3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:40:23.300 job_crypto_ram4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:40:23.300 fio-3.35 00:40:23.300 Starting 4 threads 00:40:23.300 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:23.300 EAL: Requested device 0000:3d:01.0 cannot be used 00:40:23.300 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:23.300 EAL: Requested device 0000:3d:01.1 cannot be used 00:40:23.300 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:23.300 EAL: Requested device 0000:3d:01.2 cannot be used 00:40:23.300 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:23.300 EAL: Requested device 0000:3d:01.3 cannot be used 00:40:23.300 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:23.300 EAL: Requested device 0000:3d:01.4 cannot be used 00:40:23.300 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:23.300 EAL: Requested device 0000:3d:01.5 cannot be used 00:40:23.300 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:23.300 EAL: Requested device 0000:3d:01.6 cannot be used 00:40:23.300 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:23.300 EAL: Requested device 0000:3d:01.7 cannot be used 00:40:23.300 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:23.300 EAL: Requested device 0000:3d:02.0 cannot be used 00:40:23.300 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:40:23.300 EAL: Requested device 0000:3d:02.1 cannot be used 00:40:23.300 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:23.300 EAL: Requested device 0000:3d:02.2 cannot be used 00:40:23.300 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:23.300 EAL: Requested device 0000:3d:02.3 cannot be used 00:40:23.300 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:23.300 EAL: Requested device 0000:3d:02.4 cannot be used 00:40:23.300 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:23.300 EAL: Requested device 0000:3d:02.5 cannot be used 00:40:23.300 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:23.300 EAL: Requested device 0000:3d:02.6 cannot be used 00:40:23.300 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:23.300 EAL: Requested device 0000:3d:02.7 cannot be used 00:40:23.300 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:23.300 EAL: Requested device 0000:3f:01.0 cannot be used 00:40:23.300 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:23.300 EAL: Requested device 0000:3f:01.1 cannot be used 00:40:23.300 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:23.300 EAL: Requested device 0000:3f:01.2 cannot be used 00:40:23.300 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:23.300 EAL: Requested device 0000:3f:01.3 cannot be used 00:40:23.300 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:23.300 EAL: Requested device 0000:3f:01.4 cannot be used 00:40:23.300 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:23.300 EAL: Requested device 0000:3f:01.5 cannot be used 00:40:23.300 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:23.300 EAL: Requested device 0000:3f:01.6 cannot be used 00:40:23.300 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:23.300 
EAL: Requested device 0000:3f:01.7 cannot be used 00:40:23.300 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:23.300 EAL: Requested device 0000:3f:02.0 cannot be used 00:40:23.300 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:23.300 EAL: Requested device 0000:3f:02.1 cannot be used 00:40:23.300 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:23.300 EAL: Requested device 0000:3f:02.2 cannot be used 00:40:23.300 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:23.300 EAL: Requested device 0000:3f:02.3 cannot be used 00:40:23.300 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:23.300 EAL: Requested device 0000:3f:02.4 cannot be used 00:40:23.300 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:23.300 EAL: Requested device 0000:3f:02.5 cannot be used 00:40:23.300 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:23.300 EAL: Requested device 0000:3f:02.6 cannot be used 00:40:23.300 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:23.300 EAL: Requested device 0000:3f:02.7 cannot be used 00:40:38.170 00:40:38.170 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=2899777: Tue Jul 23 04:35:45 2024 00:40:38.170 read: IOPS=22.3k, BW=86.9MiB/s (91.1MB/s)(869MiB/10001msec) 00:40:38.170 slat (usec): min=18, max=408, avg=58.44, stdev=27.14 00:40:38.170 clat (usec): min=14, max=1291, avg=330.88, stdev=193.07 00:40:38.170 lat (usec): min=52, max=1416, avg=389.32, stdev=208.57 00:40:38.170 clat percentiles (usec): 00:40:38.170 | 50.000th=[ 289], 99.000th=[ 914], 99.900th=[ 1057], 99.990th=[ 1139], 00:40:38.170 | 99.999th=[ 1205] 00:40:38.170 write: IOPS=24.5k, BW=95.6MiB/s (100MB/s)(931MiB/9742msec); 0 zone resets 00:40:38.170 slat (usec): min=27, max=297, avg=70.82, stdev=25.95 00:40:38.170 clat (usec): min=44, max=2682, avg=396.98, stdev=227.49 00:40:38.170 lat (usec): min=90, max=2826, avg=467.81, 
stdev=241.59 00:40:38.170 clat percentiles (usec): 00:40:38.170 | 50.000th=[ 359], 99.000th=[ 1090], 99.900th=[ 1336], 99.990th=[ 1483], 00:40:38.170 | 99.999th=[ 2147] 00:40:38.170 bw ( KiB/s): min=77568, max=121064, per=97.31%, avg=95237.05, stdev=2616.68, samples=76 00:40:38.170 iops : min=19392, max=30266, avg=23809.26, stdev=654.17, samples=76 00:40:38.170 lat (usec) : 20=0.01%, 50=0.01%, 100=5.60%, 250=28.96%, 500=43.50% 00:40:38.170 lat (usec) : 750=15.65%, 1000=5.04% 00:40:38.170 lat (msec) : 2=1.24%, 4=0.01% 00:40:38.170 cpu : usr=99.27%, sys=0.24%, ctx=56, majf=0, minf=23983 00:40:38.170 IO depths : 1=10.1%, 2=25.6%, 4=51.2%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:38.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:38.170 complete : 0=0.0%, 4=88.7%, 8=11.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:38.170 issued rwts: total=222542,238350,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:38.170 latency : target=0, window=0, percentile=100.00%, depth=8 00:40:38.170 00:40:38.170 Run status group 0 (all jobs): 00:40:38.170 READ: bw=86.9MiB/s (91.1MB/s), 86.9MiB/s-86.9MiB/s (91.1MB/s-91.1MB/s), io=869MiB (912MB), run=10001-10001msec 00:40:38.170 WRITE: bw=95.6MiB/s (100MB/s), 95.6MiB/s-95.6MiB/s (100MB/s-100MB/s), io=931MiB (976MB), run=9742-9742msec 00:40:40.073 ----------------------------------------------------- 00:40:40.073 Suppressions used: 00:40:40.073 count bytes template 00:40:40.073 4 47 /usr/src/fio/parse.c 00:40:40.073 1617 155232 /usr/src/fio/iolog.c 00:40:40.073 1 8 libtcmalloc_minimal.so 00:40:40.073 1 904 libcrypto.so 00:40:40.073 ----------------------------------------------------- 00:40:40.073 00:40:40.073 00:40:40.073 real 0m17.222s 00:40:40.073 user 0m57.891s 00:40:40.073 sys 0m0.991s 00:40:40.073 04:35:48 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:40.073 04:35:48 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 
-- # set +x 00:40:40.073 ************************************ 00:40:40.073 END TEST bdev_fio_rw_verify 00:40:40.073 ************************************ 00:40:40.073 04:35:48 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:40:40.073 04:35:48 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:40:40.073 04:35:48 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:40:40.073 04:35:48 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:40:40.073 04:35:48 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:40:40.073 04:35:48 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:40:40.073 04:35:48 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:40:40.073 04:35:48 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:40:40.073 04:35:48 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:40:40.073 04:35:48 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:40:40.074 04:35:48 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:40:40.074 04:35:48 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:40:40.074 04:35:48 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:40:40.074 04:35:48 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:40:40.074 04:35:48 blockdev_crypto_aesni.bdev_fio 
-- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:40:40.074 04:35:48 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:40:40.074 04:35:48 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:40:40.074 04:35:48 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:40:40.074 04:35:48 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "bb78b5ce-d493-534b-bcc8-5160a6f4f0aa"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "bb78b5ce-d493-534b-bcc8-5160a6f4f0aa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_aesni_cbc_1"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "9dadcafd-dec6-52d1-8244-8c60852f8d04"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "9dadcafd-dec6-52d1-8244-8c60852f8d04",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' 
' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_aesni_cbc_2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "ede54529-69de-5e5a-beb3-61dab80d7e85"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "ede54529-69de-5e5a-beb3-61dab80d7e85",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_aesni_cbc_3"' ' }' ' }' '}' '{' ' "name": "crypto_ram4",' ' "aliases": [' ' "a7ff7627-4840-59eb-a927-004ccdfe5be9"' ' 
],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "a7ff7627-4840-59eb-a927-004ccdfe5be9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram4",' ' "key_name": "test_dek_aesni_cbc_4"' ' }' ' }' '}' 00:40:40.074 04:35:48 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n crypto_ram 00:40:40.074 crypto_ram2 00:40:40.074 crypto_ram3 00:40:40.074 crypto_ram4 ]] 00:40:40.074 04:35:48 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:40:40.074 04:35:48 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "bb78b5ce-d493-534b-bcc8-5160a6f4f0aa"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "bb78b5ce-d493-534b-bcc8-5160a6f4f0aa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' 
"nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_aesni_cbc_1"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "9dadcafd-dec6-52d1-8244-8c60852f8d04"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "9dadcafd-dec6-52d1-8244-8c60852f8d04",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_aesni_cbc_2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "ede54529-69de-5e5a-beb3-61dab80d7e85"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "ede54529-69de-5e5a-beb3-61dab80d7e85",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' 
"rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_aesni_cbc_3"' ' }' ' }' '}' '{' ' "name": "crypto_ram4",' ' "aliases": [' ' "a7ff7627-4840-59eb-a927-004ccdfe5be9"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "a7ff7627-4840-59eb-a927-004ccdfe5be9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram4",' ' "key_name": "test_dek_aesni_cbc_4"' ' }' ' }' '}' 00:40:40.074 
04:35:48 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:40:40.074 04:35:48 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram]' 00:40:40.074 04:35:48 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram 00:40:40.074 04:35:48 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:40:40.074 04:35:48 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram2]' 00:40:40.074 04:35:48 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram2 00:40:40.074 04:35:48 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:40:40.074 04:35:48 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram3]' 00:40:40.074 04:35:48 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram3 00:40:40.074 04:35:48 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:40:40.074 04:35:48 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram4]' 00:40:40.074 04:35:48 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram4 00:40:40.074 04:35:48 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 
--aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:40:40.074 04:35:48 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:40:40.074 04:35:48 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:40.074 04:35:48 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:40:40.074 ************************************ 00:40:40.074 START TEST bdev_fio_trim 00:40:40.074 ************************************ 00:40:40.074 04:35:48 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:40:40.075 04:35:48 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:40:40.075 04:35:48 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:40:40.075 04:35:48 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:40.075 04:35:48 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:40:40.075 04:35:48 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:40:40.075 04:35:48 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:40:40.075 04:35:48 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:40:40.075 04:35:48 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:40.075 04:35:48 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:40:40.075 04:35:48 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:40:40.075 04:35:48 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:40.333 04:35:48 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:40:40.333 04:35:48 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:40:40.333 04:35:48 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1347 -- # break 00:40:40.333 04:35:48 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:40.333 04:35:48 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:40:40.591 job_crypto_ram: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:40:40.591 job_crypto_ram2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:40:40.591 job_crypto_ram3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:40:40.591 job_crypto_ram4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:40:40.591 fio-3.35 00:40:40.591 Starting 4 threads 00:40:40.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:40.850 EAL: Requested device 0000:3d:01.0 cannot be used 00:40:40.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:40.850 EAL: Requested device 0000:3d:01.1 cannot be used 00:40:40.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:40.850 EAL: Requested device 0000:3d:01.2 cannot be used 00:40:40.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:40.850 EAL: Requested device 0000:3d:01.3 cannot be used 00:40:40.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:40.850 EAL: Requested device 0000:3d:01.4 cannot be used 00:40:40.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:40.850 EAL: Requested device 0000:3d:01.5 cannot be used 00:40:40.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:40.850 EAL: Requested device 0000:3d:01.6 cannot be used 00:40:40.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:40.850 EAL: Requested device 0000:3d:01.7 cannot be used 00:40:40.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:40.850 EAL: Requested device 0000:3d:02.0 cannot be used 00:40:40.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:40.850 EAL: Requested device 0000:3d:02.1 cannot be used 00:40:40.850 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:40:40.850 EAL: Requested device 0000:3d:02.2 cannot be used 00:40:40.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:40.850 EAL: Requested device 0000:3d:02.3 cannot be used 00:40:40.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:40.850 EAL: Requested device 0000:3d:02.4 cannot be used 00:40:40.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:40.850 EAL: Requested device 0000:3d:02.5 cannot be used 00:40:40.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:40.850 EAL: Requested device 0000:3d:02.6 cannot be used 00:40:40.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:40.850 EAL: Requested device 0000:3d:02.7 cannot be used 00:40:40.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:40.850 EAL: Requested device 0000:3f:01.0 cannot be used 00:40:40.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:40.850 EAL: Requested device 0000:3f:01.1 cannot be used 00:40:40.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:40.850 EAL: Requested device 0000:3f:01.2 cannot be used 00:40:40.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:40.850 EAL: Requested device 0000:3f:01.3 cannot be used 00:40:40.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:40.850 EAL: Requested device 0000:3f:01.4 cannot be used 00:40:40.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:40.850 EAL: Requested device 0000:3f:01.5 cannot be used 00:40:40.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:40.850 EAL: Requested device 0000:3f:01.6 cannot be used 00:40:40.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:40.850 EAL: Requested device 0000:3f:01.7 cannot be used 00:40:40.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:40.850 EAL: 
Requested device 0000:3f:02.0 cannot be used 00:40:40.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:40.850 EAL: Requested device 0000:3f:02.1 cannot be used 00:40:40.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:40.850 EAL: Requested device 0000:3f:02.2 cannot be used 00:40:40.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:40.850 EAL: Requested device 0000:3f:02.3 cannot be used 00:40:40.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:40.850 EAL: Requested device 0000:3f:02.4 cannot be used 00:40:40.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:40.850 EAL: Requested device 0000:3f:02.5 cannot be used 00:40:40.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:40.850 EAL: Requested device 0000:3f:02.6 cannot be used 00:40:40.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:40.850 EAL: Requested device 0000:3f:02.7 cannot be used 00:40:55.764 00:40:55.764 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=2902797: Tue Jul 23 04:36:03 2024 00:40:55.764 write: IOPS=34.0k, BW=133MiB/s (139MB/s)(1329MiB/10001msec); 0 zone resets 00:40:55.764 slat (usec): min=17, max=541, avg=65.58, stdev=31.47 00:40:55.764 clat (usec): min=35, max=1940, avg=303.34, stdev=175.81 00:40:55.764 lat (usec): min=53, max=2149, avg=368.92, stdev=194.81 00:40:55.764 clat percentiles (usec): 00:40:55.764 | 50.000th=[ 258], 99.000th=[ 857], 99.900th=[ 963], 99.990th=[ 1074], 00:40:55.764 | 99.999th=[ 1598] 00:40:55.764 bw ( KiB/s): min=133520, max=158800, per=100.00%, avg=136213.05, stdev=1704.85, samples=76 00:40:55.764 iops : min=33380, max=39700, avg=34053.26, stdev=426.21, samples=76 00:40:55.764 trim: IOPS=34.0k, BW=133MiB/s (139MB/s)(1329MiB/10001msec); 0 zone resets 00:40:55.764 slat (usec): min=6, max=422, avg=17.50, stdev= 6.56 00:40:55.764 clat (usec): min=53, max=1184, avg=285.51, stdev=126.60 
00:40:55.764 lat (usec): min=59, max=1224, avg=303.01, stdev=129.37 00:40:55.764 clat percentiles (usec): 00:40:55.764 | 50.000th=[ 265], 99.000th=[ 594], 99.900th=[ 685], 99.990th=[ 799], 00:40:55.764 | 99.999th=[ 1074] 00:40:55.764 bw ( KiB/s): min=133512, max=158824, per=100.00%, avg=136214.32, stdev=1706.10, samples=76 00:40:55.764 iops : min=33378, max=39706, avg=34053.58, stdev=426.52, samples=76 00:40:55.764 lat (usec) : 50=0.01%, 100=4.66%, 250=42.01%, 500=42.93%, 750=9.04% 00:40:55.764 lat (usec) : 1000=1.34% 00:40:55.764 lat (msec) : 2=0.02% 00:40:55.764 cpu : usr=99.39%, sys=0.06%, ctx=78, majf=0, minf=7679 00:40:55.764 IO depths : 1=7.5%, 2=26.4%, 4=52.9%, 8=13.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:55.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:55.764 complete : 0=0.0%, 4=88.3%, 8=11.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:55.764 issued rwts: total=0,340238,340239,0 short=0,0,0,0 dropped=0,0,0,0 00:40:55.764 latency : target=0, window=0, percentile=100.00%, depth=8 00:40:55.764 00:40:55.764 Run status group 0 (all jobs): 00:40:55.764 WRITE: bw=133MiB/s (139MB/s), 133MiB/s-133MiB/s (139MB/s-139MB/s), io=1329MiB (1394MB), run=10001-10001msec 00:40:55.764 TRIM: bw=133MiB/s (139MB/s), 133MiB/s-133MiB/s (139MB/s-139MB/s), io=1329MiB (1394MB), run=10001-10001msec 00:40:57.667 ----------------------------------------------------- 00:40:57.667 Suppressions used: 00:40:57.667 count bytes template 00:40:57.667 4 47 /usr/src/fio/parse.c 00:40:57.667 1 8 libtcmalloc_minimal.so 00:40:57.667 1 904 libcrypto.so 00:40:57.667 ----------------------------------------------------- 00:40:57.667 00:40:57.667 00:40:57.667 real 0m17.322s 00:40:57.667 user 0m57.938s 00:40:57.667 sys 0m0.863s 00:40:57.667 04:36:06 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:57.667 04:36:06 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:40:57.667 
************************************ 00:40:57.667 END TEST bdev_fio_trim 00:40:57.667 ************************************ 00:40:57.667 04:36:06 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:40:57.667 04:36:06 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f 00:40:57.667 04:36:06 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:40:57.667 04:36:06 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@369 -- # popd 00:40:57.667 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:40:57.667 04:36:06 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:40:57.667 00:40:57.667 real 0m34.899s 00:40:57.667 user 1m56.010s 00:40:57.667 sys 0m2.052s 00:40:57.667 04:36:06 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:57.667 04:36:06 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:40:57.667 ************************************ 00:40:57.667 END TEST bdev_fio 00:40:57.667 ************************************ 00:40:57.667 04:36:06 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 0 00:40:57.667 04:36:06 blockdev_crypto_aesni -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:40:57.667 04:36:06 blockdev_crypto_aesni -- bdev/blockdev.sh@776 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:40:57.667 04:36:06 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:40:57.667 04:36:06 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:57.667 04:36:06 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:40:57.667 ************************************ 00:40:57.667 START TEST 
bdev_verify 00:40:57.667 ************************************ 00:40:57.667 04:36:06 blockdev_crypto_aesni.bdev_verify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:40:57.667 [2024-07-23 04:36:06.409712] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:40:57.667 [2024-07-23 04:36:06.409830] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2905464 ] 00:40:57.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:57.926 EAL: Requested device 0000:3d:01.0 cannot be used 00:40:57.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:57.926 EAL: Requested device 0000:3d:01.1 cannot be used 00:40:57.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:57.926 EAL: Requested device 0000:3d:01.2 cannot be used 00:40:57.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:57.926 EAL: Requested device 0000:3d:01.3 cannot be used 00:40:57.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:57.926 EAL: Requested device 0000:3d:01.4 cannot be used 00:40:57.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:57.926 EAL: Requested device 0000:3d:01.5 cannot be used 00:40:57.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:57.926 EAL: Requested device 0000:3d:01.6 cannot be used 00:40:57.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:57.926 EAL: Requested device 0000:3d:01.7 cannot be used 00:40:57.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:57.926 EAL: Requested 
device 0000:3d:02.0 cannot be used 00:40:57.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:57.926 EAL: Requested device 0000:3d:02.1 cannot be used 00:40:57.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:57.926 EAL: Requested device 0000:3d:02.2 cannot be used 00:40:57.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:57.926 EAL: Requested device 0000:3d:02.3 cannot be used 00:40:57.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:57.926 EAL: Requested device 0000:3d:02.4 cannot be used 00:40:57.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:57.926 EAL: Requested device 0000:3d:02.5 cannot be used 00:40:57.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:57.926 EAL: Requested device 0000:3d:02.6 cannot be used 00:40:57.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:57.926 EAL: Requested device 0000:3d:02.7 cannot be used 00:40:57.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:57.926 EAL: Requested device 0000:3f:01.0 cannot be used 00:40:57.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:57.926 EAL: Requested device 0000:3f:01.1 cannot be used 00:40:57.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:57.926 EAL: Requested device 0000:3f:01.2 cannot be used 00:40:57.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:57.926 EAL: Requested device 0000:3f:01.3 cannot be used 00:40:57.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:57.926 EAL: Requested device 0000:3f:01.4 cannot be used 00:40:57.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:57.926 EAL: Requested device 0000:3f:01.5 cannot be used 00:40:57.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:57.926 EAL: Requested device 0000:3f:01.6 
cannot be used 00:40:57.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:57.926 EAL: Requested device 0000:3f:01.7 cannot be used 00:40:57.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:57.926 EAL: Requested device 0000:3f:02.0 cannot be used 00:40:57.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:57.926 EAL: Requested device 0000:3f:02.1 cannot be used 00:40:57.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:57.926 EAL: Requested device 0000:3f:02.2 cannot be used 00:40:57.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:57.926 EAL: Requested device 0000:3f:02.3 cannot be used 00:40:57.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:57.926 EAL: Requested device 0000:3f:02.4 cannot be used 00:40:57.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:57.926 EAL: Requested device 0000:3f:02.5 cannot be used 00:40:57.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:57.926 EAL: Requested device 0000:3f:02.6 cannot be used 00:40:57.926 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:57.926 EAL: Requested device 0000:3f:02.7 cannot be used 00:40:57.926 [2024-07-23 04:36:06.636578] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:58.185 [2024-07-23 04:36:06.929563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:58.185 [2024-07-23 04:36:06.929570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:40:58.185 [2024-07-23 04:36:06.951381] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:40:58.185 [2024-07-23 04:36:06.959419] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:40:58.185 [2024-07-23 04:36:06.967427] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation 
decrypt will be assigned to module dpdk_cryptodev 00:40:58.751 [2024-07-23 04:36:07.356345] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:41:02.034 [2024-07-23 04:36:10.228543] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:41:02.035 [2024-07-23 04:36:10.228616] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:41:02.035 [2024-07-23 04:36:10.228635] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:41:02.035 [2024-07-23 04:36:10.236563] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:41:02.035 [2024-07-23 04:36:10.236600] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:41:02.035 [2024-07-23 04:36:10.236618] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:41:02.035 [2024-07-23 04:36:10.244619] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:41:02.035 [2024-07-23 04:36:10.244656] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:41:02.035 [2024-07-23 04:36:10.244671] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:41:02.035 [2024-07-23 04:36:10.252599] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:41:02.035 [2024-07-23 04:36:10.252651] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:41:02.035 [2024-07-23 04:36:10.252667] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:41:02.035 Running I/O for 5 seconds... 
00:41:07.300 00:41:07.300 Latency(us) 00:41:07.301 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:07.301 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:41:07.301 Verification LBA range: start 0x0 length 0x1000 00:41:07.301 crypto_ram : 5.08 475.49 1.86 0.00 0.00 268202.85 5557.45 187065.96 00:41:07.301 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:41:07.301 Verification LBA range: start 0x1000 length 0x1000 00:41:07.301 crypto_ram : 5.08 478.22 1.87 0.00 0.00 267087.78 5242.88 187904.82 00:41:07.301 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:41:07.301 Verification LBA range: start 0x0 length 0x1000 00:41:07.301 crypto_ram2 : 5.08 478.45 1.87 0.00 0.00 266037.85 5950.67 171127.60 00:41:07.301 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:41:07.301 Verification LBA range: start 0x1000 length 0x1000 00:41:07.301 crypto_ram2 : 5.08 478.33 1.87 0.00 0.00 266086.44 5321.52 171966.46 00:41:07.301 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:41:07.301 Verification LBA range: start 0x0 length 0x1000 00:41:07.301 crypto_ram3 : 5.05 3672.98 14.35 0.00 0.00 34469.79 9909.04 28730.98 00:41:07.301 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:41:07.301 Verification LBA range: start 0x1000 length 0x1000 00:41:07.301 crypto_ram3 : 5.07 3686.67 14.40 0.00 0.00 34373.89 4980.74 29569.84 00:41:07.301 Job: crypto_ram4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:41:07.301 Verification LBA range: start 0x0 length 0x1000 00:41:07.301 crypto_ram4 : 5.07 3688.85 14.41 0.00 0.00 34261.04 4037.02 25899.83 00:41:07.301 Job: crypto_ram4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:41:07.301 Verification LBA range: start 0x1000 length 0x1000 00:41:07.301 crypto_ram4 : 5.07 3687.31 14.40 0.00 0.00 34259.37 4954.52 
25585.25 00:41:07.301 =================================================================================================================== 00:41:07.301 Total : 16646.28 65.02 0.00 0.00 61111.26 4037.02 187904.82 00:41:09.834 00:41:09.834 real 0m12.087s 00:41:09.834 user 0m22.283s 00:41:09.834 sys 0m0.550s 00:41:09.834 04:36:18 blockdev_crypto_aesni.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:09.834 04:36:18 blockdev_crypto_aesni.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:41:09.834 ************************************ 00:41:09.834 END TEST bdev_verify 00:41:09.834 ************************************ 00:41:09.834 04:36:18 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 0 00:41:09.834 04:36:18 blockdev_crypto_aesni -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:41:09.834 04:36:18 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:41:09.834 04:36:18 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:09.834 04:36:18 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:41:09.834 ************************************ 00:41:09.834 START TEST bdev_verify_big_io 00:41:09.834 ************************************ 00:41:09.834 04:36:18 blockdev_crypto_aesni.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:41:09.834 [2024-07-23 04:36:18.568570] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:41:09.834 [2024-07-23 04:36:18.568684] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2907339 ]
00:41:10.092 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:41:10.092 EAL: Requested device 0000:3d:01.0 cannot be used
00:41:10.092 [... the same "Reached maximum number of QAT devices" / "Requested device ... cannot be used" pair repeats for each remaining QAT virtual function, 0000:3d:01.1 through 0000:3f:02.7; 31 duplicate pairs elided ...]
00:41:10.092 [2024-07-23 04:36:18.782452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:41:10.350 [2024-07-23 04:36:19.073240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:41:10.350 [2024-07-23 04:36:19.073244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:41:10.350 [2024-07-23 04:36:19.095061] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb
00:41:10.350 [2024-07-23 04:36:19.103089] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:41:10.350 [2024-07-23 04:36:19.111099] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:41:10.917 [2024-07-23 04:36:19.491053] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97
00:41:14.203 [2024-07-23 04:36:22.378234] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1"
00:41:14.203 [2024-07-23 04:36:22.378321] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:41:14.203
[2024-07-23 04:36:22.378341] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:41:14.203 [2024-07-23 04:36:22.386246] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2"
00:41:14.203 [2024-07-23 04:36:22.386285] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:41:14.203 [2024-07-23 04:36:22.386300] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:41:14.203 [2024-07-23 04:36:22.394293] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3"
00:41:14.203 [2024-07-23 04:36:22.394332] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:41:14.203 [2024-07-23 04:36:22.394348] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:41:14.203 [2024-07-23 04:36:22.402281] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4"
00:41:14.203 [2024-07-23 04:36:22.402334] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:41:14.203 [2024-07-23 04:36:22.402350] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:41:14.203 Running I/O for 5 seconds...
00:41:16.736 [2024-07-23 04:36:25.410545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:41:16.736 [2024-07-23 04:36:25.410685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:41:16.736 [2024-07-23 04:36:25.419235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:41:16.736 [2024-07-23 04:36:25.419315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:41:16.736 [2024-07-23 04:36:25.420802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
[... the identical "Failed to get src_mbufs!" *ERROR* line repeats continuously while I/O is running, with only the timestamps changing (04:36:25.421 through 04:36:25.638); duplicate entries elided ...]
00:41:17.046 [2024-07-23 04:36:25.641254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:41:17.046 [2024-07-23 04:36:25.641320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.046 [2024-07-23 04:36:25.642238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.046 [2024-07-23 04:36:25.642290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.046 [2024-07-23 04:36:25.643763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.046 [2024-07-23 04:36:25.643828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.046 [2024-07-23 04:36:25.644229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.046 [2024-07-23 04:36:25.644278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.046 [2024-07-23 04:36:25.644694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.046 [2024-07-23 04:36:25.646205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.046 [2024-07-23 04:36:25.646274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.046 [2024-07-23 04:36:25.647807] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.046 [2024-07-23 04:36:25.647871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.046 [2024-07-23 04:36:25.648671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.046 [2024-07-23 04:36:25.648734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.046 [2024-07-23 04:36:25.649126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.046 [2024-07-23 04:36:25.649184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.046 [2024-07-23 04:36:25.649480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.046 [2024-07-23 04:36:25.651539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.046 [2024-07-23 04:36:25.651604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.046 [2024-07-23 04:36:25.652763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.046 [2024-07-23 04:36:25.652817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.046 [2024-07-23 04:36:25.653686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.046 [2024-07-23 04:36:25.653761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.046 [2024-07-23 04:36:25.654164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.046 [2024-07-23 04:36:25.654216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.046 [2024-07-23 04:36:25.654513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.046 [2024-07-23 04:36:25.657153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.046 [2024-07-23 04:36:25.657225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.046 [2024-07-23 04:36:25.658765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.046 [2024-07-23 04:36:25.658820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.046 [2024-07-23 04:36:25.659687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.046 [2024-07-23 04:36:25.659750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.046 [2024-07-23 04:36:25.660273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.046 [2024-07-23 04:36:25.660342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.046 [2024-07-23 04:36:25.660634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.046 [2024-07-23 04:36:25.663045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.046 [2024-07-23 04:36:25.663110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.046 [2024-07-23 04:36:25.663544] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.046 [2024-07-23 04:36:25.663595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.046 [2024-07-23 04:36:25.664466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.046 [2024-07-23 04:36:25.664535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.046 [2024-07-23 04:36:25.664932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.046 [2024-07-23 04:36:25.664986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.046 [2024-07-23 04:36:25.665377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.667023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.667088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.667493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.667549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.667572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.667988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.668515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.668577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.047 [2024-07-23 04:36:25.668971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.669024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.669358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.670442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.670851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.670907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.671312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.671683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.671857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.672268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.672669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.672724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.673101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.047 [2024-07-23 04:36:25.674300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.674361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.674407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.674453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.674772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.675296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.675361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.675408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.675454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.675871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.677009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.677069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.677114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.047 [2024-07-23 04:36:25.677185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.677586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.677758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.677812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.677858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.677905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.678207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.679357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.679428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.679474] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.679520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.679886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.680054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.047 [2024-07-23 04:36:25.680110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.680166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.680216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.680606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.681718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.681783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.681829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.681874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.682247] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.682420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.682475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.682522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.682568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.047 [2024-07-23 04:36:25.682889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.684129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.684200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.684245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.684291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.684710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.684876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.684932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.684995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.685041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.685434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.686678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.686737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.047 [2024-07-23 04:36:25.686782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.686828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.687123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.687305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.687363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.687408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.687459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.687779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.688836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.688896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.688940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.688985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.047 [2024-07-23 04:36:25.689350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.047 [2024-07-23 04:36:25.689526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.048 [2024-07-23 04:36:25.689608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.048 [2024-07-23 04:36:25.689656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.048 [2024-07-23 04:36:25.689717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.048 [2024-07-23 04:36:25.690085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.048 [2024-07-23 04:36:25.691359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.048 [2024-07-23 04:36:25.691429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.048 [2024-07-23 04:36:25.691489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.048 [2024-07-23 04:36:25.691537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.048 [2024-07-23 04:36:25.691869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.048 [2024-07-23 04:36:25.692039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.048 [2024-07-23 04:36:25.692095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.048 [2024-07-23 04:36:25.692151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.048 [2024-07-23 04:36:25.692198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.048 [2024-07-23 04:36:25.692570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.048 [2024-07-23 04:36:25.693666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.048 [2024-07-23 04:36:25.693725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.048 [2024-07-23 04:36:25.693770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.048 [2024-07-23 04:36:25.693816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.048 [2024-07-23 04:36:25.694181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.048 [2024-07-23 04:36:25.694362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.048 [2024-07-23 04:36:25.694419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.048 [2024-07-23 04:36:25.694470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.048 [2024-07-23 04:36:25.694515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.048 [2024-07-23 04:36:25.694824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.048 [2024-07-23 04:36:25.695976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.048 [2024-07-23 04:36:25.696048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.048 [2024-07-23 04:36:25.696105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.048 [2024-07-23 04:36:25.696160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.048 [2024-07-23 04:36:25.696542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.048 [2024-07-23 04:36:25.696706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.048 [2024-07-23 04:36:25.696776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.048 [2024-07-23 04:36:25.696823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.048 [2024-07-23 04:36:25.696882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.048 [2024-07-23 04:36:25.697306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.048 [2024-07-23 04:36:25.698343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.048 [2024-07-23 04:36:25.698409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.048 [2024-07-23 04:36:25.698455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.048 [2024-07-23 04:36:25.698501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.048 [2024-07-23 04:36:25.698844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! [identical message repeated through 2024-07-23 04:36:25.944751]
00:41:17.334 [2024-07-23 04:36:25.937713] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! [identical message repeated through 2024-07-23 04:36:26.055989]
00:41:17.336 [2024-07-23 04:36:26.056563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.336 [2024-07-23 04:36:26.056968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.336 [2024-07-23 04:36:26.057020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.336 [2024-07-23 04:36:26.057424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.336 [2024-07-23 04:36:26.057753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.336 [2024-07-23 04:36:26.059207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.336 [2024-07-23 04:36:26.059278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.336 [2024-07-23 04:36:26.059670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.336 [2024-07-23 04:36:26.059734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.336 [2024-07-23 04:36:26.060324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.336 [2024-07-23 04:36:26.060729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.336 [2024-07-23 04:36:26.060778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.336 [2024-07-23 04:36:26.061183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.336 [2024-07-23 04:36:26.061571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.336 [2024-07-23 04:36:26.062800] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.336 [2024-07-23 04:36:26.062860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.336 [2024-07-23 04:36:26.062905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.336 [2024-07-23 04:36:26.062951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.336 [2024-07-23 04:36:26.063419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.336 [2024-07-23 04:36:26.064586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.336 [2024-07-23 04:36:26.064641] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.336 [2024-07-23 04:36:26.065461] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.336 [2024-07-23 04:36:26.065766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.336 [2024-07-23 04:36:26.066837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.336 [2024-07-23 04:36:26.066898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.336 [2024-07-23 04:36:26.066944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.336 [2024-07-23 04:36:26.066991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.336 [2024-07-23 04:36:26.067458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.336 [2024-07-23 04:36:26.067519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.336 [2024-07-23 04:36:26.067566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.336 [2024-07-23 04:36:26.067612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.336 [2024-07-23 04:36:26.067908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.336 [2024-07-23 04:36:26.069011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.336 [2024-07-23 04:36:26.069073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.336 [2024-07-23 04:36:26.069118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.336 [2024-07-23 04:36:26.069171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.336 [2024-07-23 04:36:26.069650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.336 [2024-07-23 04:36:26.069707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.336 [2024-07-23 04:36:26.069753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.336 [2024-07-23 04:36:26.069798] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.336 [2024-07-23 04:36:26.070176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.336 [2024-07-23 04:36:26.071738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.336 [2024-07-23 04:36:26.071797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.336 [2024-07-23 04:36:26.071842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.336 [2024-07-23 04:36:26.071888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.336 [2024-07-23 04:36:26.072403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.336 [2024-07-23 04:36:26.072462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.336 [2024-07-23 04:36:26.072508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.336 [2024-07-23 04:36:26.072558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.336 [2024-07-23 04:36:26.072942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.336 [2024-07-23 04:36:26.073979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.336 [2024-07-23 04:36:26.074044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.336 [2024-07-23 04:36:26.074091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.336 [2024-07-23 04:36:26.074137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.336 [2024-07-23 04:36:26.074697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.336 [2024-07-23 04:36:26.074764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.074811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.074857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.075219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.076252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.076311] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.076357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.076403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.076850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.076913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.337 [2024-07-23 04:36:26.076967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.077026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.077329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.078444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.078503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.078549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.078594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.079041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.079117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.079182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.079230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.079530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.080636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.337 [2024-07-23 04:36:26.080694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.080743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.080787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.081410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.081467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.081513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.081559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.081938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.082973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.083642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.083695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.085111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.085573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.337 [2024-07-23 04:36:26.085637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.085683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.086731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.087070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.088084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.088975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.089031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.090442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.090947] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.091017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.092547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.092607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.092931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.337 [2024-07-23 04:36:26.094076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.094638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.094696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.096113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.097270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.097338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.098754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.098808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.099177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.100887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.102537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.102611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.104252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.337 [2024-07-23 04:36:26.106396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.106464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.107837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.107888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.108199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.109249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.109657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.109707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.111096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.111985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.112056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.113677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.337 [2024-07-23 04:36:26.113736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.598 [2024-07-23 04:36:26.114041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.598 [2024-07-23 04:36:26.115179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.598 [2024-07-23 04:36:26.116537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.598 [2024-07-23 04:36:26.116592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.598 [2024-07-23 04:36:26.116984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.598 [2024-07-23 04:36:26.118862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.598 [2024-07-23 04:36:26.118926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.598 [2024-07-23 04:36:26.119569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.598 [2024-07-23 04:36:26.119620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.598 [2024-07-23 04:36:26.119934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.598 [2024-07-23 04:36:26.121082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.598 [2024-07-23 04:36:26.121974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.598 [2024-07-23 04:36:26.122026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.598 [2024-07-23 04:36:26.123229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.598 [2024-07-23 04:36:26.124318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.599 [2024-07-23 04:36:26.124382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.599 [2024-07-23 04:36:26.125804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.599 [2024-07-23 04:36:26.125857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.599 [2024-07-23 04:36:26.126235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.599 [2024-07-23 04:36:26.127281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.599 [2024-07-23 04:36:26.127908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.599 [2024-07-23 04:36:26.127961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.599 [2024-07-23 04:36:26.128994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.599 [2024-07-23 04:36:26.130442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.599 [2024-07-23 04:36:26.130504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.599 [2024-07-23 04:36:26.130965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.599 [2024-07-23 04:36:26.131019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.599 [2024-07-23 04:36:26.131329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.599 [2024-07-23 04:36:26.132314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.599 [2024-07-23 04:36:26.133756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.599 [2024-07-23 04:36:26.133812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.599 [2024-07-23 04:36:26.134585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.599 [2024-07-23 04:36:26.135392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.599 [2024-07-23 04:36:26.135457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.599 [2024-07-23 04:36:26.136962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.599 [2024-07-23 04:36:26.137026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.599 [2024-07-23 04:36:26.137408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.599 [2024-07-23 04:36:26.138423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.599 [2024-07-23 04:36:26.138929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.599 [2024-07-23 04:36:26.138985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.862 [last message repeated continuously with timestamps 04:36:26.140408 through 04:36:26.436962] 
00:41:17.862 [2024-07-23 04:36:26.437276] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.862 [2024-07-23 04:36:26.441017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.441087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.441737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.441796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.443463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.443526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.445167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.445225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.445635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.450398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.450477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.452095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.863 [2024-07-23 04:36:26.452152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.453415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.453479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.454217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.454266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.454573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.458168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.458234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.459393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.459448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.461637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.461700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.462107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.863 [2024-07-23 04:36:26.462164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.462468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.464641] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.464707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.466095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.466150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.467797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.467859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.468636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.468689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.469066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.470872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.470937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.863 [2024-07-23 04:36:26.472049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.472100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.472908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.472983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.474567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.474618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.474980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.480789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.480858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.481258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.481310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.482105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.482182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.863 [2024-07-23 04:36:26.482582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.482643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.483062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.487983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.488048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.488463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.488518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.489320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.489386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.490861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.490910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.491268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.495569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.863 [2024-07-23 04:36:26.495643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.496037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.496085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.496972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.497037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.498442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.498493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.498927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.502873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.502939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.503595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.503645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.504492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.863 [2024-07-23 04:36:26.504555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.504951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.505004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.505375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.508901] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.508971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.509376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.509448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.510859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.510921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.512065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.512117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.512529] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.863 [2024-07-23 04:36:26.515092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.515164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.863 [2024-07-23 04:36:26.516112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.516167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.517032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.517102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.517675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.517730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.518034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.520592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.520658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.522177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.522227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.864 [2024-07-23 04:36:26.524048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.524109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.524509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.524571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.524979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.527114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.527192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.527586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.527640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.529653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.529722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.530115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.530174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.864 [2024-07-23 04:36:26.530477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.532801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.532865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.534488] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.534545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.535456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.535521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.535917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.535970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.536291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.539278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.539804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.539859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.864 [2024-07-23 04:36:26.540989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.541797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.541859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.543546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.543596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.543979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.547109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.548713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.549502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.549557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.550996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.551058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.551667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.864 [2024-07-23 04:36:26.551718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.552046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.556156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.556248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.556298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.556342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.557347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.557411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.558395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.558444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.558774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.561576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.561637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.864 [2024-07-23 04:36:26.561687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.561732] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.563685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.563748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.563794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.563840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.564148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.567555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.567615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.567661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.567707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.568198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.864 [2024-07-23 04:36:26.568262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.864 [2024-07-23 04:36:26.568308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:41:18.127 [2024-07-23 04:36:26.814569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.127 [2024-07-23 04:36:26.816159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.127 [2024-07-23 04:36:26.816553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.127 [2024-07-23 04:36:26.818156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.127 [2024-07-23 04:36:26.818711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.127 [2024-07-23 04:36:26.820265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.127 [2024-07-23 04:36:26.821876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.127 [2024-07-23 04:36:26.823569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.127 [2024-07-23 04:36:26.823883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.127 [2024-07-23 04:36:26.828004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.127 [2024-07-23 04:36:26.829628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.127 [2024-07-23 04:36:26.830030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.127 [2024-07-23 04:36:26.831598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.127 [2024-07-23 04:36:26.833662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.127 [2024-07-23 04:36:26.835045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.127 [2024-07-23 04:36:26.836212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.127 [2024-07-23 04:36:26.836922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.127 [2024-07-23 04:36:26.837244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.127 [2024-07-23 04:36:26.840930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.127 [2024-07-23 04:36:26.842171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.127 [2024-07-23 04:36:26.842573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.127 [2024-07-23 04:36:26.844264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.127 [2024-07-23 04:36:26.845112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.127 [2024-07-23 04:36:26.846777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.127 [2024-07-23 04:36:26.848148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.127 [2024-07-23 04:36:26.848753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.127 [2024-07-23 04:36:26.849063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.127 [2024-07-23 04:36:26.852357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.127 [2024-07-23 04:36:26.852768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.127 [2024-07-23 04:36:26.854407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.127 [2024-07-23 04:36:26.855976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.127 [2024-07-23 04:36:26.857610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.127 [2024-07-23 04:36:26.858344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.127 [2024-07-23 04:36:26.859464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.127 [2024-07-23 04:36:26.859859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.127 [2024-07-23 04:36:26.860178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.127 [2024-07-23 04:36:26.865055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.127 [2024-07-23 04:36:26.866083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.127 [2024-07-23 04:36:26.866930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.128 [2024-07-23 04:36:26.867931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.128 [2024-07-23 04:36:26.869922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.128 [2024-07-23 04:36:26.871555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.128 [2024-07-23 04:36:26.871959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.128 [2024-07-23 04:36:26.873568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.128 [2024-07-23 04:36:26.873879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.128 [2024-07-23 04:36:26.876873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.128 [2024-07-23 04:36:26.877984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.128 [2024-07-23 04:36:26.879594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.128 [2024-07-23 04:36:26.880295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.128 [2024-07-23 04:36:26.882409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.128 [2024-07-23 04:36:26.883290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.128 [2024-07-23 04:36:26.884259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.128 [2024-07-23 04:36:26.885096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.128 [2024-07-23 04:36:26.885417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.128 [2024-07-23 04:36:26.889758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.128 [2024-07-23 04:36:26.891226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.128 [2024-07-23 04:36:26.892253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.128 [2024-07-23 04:36:26.893079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.128 [2024-07-23 04:36:26.894352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.128 [2024-07-23 04:36:26.895252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.128 [2024-07-23 04:36:26.896223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.128 [2024-07-23 04:36:26.897705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.128 [2024-07-23 04:36:26.898082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.128 [2024-07-23 04:36:26.902083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.128 [2024-07-23 04:36:26.902786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.128 [2024-07-23 04:36:26.903778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.128 [2024-07-23 04:36:26.903832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.128 [2024-07-23 04:36:26.905443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.128 [2024-07-23 04:36:26.906767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.389 [2024-07-23 04:36:26.907652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.389 [2024-07-23 04:36:26.908712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.389 [2024-07-23 04:36:26.909026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.389 [2024-07-23 04:36:26.912497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.389 [2024-07-23 04:36:26.912566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.389 [2024-07-23 04:36:26.913098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.389 [2024-07-23 04:36:26.913158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.389 [2024-07-23 04:36:26.913673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.389 [2024-07-23 04:36:26.915025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.389 [2024-07-23 04:36:26.916220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.389 [2024-07-23 04:36:26.916276] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.389 [2024-07-23 04:36:26.916694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.389 [2024-07-23 04:36:26.921312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.389 [2024-07-23 04:36:26.921380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.389 [2024-07-23 04:36:26.923059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.923116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.924063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.924129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.925311] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.925360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.925667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.929643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.390 [2024-07-23 04:36:26.929708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.930625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.930680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.932529] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.932595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.932991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.933050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.933363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.937567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.937631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.939058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.939113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.940758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.390 [2024-07-23 04:36:26.940822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.941763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.941816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.942206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.946431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.946496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.947384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.947450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.948261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.948327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.949750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.949846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.950171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.390 [2024-07-23 04:36:26.954583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.954651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.955823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.955875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.957726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.957791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.958197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.958247] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.958583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.961483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.961548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.963272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.963324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.390 [2024-07-23 04:36:26.965526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.965590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.967038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.967093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.967409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.971883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.971960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.972368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.972418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.973232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.973295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.974733] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.974792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.390 [2024-07-23 04:36:26.975100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.979193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.979256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.980108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.980164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.981012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.981085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.981490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.981558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.981959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.985505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.985572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.985970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.390 [2024-07-23 04:36:26.986024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.986921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.986985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.988077] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.988127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.988446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.993319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.993383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.993776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.993830] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.994715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.994781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.390 [2024-07-23 04:36:26.996484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.390 [2024-07-23 04:36:26.996536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.170736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.655 [2024-07-23 04:36:27.171572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.171626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.172892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.173377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.174697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.174752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.175485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.175857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.178814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.180451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.180504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.181809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.182295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.655 [2024-07-23 04:36:27.183587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.183641] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.184955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.185312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.186383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.187662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.187717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.189013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.189498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.190380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.190435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.191797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.192103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.655 [2024-07-23 04:36:27.193116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.193531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.193583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.193983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.194445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.196037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.196097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.197771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.198082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.199182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.200450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.200509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.201798] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.655 [2024-07-23 04:36:27.202285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.202688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.202738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.203371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.203691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.204704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.205825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.205880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.207484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.207938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.209611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.209663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.211239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.655 [2024-07-23 04:36:27.211633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.212691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.214033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.214087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.215481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.215993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.216853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.216909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.218175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.655 [2024-07-23 04:36:27.218508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.219523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.219928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.219978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.656 [2024-07-23 04:36:27.220419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.220875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.222208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.222263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.223540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.223847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.224897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.226412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.226474] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.228031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.228570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.228976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.229027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.656 [2024-07-23 04:36:27.230184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.230521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.231628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.232357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.232412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.233669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.234123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.235430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.235484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.236591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.237041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.238280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.239679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.656 [2024-07-23 04:36:27.239736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.240232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.240691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.242299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.242371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.242763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.243116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.244169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.245201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.245259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.246840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.247367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.247917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.656 [2024-07-23 04:36:27.247974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.248372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.248762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.249799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.249873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.250829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.250883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.251397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.252361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.252420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.254016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.254495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.257352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.656 [2024-07-23 04:36:27.257423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.257481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.259182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.259698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.260718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.260776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.262090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.262452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.263473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.264397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.265683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.266814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.267277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.656 [2024-07-23 04:36:27.268564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.268621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.269953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.270312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.273973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.275676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.276937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.277726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.278192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.279037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.280042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.280449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.280774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.656 [2024-07-23 04:36:27.285130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.285544] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.285951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.287404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.288213] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.289640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.291173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.291576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.291890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.294441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.656 [2024-07-23 04:36:27.294942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.657 [2024-07-23 04:36:27.296309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.657 [2024-07-23 04:36:27.297890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.657 [2024-07-23 04:36:27.298760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.657 [2024-07-23 04:36:27.299178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.657 [2024-07-23 04:36:27.300679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.657 [2024-07-23 04:36:27.302075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.657 [2024-07-23 04:36:27.302443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.657 [2024-07-23 04:36:27.303840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.657 [2024-07-23 04:36:27.304255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.657 [2024-07-23 04:36:27.304750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.657 [2024-07-23 04:36:27.306103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.657 [2024-07-23 04:36:27.306998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.657 [2024-07-23 04:36:27.308320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.657 [2024-07-23 04:36:27.309987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.657 [2024-07-23 04:36:27.310401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.657 [2024-07-23 04:36:27.310745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
[identical "Failed to get src_mbufs!" error lines repeated through 2024-07-23 04:36:27.489036 omitted]
00:41:18.920 [2024-07-23 04:36:27.489082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.920 [2024-07-23 04:36:27.489129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.920 [2024-07-23 04:36:27.489591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.920 [2024-07-23 04:36:27.489652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.920 [2024-07-23 04:36:27.489698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.920 [2024-07-23 04:36:27.489744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.920 [2024-07-23 04:36:27.490064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.920 [2024-07-23 04:36:27.491092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.920 [2024-07-23 04:36:27.491159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.491207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.491258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.491739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.491795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.921 [2024-07-23 04:36:27.491841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.491886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.492201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.493291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.493351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.493398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.493458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.494014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.494079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.494130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.494185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.494518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.495539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.921 [2024-07-23 04:36:27.495599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.495645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.495699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.496188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.496246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.496291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.496343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.496645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.497670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.497732] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.497778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.497833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.498406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.921 [2024-07-23 04:36:27.498465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.498511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.498557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.498869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.499927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.499985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.500031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.501589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.502124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.502191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.502237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.502283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.502616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.921 [2024-07-23 04:36:27.503674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.504088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.504147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.504537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.506220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.506283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.506329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.507595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.507904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.508973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.510279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.510334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.511852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.921 [2024-07-23 04:36:27.512337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.512743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.512795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.513195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.513502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.514515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.515863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.515915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.517296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.517771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.519095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.519155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.520776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.921 [2024-07-23 04:36:27.521160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.522180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.523456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.523510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.524795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.525261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.525766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.525819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.527096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.527413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.528425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.528831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.528882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.921 [2024-07-23 04:36:27.529279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.529734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.531372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.531432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.532967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.533281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.534331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.535628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.921 [2024-07-23 04:36:27.535683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.537214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.537729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.538132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.538193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.922 [2024-07-23 04:36:27.539099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.539452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.540471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.541077] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.541132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.542484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.542938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.544502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.544556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.545613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.546051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.547212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.548895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.922 [2024-07-23 04:36:27.548955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.550659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.551114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.552315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.552368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.553635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.553989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.555100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.555516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.555567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.556516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.556994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.558315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.922 [2024-07-23 04:36:27.558369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.559899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.560270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.561280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.562844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.562899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.563964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.564538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.564944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.564994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.566655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.566963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.567972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.922 [2024-07-23 04:36:27.569100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.569162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.570413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.570899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.572473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.572528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.572929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.573274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.574294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.575630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.575684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.577212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.577713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.922 [2024-07-23 04:36:27.579292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.579349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.580846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.581162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.582205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.582612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.582662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.584174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.584694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.585338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.585397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.586871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.922 [2024-07-23 04:36:27.587199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.922 [2024-07-23 04:36:27.588355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [previous message repeated through 2024-07-23 04:36:27.780407; approximately 270 further identical occurrences omitted]
00:41:19.188 [2024-07-23 04:36:27.780463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.781305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.781390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.781782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.781829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.782148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.784678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.784748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.786450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.786511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.787355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.787420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.788236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.188 [2024-07-23 04:36:27.788287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.788657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.790538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.790603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.791440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.791501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.793306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.793371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.794280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.794335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.794666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.796475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.796553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.188 [2024-07-23 04:36:27.796945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.797000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.799129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.799197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.800665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.800738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.801169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.802509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.802574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.802964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.803017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.804704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.804767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.188 [2024-07-23 04:36:27.805939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.805994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.806350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.808701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.808769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.809175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.809230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.810238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.810302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.811278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.811332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.811638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.813778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.188 [2024-07-23 04:36:27.813842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.814249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.814302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.815458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.815522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.817056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.817109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.817469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.818980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.819046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.819449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.819505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.821193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.188 [2024-07-23 04:36:27.821271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.821919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.821972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.822310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.823783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.823860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.825548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.825608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.827678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.827753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.828153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.828207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.828547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.188 [2024-07-23 04:36:27.830982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.831052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.831454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.831504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.832531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.832595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.833931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.833983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.834330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.188 [2024-07-23 04:36:27.836228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.836636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.836688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.837126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.189 [2024-07-23 04:36:27.839108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.839181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.840584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.840637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.840943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.841990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.843703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.845329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.845394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.846303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.846366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.847706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.847759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.189 [2024-07-23 04:36:27.848087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.849834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.849898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.849945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.849991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.851774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.851838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.853153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.853205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.853539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.854667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.854726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.854772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.189 [2024-07-23 04:36:27.854817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.856545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.856608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.856654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.856701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.857008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.858044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.858103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.858158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.858204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.858664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.858727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.858779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.189 [2024-07-23 04:36:27.858824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.859134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.860313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.860372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.860422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.860467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.860919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.860976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.861022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.861067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.861423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.862467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.862526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.189 [2024-07-23 04:36:27.862573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.862624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.863119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.863186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.863243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.863289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.863593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.864634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.864694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.864740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.864807] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.865383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.865450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.189 [2024-07-23 04:36:27.865497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.865541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.865869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.866855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.866914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.866963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.867009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.867490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.867549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.867596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.867642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.867971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.189 [2024-07-23 04:36:27.869034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.189 [2024-07-23 04:36:27.869101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [... the same *ERROR* from accel_dpdk_cryptodev.c:468 repeated continuously, last occurrence 2024-07-23 04:36:28.057420; duplicate log lines elided ...] 
00:41:19.453 [2024-07-23 04:36:28.058325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.058734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.059131] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.059535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.059851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.061440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.061849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.062252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.062649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.063536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.063943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.064366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.064767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.453 [2024-07-23 04:36:28.065196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.067516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.068403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.069633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.070810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.071666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.072083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.073050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.074229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.074547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.076391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.076799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.077218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.453 [2024-07-23 04:36:28.077272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.078907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.079565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.080983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.082176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.082540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.085354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.085426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.086927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.086988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.087489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.088679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.090091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.453 [2024-07-23 04:36:28.090149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.090536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.092477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.092542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.093570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.093625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.094488] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.094552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.094960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.095014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.095331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.097976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.098048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.453 [2024-07-23 04:36:28.099674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.099742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.100597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.100663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.101315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.101367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.101700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.103617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.103682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.104771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.104825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.105753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.105818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.453 [2024-07-23 04:36:28.107160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.107211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.107573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.109730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.109795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.110200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.110255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.111096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.111171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.112519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.112592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.112901] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.115159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.453 [2024-07-23 04:36:28.115223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.115629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.115683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.117530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.117595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.118606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.118660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.119056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.120484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.120549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.120942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.453 [2024-07-23 04:36:28.120995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.122529] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.454 [2024-07-23 04:36:28.122594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.123990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.124044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.124356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.125765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.125831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.126247] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.126302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.127774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.127838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.128358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.128413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.128719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.454 [2024-07-23 04:36:28.130186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.130251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.130897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.130948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.132952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.133018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.134086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.134136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.134509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.136048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.136116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.137706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.137762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.454 [2024-07-23 04:36:28.138791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.138856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.140269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.140328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.140642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.142456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.142523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.143446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.143500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.144974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.145037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.146694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.146751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.454 [2024-07-23 04:36:28.147108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.148598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.148663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.149056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.149108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.150964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.151029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.151827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.151881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.152240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.153933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.153998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.155739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.454 [2024-07-23 04:36:28.155788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.157923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.157992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.158405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.158469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.158853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.161565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.161630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.163044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.163100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.164760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.164823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.166114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.454 [2024-07-23 04:36:28.166170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.166596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.168419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.168486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.168881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.168935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.169877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.169941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.170986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.171040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.171381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.174024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.454 [2024-07-23 04:36:28.174090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.454 [2024-07-23 04:36:28.175616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:41:19.455 [2024-07-23 04:36:28.210553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:41:20.022
00:41:20.022 Latency(us)
00:41:20.022 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:20.022 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:41:20.022 Verification LBA range: start 0x0 length 0x100
00:41:20.022 crypto_ram : 5.83 43.88 2.74 0.00 0.00 2842396.26 71303.17 2724619.88
00:41:20.022 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:41:20.022 Verification LBA range: start 0x100 length 0x100
00:41:20.022 crypto_ram : 5.80 44.16 2.76 0.00 0.00 2820461.36 75916.90 2657511.01
00:41:20.022 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:41:20.022 Verification LBA range: start 0x0 length 0x100
00:41:20.022 crypto_ram2 : 5.84 43.87 2.74 0.00 0.00 2740115.87 71303.17 2724619.88
00:41:20.022 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:41:20.022 Verification LBA range: start 0x100 length 0x100
00:41:20.022 crypto_ram2 : 5.80 44.15 2.76 0.00 0.00 2717668.15 75497.47 2617245.70
00:41:20.022 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:41:20.022 Verification LBA range: start 0x0 length 0x100
00:41:20.022 crypto_ram3 : 5.59 272.30 17.02 0.00 0.00 422088.81 64172.85 583847.12
00:41:20.022 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:41:20.022 Verification LBA range: start 0x100 length 0x100
00:41:20.022 crypto_ram3 : 5.58 283.58 17.72 0.00 0.00 404441.07 10328.47 583847.12
00:41:20.022 Job: crypto_ram4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:41:20.022 Verification LBA range: start 0x0 length 0x100
00:41:20.022 crypto_ram4 : 5.67 286.77 17.92 0.00 0.00 389254.48 15833.50 546937.24
00:41:20.022 Job: crypto_ram4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:41:20.022 Verification LBA range: start 0x100 length 0x100
00:41:20.022 crypto_ram4 : 5.65 297.71 18.61 0.00 0.00 374898.62 8283.75 546937.24
00:41:20.022 ===================================================================================================================
00:41:20.022 Total : 1316.41 82.28 0.00 0.00 725211.04 8283.75 2724619.88
00:41:23.309
00:41:23.309 real 0m13.074s
00:41:23.309 user 0m24.214s
00:41:23.309 sys 0m0.585s
00:41:23.309 04:36:31 blockdev_crypto_aesni.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable
00:41:23.309 04:36:31 blockdev_crypto_aesni.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:41:23.309 ************************************
00:41:23.309 END TEST bdev_verify_big_io
00:41:23.309 ************************************
00:41:23.309 04:36:31 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 0
00:41:23.309 04:36:31 blockdev_crypto_aesni -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:41:23.309 04:36:31 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:41:23.309 04:36:31 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable
00:41:23.309 04:36:31 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x
00:41:23.309 ************************************
00:41:23.309 START TEST bdev_write_zeroes
00:41:23.309 ************************************
00:41:23.309 04:36:31 blockdev_crypto_aesni.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:41:23.309 [2024-07-23 04:36:31.728271] Starting SPDK v24.09-pre git sha1
f7b31b2b9 / DPDK 24.03.0 initialization... 00:41:23.309 [2024-07-23 04:36:31.728387] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2909450 ] 00:41:23.309 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:23.309 EAL: Requested device 0000:3d:01.0 cannot be used 00:41:23.309 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:23.309 EAL: Requested device 0000:3d:01.1 cannot be used 00:41:23.309 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:23.309 EAL: Requested device 0000:3d:01.2 cannot be used 00:41:23.309 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:23.310 EAL: Requested device 0000:3d:01.3 cannot be used 00:41:23.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:23.310 EAL: Requested device 0000:3d:01.4 cannot be used 00:41:23.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:23.310 EAL: Requested device 0000:3d:01.5 cannot be used 00:41:23.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:23.310 EAL: Requested device 0000:3d:01.6 cannot be used 00:41:23.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:23.310 EAL: Requested device 0000:3d:01.7 cannot be used 00:41:23.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:23.310 EAL: Requested device 0000:3d:02.0 cannot be used 00:41:23.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:23.310 EAL: Requested device 0000:3d:02.1 cannot be used 00:41:23.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:23.310 EAL: Requested device 0000:3d:02.2 cannot be used 00:41:23.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:23.310 EAL: Requested 
device 0000:3d:02.3 cannot be used 00:41:23.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:23.310 EAL: Requested device 0000:3d:02.4 cannot be used 00:41:23.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:23.310 EAL: Requested device 0000:3d:02.5 cannot be used 00:41:23.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:23.310 EAL: Requested device 0000:3d:02.6 cannot be used 00:41:23.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:23.310 EAL: Requested device 0000:3d:02.7 cannot be used 00:41:23.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:23.310 EAL: Requested device 0000:3f:01.0 cannot be used 00:41:23.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:23.310 EAL: Requested device 0000:3f:01.1 cannot be used 00:41:23.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:23.310 EAL: Requested device 0000:3f:01.2 cannot be used 00:41:23.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:23.310 EAL: Requested device 0000:3f:01.3 cannot be used 00:41:23.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:23.310 EAL: Requested device 0000:3f:01.4 cannot be used 00:41:23.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:23.310 EAL: Requested device 0000:3f:01.5 cannot be used 00:41:23.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:23.310 EAL: Requested device 0000:3f:01.6 cannot be used 00:41:23.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:23.310 EAL: Requested device 0000:3f:01.7 cannot be used 00:41:23.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:23.310 EAL: Requested device 0000:3f:02.0 cannot be used 00:41:23.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:23.310 EAL: Requested device 0000:3f:02.1 
cannot be used 00:41:23.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:23.310 EAL: Requested device 0000:3f:02.2 cannot be used 00:41:23.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:23.310 EAL: Requested device 0000:3f:02.3 cannot be used 00:41:23.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:23.310 EAL: Requested device 0000:3f:02.4 cannot be used 00:41:23.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:23.310 EAL: Requested device 0000:3f:02.5 cannot be used 00:41:23.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:23.310 EAL: Requested device 0000:3f:02.6 cannot be used 00:41:23.310 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:23.310 EAL: Requested device 0000:3f:02.7 cannot be used 00:41:23.310 [2024-07-23 04:36:31.953743] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:23.569 [2024-07-23 04:36:32.239209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:23.569 [2024-07-23 04:36:32.260976] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:41:23.569 [2024-07-23 04:36:32.268995] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:41:23.570 [2024-07-23 04:36:32.277004] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:41:24.138 [2024-07-23 04:36:32.669225] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:41:27.430 [2024-07-23 04:36:35.558770] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:41:27.430 [2024-07-23 04:36:35.558845] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:41:27.430 [2024-07-23 04:36:35.558864] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: 
vbdev creation deferred pending base bdev arrival 00:41:27.430 [2024-07-23 04:36:35.566782] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:41:27.430 [2024-07-23 04:36:35.566820] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:41:27.430 [2024-07-23 04:36:35.566836] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:41:27.430 [2024-07-23 04:36:35.574823] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:41:27.430 [2024-07-23 04:36:35.574857] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:41:27.430 [2024-07-23 04:36:35.574872] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:41:27.430 [2024-07-23 04:36:35.582817] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:41:27.430 [2024-07-23 04:36:35.582848] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:41:27.430 [2024-07-23 04:36:35.582863] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:41:27.430 Running I/O for 1 seconds... 
00:41:28.368
00:41:28.368 Latency(us)
00:41:28.368 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:28.368 Job: crypto_ram (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:41:28.368 crypto_ram : 1.03 1857.25 7.25 0.00 0.00 68358.95 6317.67 83047.22
00:41:28.368 Job: crypto_ram2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:41:28.368 crypto_ram2 : 1.03 1862.55 7.28 0.00 0.00 67726.86 6160.38 77175.19
00:41:28.368 Job: crypto_ram3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:41:28.368 crypto_ram3 : 1.02 14259.11 55.70 0.00 0.00 8821.68 2660.76 11639.19
00:41:28.368 Job: crypto_ram4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:41:28.368 crypto_ram4 : 1.02 14296.99 55.85 0.00 0.00 8768.89 2647.65 9332.33
00:41:28.368 ===================================================================================================================
00:41:28.368 Total : 32275.90 126.08 0.00 0.00 15658.16 2647.65 83047.22
00:41:30.900
00:41:30.900 real 0m7.872s
00:41:30.900 user 0m7.262s
00:41:30.900 sys 0m0.547s
00:41:30.900 04:36:39 blockdev_crypto_aesni.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable
00:41:30.900 04:36:39 blockdev_crypto_aesni.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:41:30.900 ************************************
00:41:30.900 END TEST bdev_write_zeroes
00:41:30.900 ************************************
00:41:30.900 04:36:39 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 0
00:41:30.900 04:36:39 blockdev_crypto_aesni -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
04:36:39 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:41:30.900
04:36:39 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:30.900 04:36:39 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:41:30.900 ************************************ 00:41:30.900 START TEST bdev_json_nonenclosed 00:41:30.900 ************************************ 00:41:30.900 04:36:39 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:41:30.900 [2024-07-23 04:36:39.672438] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:41:30.900 [2024-07-23 04:36:39.672552] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2910773 ] 00:41:31.160 [2024-07-23 04:36:39.898814] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:31.419 [2024-07-23 04:36:40.163606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:31.419 [2024-07-23 04:36:40.163699] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:41:31.419 [2024-07-23 04:36:40.163725] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:41:31.419 [2024-07-23 04:36:40.163741] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:41:31.985 00:41:31.985 real 0m1.150s 00:41:31.985 user 0m0.879s 00:41:31.985 sys 0m0.265s 00:41:31.986 04:36:40 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:41:31.986 04:36:40 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:31.986 04:36:40 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:41:31.986 ************************************ 00:41:31.986 END TEST bdev_json_nonenclosed 00:41:31.986 ************************************ 00:41:31.986 04:36:40 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 234 00:41:31.986 04:36:40 blockdev_crypto_aesni -- bdev/blockdev.sh@781 -- # true 00:41:31.986 04:36:40 blockdev_crypto_aesni -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:41:31.986 04:36:40 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:41:31.986 04:36:40 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:31.986 04:36:40 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:41:32.271 ************************************ 00:41:32.271 START TEST bdev_json_nonarray 00:41:32.271 ************************************ 00:41:32.271 04:36:40 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 
00:41:32.271 [2024-07-23 04:36:40.908778] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:41:32.271 [2024-07-23 04:36:40.908892] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2911054 ] 00:41:32.539 [2024-07-23 04:36:41.133787] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:32.797 [2024-07-23 04:36:41.395578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:32.797 [2024-07-23 04:36:41.395668] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:41:32.797 [2024-07-23 04:36:41.395702] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:41:32.797 [2024-07-23 04:36:41.395718] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:41:33.364 00:41:33.364 real 0m1.146s 00:41:33.364 user 0m0.881s 00:41:33.364 sys 0m0.259s 00:41:33.364 04:36:41 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:41:33.365 04:36:41 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:33.365 04:36:41 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:41:33.365 ************************************ 00:41:33.365 END TEST bdev_json_nonarray 00:41:33.365 ************************************ 00:41:33.365 04:36:41 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 234 00:41:33.365 04:36:41 blockdev_crypto_aesni -- bdev/blockdev.sh@784 -- # true 00:41:33.365 04:36:41 blockdev_crypto_aesni -- bdev/blockdev.sh@786 -- # [[ crypto_aesni == bdev ]] 00:41:33.365 04:36:41 blockdev_crypto_aesni -- bdev/blockdev.sh@793 -- # [[ crypto_aesni == gpt ]] 00:41:33.365 04:36:41 blockdev_crypto_aesni -- bdev/blockdev.sh@797 -- # [[ crypto_aesni == crypto_sw ]] 00:41:33.365 04:36:41 blockdev_crypto_aesni -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:41:33.365 04:36:41 blockdev_crypto_aesni -- bdev/blockdev.sh@810 -- # cleanup 00:41:33.365 04:36:41 blockdev_crypto_aesni -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:41:33.365 04:36:42 blockdev_crypto_aesni -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:41:33.365 04:36:42 blockdev_crypto_aesni -- bdev/blockdev.sh@26 -- # [[ crypto_aesni == rbd ]] 00:41:33.365 04:36:42 blockdev_crypto_aesni -- bdev/blockdev.sh@30 -- # [[ crypto_aesni == daos ]] 00:41:33.365 04:36:42 blockdev_crypto_aesni -- 
bdev/blockdev.sh@34 -- # [[ crypto_aesni = \g\p\t ]] 00:41:33.365 04:36:42 blockdev_crypto_aesni -- bdev/blockdev.sh@40 -- # [[ crypto_aesni == xnvme ]] 00:41:33.365 00:41:33.365 real 1m48.957s 00:41:33.365 user 3m45.251s 00:41:33.365 sys 0m11.129s 00:41:33.365 04:36:42 blockdev_crypto_aesni -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:33.365 04:36:42 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:41:33.365 ************************************ 00:41:33.365 END TEST blockdev_crypto_aesni 00:41:33.365 ************************************ 00:41:33.365 04:36:42 -- common/autotest_common.sh@1142 -- # return 0 00:41:33.365 04:36:42 -- spdk/autotest.sh@358 -- # run_test blockdev_crypto_sw /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_sw 00:41:33.365 04:36:42 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:41:33.365 04:36:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:33.365 04:36:42 -- common/autotest_common.sh@10 -- # set +x 00:41:33.365 ************************************ 00:41:33.365 START TEST blockdev_crypto_sw 00:41:33.365 ************************************ 00:41:33.365 04:36:42 blockdev_crypto_sw -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_sw 00:41:33.623 * Looking for test storage... 
00:41:33.623 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:41:33.623 04:36:42 blockdev_crypto_sw -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:41:33.623 04:36:42 blockdev_crypto_sw -- bdev/nbd_common.sh@6 -- # set -e 00:41:33.623 04:36:42 blockdev_crypto_sw -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:41:33.623 04:36:42 blockdev_crypto_sw -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:41:33.623 04:36:42 blockdev_crypto_sw -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:41:33.623 04:36:42 blockdev_crypto_sw -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:41:33.623 04:36:42 blockdev_crypto_sw -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:41:33.623 04:36:42 blockdev_crypto_sw -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:41:33.623 04:36:42 blockdev_crypto_sw -- bdev/blockdev.sh@20 -- # : 00:41:33.623 04:36:42 blockdev_crypto_sw -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:41:33.623 04:36:42 blockdev_crypto_sw -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:41:33.623 04:36:42 blockdev_crypto_sw -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:41:33.623 04:36:42 blockdev_crypto_sw -- bdev/blockdev.sh@673 -- # uname -s 00:41:33.623 04:36:42 blockdev_crypto_sw -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:41:33.623 04:36:42 blockdev_crypto_sw -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:41:33.623 04:36:42 blockdev_crypto_sw -- bdev/blockdev.sh@681 -- # test_type=crypto_sw 00:41:33.623 04:36:42 blockdev_crypto_sw -- bdev/blockdev.sh@682 -- # crypto_device= 00:41:33.623 04:36:42 blockdev_crypto_sw -- bdev/blockdev.sh@683 -- # dek= 00:41:33.623 04:36:42 blockdev_crypto_sw -- bdev/blockdev.sh@684 -- # env_ctx= 00:41:33.623 
04:36:42 blockdev_crypto_sw -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:41:33.623 04:36:42 blockdev_crypto_sw -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:41:33.623 04:36:42 blockdev_crypto_sw -- bdev/blockdev.sh@689 -- # [[ crypto_sw == bdev ]] 00:41:33.623 04:36:42 blockdev_crypto_sw -- bdev/blockdev.sh@689 -- # [[ crypto_sw == crypto_* ]] 00:41:33.623 04:36:42 blockdev_crypto_sw -- bdev/blockdev.sh@690 -- # wait_for_rpc=--wait-for-rpc 00:41:33.623 04:36:42 blockdev_crypto_sw -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:41:33.623 04:36:42 blockdev_crypto_sw -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=2911319 00:41:33.623 04:36:42 blockdev_crypto_sw -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:41:33.623 04:36:42 blockdev_crypto_sw -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:41:33.623 04:36:42 blockdev_crypto_sw -- bdev/blockdev.sh@49 -- # waitforlisten 2911319 00:41:33.623 04:36:42 blockdev_crypto_sw -- common/autotest_common.sh@829 -- # '[' -z 2911319 ']' 00:41:33.623 04:36:42 blockdev_crypto_sw -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:33.623 04:36:42 blockdev_crypto_sw -- common/autotest_common.sh@834 -- # local max_retries=100 00:41:33.623 04:36:42 blockdev_crypto_sw -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:33.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:33.623 04:36:42 blockdev_crypto_sw -- common/autotest_common.sh@838 -- # xtrace_disable 00:41:33.623 04:36:42 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:41:33.623 [2024-07-23 04:36:42.315005] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:41:33.623 [2024-07-23 04:36:42.315101] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2911319 ] 00:41:33.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:33.881 EAL: Requested devices 0000:3d:01.0 through 0000:3f:02.7 cannot be used 00:41:33.882 [2024-07-23 04:36:42.511965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:34.140 [2024-07-23 04:36:42.793453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:34.399 04:36:43 blockdev_crypto_sw -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:41:34.399 04:36:43 blockdev_crypto_sw -- common/autotest_common.sh@862 -- # return 0 00:41:34.399 04:36:43 blockdev_crypto_sw -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:41:34.399 04:36:43 blockdev_crypto_sw -- bdev/blockdev.sh@710 -- # setup_crypto_sw_conf 00:41:34.399 04:36:43 blockdev_crypto_sw -- bdev/blockdev.sh@192 -- # rpc_cmd 00:41:34.399 04:36:43 blockdev_crypto_sw -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:34.399 04:36:43 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:41:35.775 Malloc0 00:41:35.775 Malloc1 00:41:35.775 true 00:41:35.775 true 00:41:35.775 true 00:41:35.775 [2024-07-23 04:36:44.322033] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:41:35.775 crypto_ram 00:41:35.775 [2024-07-23 04:36:44.330042] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create:
*NOTICE*: Found key "test_dek_sw2" 00:41:35.775 crypto_ram2 00:41:35.775 [2024-07-23 04:36:44.338097] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:41:35.775 crypto_ram3 00:41:35.775 [ 00:41:35.775 { 00:41:35.775 "name": "Malloc1", 00:41:35.775 "aliases": [ 00:41:35.775 "920488a7-712e-46d5-8e42-033cf45c7924" 00:41:35.775 ], 00:41:35.775 "product_name": "Malloc disk", 00:41:35.775 "block_size": 4096, 00:41:35.775 "num_blocks": 4096, 00:41:35.775 "uuid": "920488a7-712e-46d5-8e42-033cf45c7924", 00:41:35.775 "assigned_rate_limits": { 00:41:35.775 "rw_ios_per_sec": 0, 00:41:35.775 "rw_mbytes_per_sec": 0, 00:41:35.775 "r_mbytes_per_sec": 0, 00:41:35.775 "w_mbytes_per_sec": 0 00:41:35.775 }, 00:41:35.775 "claimed": true, 00:41:35.775 "claim_type": "exclusive_write", 00:41:35.775 "zoned": false, 00:41:35.775 "supported_io_types": { 00:41:35.775 "read": true, 00:41:35.775 "write": true, 00:41:35.775 "unmap": true, 00:41:35.775 "flush": true, 00:41:35.775 "reset": true, 00:41:35.775 "nvme_admin": false, 00:41:35.775 "nvme_io": false, 00:41:35.775 "nvme_io_md": false, 00:41:35.775 "write_zeroes": true, 00:41:35.775 "zcopy": true, 00:41:35.775 "get_zone_info": false, 00:41:35.775 "zone_management": false, 00:41:35.775 "zone_append": false, 00:41:35.775 "compare": false, 00:41:35.775 "compare_and_write": false, 00:41:35.775 "abort": true, 00:41:35.775 "seek_hole": false, 00:41:35.775 "seek_data": false, 00:41:35.775 "copy": true, 00:41:35.775 "nvme_iov_md": false 00:41:35.775 }, 00:41:35.775 "memory_domains": [ 00:41:35.775 { 00:41:35.775 "dma_device_id": "system", 00:41:35.775 "dma_device_type": 1 00:41:35.775 }, 00:41:35.775 { 00:41:35.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:41:35.775 "dma_device_type": 2 00:41:35.775 } 00:41:35.775 ], 00:41:35.775 "driver_specific": {} 00:41:35.775 } 00:41:35.775 ] 00:41:35.775 04:36:44 blockdev_crypto_sw -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:35.775 04:36:44 
blockdev_crypto_sw -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:41:35.775 04:36:44 blockdev_crypto_sw -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:35.775 04:36:44 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:41:35.775 04:36:44 blockdev_crypto_sw -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:35.775 04:36:44 blockdev_crypto_sw -- bdev/blockdev.sh@739 -- # cat 00:41:35.775 04:36:44 blockdev_crypto_sw -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:41:35.775 04:36:44 blockdev_crypto_sw -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:35.775 04:36:44 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:41:35.775 04:36:44 blockdev_crypto_sw -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:35.775 04:36:44 blockdev_crypto_sw -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:41:35.775 04:36:44 blockdev_crypto_sw -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:35.775 04:36:44 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:41:35.775 04:36:44 blockdev_crypto_sw -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:35.775 04:36:44 blockdev_crypto_sw -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:41:35.775 04:36:44 blockdev_crypto_sw -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:35.775 04:36:44 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:41:35.775 04:36:44 blockdev_crypto_sw -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:35.775 04:36:44 blockdev_crypto_sw -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:41:35.775 04:36:44 blockdev_crypto_sw -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:41:35.775 04:36:44 blockdev_crypto_sw -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:41:35.775 04:36:44 blockdev_crypto_sw -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:35.775 04:36:44 blockdev_crypto_sw -- 
common/autotest_common.sh@10 -- # set +x 00:41:35.775 04:36:44 blockdev_crypto_sw -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:35.775 04:36:44 blockdev_crypto_sw -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:41:35.775 04:36:44 blockdev_crypto_sw -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "7ac526b6-b85b-52a1-93bb-286706c1365f"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "7ac526b6-b85b-52a1-93bb-286706c1365f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_sw"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "2000d77e-677d-5704-b6d9-d40909ea1ba1"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 4096,' ' "uuid": "2000d77e-677d-5704-b6d9-d40909ea1ba1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' 
"nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "crypto_ram2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_sw3"' ' }' ' }' '}' 00:41:35.775 04:36:44 blockdev_crypto_sw -- bdev/blockdev.sh@748 -- # jq -r .name 00:41:35.775 04:36:44 blockdev_crypto_sw -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:41:35.775 04:36:44 blockdev_crypto_sw -- bdev/blockdev.sh@751 -- # hello_world_bdev=crypto_ram 00:41:35.775 04:36:44 blockdev_crypto_sw -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:41:35.775 04:36:44 blockdev_crypto_sw -- bdev/blockdev.sh@753 -- # killprocess 2911319 00:41:35.775 04:36:44 blockdev_crypto_sw -- common/autotest_common.sh@948 -- # '[' -z 2911319 ']' 00:41:35.775 04:36:44 blockdev_crypto_sw -- common/autotest_common.sh@952 -- # kill -0 2911319 00:41:35.775 04:36:44 blockdev_crypto_sw -- common/autotest_common.sh@953 -- # uname 00:41:36.034 04:36:44 blockdev_crypto_sw -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:41:36.034 04:36:44 blockdev_crypto_sw -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2911319 00:41:36.034 04:36:44 blockdev_crypto_sw -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:41:36.034 04:36:44 blockdev_crypto_sw -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:41:36.034 04:36:44 blockdev_crypto_sw -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2911319' 00:41:36.034 killing process with pid 2911319 00:41:36.034 04:36:44 blockdev_crypto_sw -- common/autotest_common.sh@967 -- # kill 2911319 00:41:36.034 
04:36:44 blockdev_crypto_sw -- common/autotest_common.sh@972 -- # wait 2911319 00:41:39.314 04:36:48 blockdev_crypto_sw -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:41:39.314 04:36:48 blockdev_crypto_sw -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:41:39.314 04:36:48 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:41:39.314 04:36:48 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:39.314 04:36:48 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:41:39.314 ************************************ 00:41:39.314 START TEST bdev_hello_world 00:41:39.315 ************************************ 00:41:39.315 04:36:48 blockdev_crypto_sw.bdev_hello_world -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:41:39.573 [2024-07-23 04:36:48.152342] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:41:39.573 [2024-07-23 04:36:48.152453] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2912191 ] 00:41:39.573 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:39.573 EAL: Requested devices 0000:3d:01.0 through 0000:3f:02.7 cannot be used 00:41:39.831 [2024-07-23 04:36:48.376352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:40.088 [2024-07-23 04:36:48.637505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:40.654 [2024-07-23 04:36:49.217968] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:41:40.654 [2024-07-23 04:36:49.218044] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:41:40.654 [2024-07-23 04:36:49.218064] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:41:40.654 [2024-07-23 04:36:49.225988] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:41:40.654 [2024-07-23 04:36:49.226028] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:41:40.654 [2024-07-23 04:36:49.226045] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:41:40.654 [2024-07-23 04:36:49.234000] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:41:40.654 [2024-07-23 04:36:49.234040]
bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:41:40.654 [2024-07-23 04:36:49.234056] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:41:40.654 [2024-07-23 04:36:49.324121] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:41:40.654 [2024-07-23 04:36:49.324163] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev crypto_ram 00:41:40.654 [2024-07-23 04:36:49.324191] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:41:40.654 [2024-07-23 04:36:49.326413] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:41:40.654 [2024-07-23 04:36:49.326520] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:41:40.654 [2024-07-23 04:36:49.326541] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:41:40.654 [2024-07-23 04:36:49.326580] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:41:40.654 00:41:40.654 [2024-07-23 04:36:49.326603] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:41:42.556 00:41:42.556 real 0m3.045s 00:41:42.556 user 0m2.636s 00:41:42.556 sys 0m0.381s 00:41:42.556 04:36:51 blockdev_crypto_sw.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:42.556 04:36:51 blockdev_crypto_sw.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:41:42.556 ************************************ 00:41:42.556 END TEST bdev_hello_world 00:41:42.556 ************************************ 00:41:42.556 04:36:51 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0 00:41:42.556 04:36:51 blockdev_crypto_sw -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:41:42.556 04:36:51 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:41:42.556 04:36:51 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:42.556 04:36:51 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:41:42.556 ************************************ 00:41:42.556 START TEST bdev_bounds 00:41:42.556 ************************************ 00:41:42.556 04:36:51 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:41:42.556 04:36:51 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=2912735 00:41:42.556 04:36:51 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:41:42.556 04:36:51 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@288 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:41:42.556 04:36:51 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 2912735' 00:41:42.556 Process bdevio pid: 2912735 00:41:42.556 04:36:51 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@292 -- 
# waitforlisten 2912735 00:41:42.556 04:36:51 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 2912735 ']' 00:41:42.556 04:36:51 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:42.556 04:36:51 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:41:42.556 04:36:51 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:42.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:42.556 04:36:51 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:41:42.556 04:36:51 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:41:42.556 [2024-07-23 04:36:51.286411] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:41:42.556 [2024-07-23 04:36:51.286532] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2912735 ] 00:41:42.814 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:42.815 EAL: Requested devices 0000:3d:01.0 through 0000:3f:02.7 cannot be used 00:41:42.815 [2024-07-23 04:36:51.512406] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:41:43.074 [2024-07-23 04:36:51.812159] reactor.c: 941:reactor_run:
*NOTICE*: Reactor started on core 1 00:41:43.074 [2024-07-23 04:36:51.812222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:43.074 [2024-07-23 04:36:51.812228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:41:43.642 [2024-07-23 04:36:52.413887] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:41:43.642 [2024-07-23 04:36:52.413957] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:41:43.642 [2024-07-23 04:36:52.413982] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:41:43.642 [2024-07-23 04:36:52.421901] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:41:43.642 [2024-07-23 04:36:52.421937] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:41:43.642 [2024-07-23 04:36:52.421953] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:41:43.901 [2024-07-23 04:36:52.429928] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:41:43.901 [2024-07-23 04:36:52.429965] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:41:43.901 [2024-07-23 04:36:52.429980] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:41:43.901 04:36:52 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:41:43.901 04:36:52 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:41:43.901 04:36:52 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:41:43.901 I/O targets: 00:41:43.901 crypto_ram: 32768 blocks of 512 bytes (16 MiB) 00:41:43.901 crypto_ram3: 4096 blocks of 4096 bytes (16 MiB) 00:41:43.901 00:41:43.901 00:41:43.901 
CUnit - A unit testing framework for C - Version 2.1-3
00:41:43.901 http://cunit.sourceforge.net/
00:41:43.901
00:41:43.901
00:41:43.901 Suite: bdevio tests on: crypto_ram3
00:41:43.901 Test: blockdev write read block ...passed
00:41:43.901 Test: blockdev write zeroes read block ...passed
00:41:43.901 Test: blockdev write zeroes read no split ...passed
00:41:44.160 Test: blockdev write zeroes read split ...passed
00:41:44.160 Test: blockdev write zeroes read split partial ...passed
00:41:44.160 Test: blockdev reset ...passed
00:41:44.160 Test: blockdev write read 8 blocks ...passed
00:41:44.160 Test: blockdev write read size > 128k ...passed
00:41:44.160 Test: blockdev write read invalid size ...passed
00:41:44.160 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:41:44.160 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:41:44.160 Test: blockdev write read max offset ...passed
00:41:44.160 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:41:44.160 Test: blockdev writev readv 8 blocks ...passed
00:41:44.160 Test: blockdev writev readv 30 x 1block ...passed
00:41:44.160 Test: blockdev writev readv block ...passed
00:41:44.160 Test: blockdev writev readv size > 128k ...passed
00:41:44.160 Test: blockdev writev readv size > 128k in two iovs ...passed
00:41:44.160 Test: blockdev comparev and writev ...passed
00:41:44.160 Test: blockdev nvme passthru rw ...passed
00:41:44.160 Test: blockdev nvme passthru vendor specific ...passed
00:41:44.160 Test: blockdev nvme admin passthru ...passed
00:41:44.160 Test: blockdev copy ...passed
00:41:44.160 Suite: bdevio tests on: crypto_ram
00:41:44.160 Test: blockdev write read block ...passed
00:41:44.160 Test: blockdev write zeroes read block ...passed
00:41:44.160 Test: blockdev write zeroes read no split ...passed
00:41:44.160 Test: blockdev write zeroes read split ...passed
00:41:44.160 Test: blockdev write zeroes read split partial ...passed
00:41:44.160 Test: blockdev reset ...passed
00:41:44.160 Test: blockdev write read 8 blocks ...passed
00:41:44.160 Test: blockdev write read size > 128k ...passed
00:41:44.160 Test: blockdev write read invalid size ...passed
00:41:44.160 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:41:44.160 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:41:44.160 Test: blockdev write read max offset ...passed
00:41:44.160 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:41:44.160 Test: blockdev writev readv 8 blocks ...passed
00:41:44.160 Test: blockdev writev readv 30 x 1block ...passed
00:41:44.160 Test: blockdev writev readv block ...passed
00:41:44.160 Test: blockdev writev readv size > 128k ...passed
00:41:44.160 Test: blockdev writev readv size > 128k in two iovs ...passed
00:41:44.160 Test: blockdev comparev and writev ...passed
00:41:44.160 Test: blockdev nvme passthru rw ...passed
00:41:44.160 Test: blockdev nvme passthru vendor specific ...passed
00:41:44.160 Test: blockdev nvme admin passthru ...passed
00:41:44.160 Test: blockdev copy ...passed
00:41:44.160
00:41:44.160 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:41:44.160               suites      2      2    n/a      0        0
00:41:44.160                tests     46     46     46      0        0
00:41:44.160              asserts    260    260    260      0      n/a
00:41:44.160
00:41:44.160 Elapsed time = 0.577 seconds
00:41:44.160 0
00:41:44.160 04:36:52 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 2912735
00:41:44.160 04:36:52 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 2912735 ']'
00:41:44.160 04:36:52 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 2912735
00:41:44.160 04:36:52 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@953 -- # uname
00:41:44.160 04:36:52 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:41:44.160 04:36:52 blockdev_crypto_sw.bdev_bounds -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2912735 00:41:44.160 04:36:52 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:41:44.160 04:36:52 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:41:44.161 04:36:52 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2912735' 00:41:44.161 killing process with pid 2912735 00:41:44.161 04:36:52 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@967 -- # kill 2912735 00:41:44.161 04:36:52 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@972 -- # wait 2912735 00:41:46.065 04:36:54 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:41:46.065 00:41:46.065 real 0m3.564s 00:41:46.065 user 0m8.267s 00:41:46.065 sys 0m0.546s 00:41:46.065 04:36:54 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:46.065 04:36:54 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:41:46.065 ************************************ 00:41:46.065 END TEST bdev_bounds 00:41:46.065 ************************************ 00:41:46.065 04:36:54 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0 00:41:46.066 04:36:54 blockdev_crypto_sw -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram3' '' 00:41:46.066 04:36:54 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:41:46.066 04:36:54 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:46.066 04:36:54 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:41:46.066 ************************************ 00:41:46.066 START TEST bdev_nbd 00:41:46.066 ************************************ 00:41:46.066 04:36:54 blockdev_crypto_sw.bdev_nbd -- 
common/autotest_common.sh@1123 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram3' '' 00:41:46.066 04:36:54 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:41:46.066 04:36:54 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:41:46.066 04:36:54 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:46.066 04:36:54 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:41:46.066 04:36:54 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('crypto_ram' 'crypto_ram3') 00:41:46.066 04:36:54 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:41:46.066 04:36:54 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=2 00:41:46.066 04:36:54 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:41:46.066 04:36:54 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:41:46.066 04:36:54 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:41:46.066 04:36:54 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=2 00:41:46.066 04:36:54 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:41:46.066 04:36:54 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:41:46.066 04:36:54 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:41:46.066 04:36:54 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:41:46.066 04:36:54 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@317 -- # 
nbd_pid=2913303 00:41:46.066 04:36:54 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:41:46.066 04:36:54 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@316 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:41:46.066 04:36:54 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 2913303 /var/tmp/spdk-nbd.sock 00:41:46.066 04:36:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 2913303 ']' 00:41:46.066 04:36:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:41:46.066 04:36:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:41:46.066 04:36:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:41:46.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:41:46.066 04:36:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:41:46.066 04:36:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:41:46.325 [2024-07-23 04:36:54.943449] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:41:46.325 [2024-07-23 04:36:54.943556] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:46.325 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:46.325 EAL: Requested device 0000:3d:01.0 cannot be used 00:41:46.325 00:41:46.584 [2024-07-23 04:36:55.171366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:46.844 [2024-07-23 04:36:55.459545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:47.413 [2024-07-23 04:36:56.054181] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:41:47.413 [2024-07-23 04:36:56.054260] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:41:47.413 [2024-07-23 04:36:56.054279] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:41:47.413 [2024-07-23 04:36:56.062212] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:41:47.413 [2024-07-23 04:36:56.062251] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:41:47.413 [2024-07-23 04:36:56.062267] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:41:47.413 [2024-07-23 04:36:56.070246] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:41:47.413 [2024-07-23 04:36:56.070281]
bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:41:47.413 [2024-07-23 04:36:56.070296] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:41:47.413 04:36:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:41:47.413 04:36:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:41:47.413 04:36:56 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' 00:41:47.413 04:36:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:47.413 04:36:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:41:47.413 04:36:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:41:47.413 04:36:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' 00:41:47.413 04:36:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:47.413 04:36:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:41:47.413 04:36:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:41:47.413 04:36:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:41:47.413 04:36:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:41:47.413 04:36:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:41:47.413 04:36:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:41:47.413 04:36:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram 00:41:47.672 
04:36:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:41:47.672 04:36:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:41:47.672 04:36:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:41:47.672 04:36:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:41:47.672 04:36:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:41:47.672 04:36:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:41:47.672 04:36:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:41:47.672 04:36:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:41:47.672 04:36:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:41:47.672 04:36:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:41:47.672 04:36:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:41:47.672 04:36:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:47.672 1+0 records in 00:41:47.672 1+0 records out 00:41:47.672 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00022882 s, 17.9 MB/s 00:41:47.672 04:36:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:41:47.932 04:36:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:41:47.932 04:36:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:41:47.932 04:36:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:41:47.932 04:36:56 
blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:41:47.932 04:36:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:41:47.932 04:36:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:41:47.932 04:36:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 00:41:47.932 04:36:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:41:47.932 04:36:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:41:48.192 04:36:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:41:48.192 04:36:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:41:48.192 04:36:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:41:48.192 04:36:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:41:48.192 04:36:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:41:48.192 04:36:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:41:48.192 04:36:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:41:48.192 04:36:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:41:48.192 04:36:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:41:48.192 04:36:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:48.192 1+0 records in 00:41:48.192 1+0 records out 00:41:48.192 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360825 s, 11.4 MB/s 00:41:48.192 04:36:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # stat 
-c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:41:48.192 04:36:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:41:48.192 04:36:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:41:48.192 04:36:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:41:48.192 04:36:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:41:48.192 04:36:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:41:48.192 04:36:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:41:48.192 04:36:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:41:48.192 04:36:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:41:48.192 { 00:41:48.192 "nbd_device": "/dev/nbd0", 00:41:48.192 "bdev_name": "crypto_ram" 00:41:48.192 }, 00:41:48.192 { 00:41:48.192 "nbd_device": "/dev/nbd1", 00:41:48.192 "bdev_name": "crypto_ram3" 00:41:48.192 } 00:41:48.192 ]' 00:41:48.192 04:36:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:41:48.192 04:36:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:41:48.192 { 00:41:48.192 "nbd_device": "/dev/nbd0", 00:41:48.192 "bdev_name": "crypto_ram" 00:41:48.192 }, 00:41:48.192 { 00:41:48.192 "nbd_device": "/dev/nbd1", 00:41:48.192 "bdev_name": "crypto_ram3" 00:41:48.192 } 00:41:48.192 ]' 00:41:48.192 04:36:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:41:48.451 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:41:48.451 04:36:57 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:48.451 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:41:48.451 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:41:48.451 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:41:48.451 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:48.451 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:41:48.711 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:41:48.711 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:41:48.711 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:41:48.711 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:48.711 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:48.711 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:41:48.711 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:41:48.711 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:41:48.711 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:48.711 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:41:48.712 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:41:48.987 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:41:48.987 04:36:57 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:41:48.987 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:48.987 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:48.987 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:41:48.987 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:41:48.987 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:41:48.987 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:41:48.987 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:48.987 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:41:48.987 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:41:48.987 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:41:48.987 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:41:49.256 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:41:49.256 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:41:49.256 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:41:49.256 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:41:49.256 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:41:49.256 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:41:49.256 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:41:49.256 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 
00:41:49.256 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:41:49.256 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' '/dev/nbd0 /dev/nbd1' 00:41:49.256 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:49.256 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:41:49.256 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:41:49.256 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:41:49.256 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:41:49.256 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' '/dev/nbd0 /dev/nbd1' 00:41:49.256 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:49.256 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:41:49.256 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:41:49.256 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:41:49.256 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:41:49.256 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:41:49.256 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:41:49.256 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:41:49.256 04:36:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0 
00:41:49.256 /dev/nbd0 00:41:49.256 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:41:49.256 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:41:49.256 04:36:58 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:41:49.256 04:36:58 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:41:49.256 04:36:58 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:41:49.256 04:36:58 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:41:49.515 04:36:58 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:41:49.515 04:36:58 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:41:49.515 04:36:58 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:41:49.515 04:36:58 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:41:49.515 04:36:58 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:49.515 1+0 records in 00:41:49.515 1+0 records out 00:41:49.515 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000202893 s, 20.2 MB/s 00:41:49.515 04:36:58 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:41:49.515 04:36:58 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:41:49.515 04:36:58 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:41:49.515 04:36:58 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:41:49.515 04:36:58 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # 
return 0 00:41:49.515 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:41:49.515 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:41:49.515 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 /dev/nbd1 00:41:49.515 /dev/nbd1 00:41:49.775 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:41:49.775 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:41:49.775 04:36:58 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:41:49.775 04:36:58 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:41:49.775 04:36:58 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:41:49.775 04:36:58 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:41:49.775 04:36:58 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:41:49.775 04:36:58 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:41:49.775 04:36:58 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:41:49.775 04:36:58 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:41:49.775 04:36:58 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:49.775 1+0 records in 00:41:49.775 1+0 records out 00:41:49.775 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260798 s, 15.7 MB/s 00:41:49.775 04:36:58 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:41:49.775 04:36:58 blockdev_crypto_sw.bdev_nbd -- 
common/autotest_common.sh@884 -- # size=4096 00:41:49.775 04:36:58 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:41:49.775 04:36:58 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:41:49.775 04:36:58 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:41:49.775 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:41:49.775 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:41:49.775 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:41:49.775 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:49.775 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:41:50.035 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:41:50.035 { 00:41:50.035 "nbd_device": "/dev/nbd0", 00:41:50.035 "bdev_name": "crypto_ram" 00:41:50.035 }, 00:41:50.035 { 00:41:50.035 "nbd_device": "/dev/nbd1", 00:41:50.035 "bdev_name": "crypto_ram3" 00:41:50.035 } 00:41:50.035 ]' 00:41:50.035 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:41:50.035 { 00:41:50.035 "nbd_device": "/dev/nbd0", 00:41:50.035 "bdev_name": "crypto_ram" 00:41:50.035 }, 00:41:50.035 { 00:41:50.035 "nbd_device": "/dev/nbd1", 00:41:50.035 "bdev_name": "crypto_ram3" 00:41:50.035 } 00:41:50.035 ]' 00:41:50.035 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:41:50.035 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:41:50.035 /dev/nbd1' 00:41:50.035 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo 
'/dev/nbd0 00:41:50.035 /dev/nbd1' 00:41:50.035 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:41:50.035 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=2 00:41:50.035 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 2 00:41:50.035 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=2 00:41:50.035 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:41:50.035 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:41:50.035 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:41:50.035 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:41:50.035 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:41:50.035 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:41:50.035 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:41:50.035 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:41:50.035 256+0 records in 00:41:50.035 256+0 records out 00:41:50.035 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101474 s, 103 MB/s 00:41:50.035 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:41:50.035 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:41:50.035 256+0 records in 00:41:50.035 256+0 records out 00:41:50.035 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0213897 s, 49.0 MB/s 00:41:50.035 04:36:58 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:41:50.035 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:41:50.035 256+0 records in 00:41:50.035 256+0 records out 00:41:50.035 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0373389 s, 28.1 MB/s 00:41:50.035 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:41:50.035 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:41:50.035 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:41:50.035 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:41:50.035 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:41:50.035 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:41:50.035 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:41:50.035 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:41:50.035 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:41:50.035 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:41:50.035 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:41:50.035 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:41:50.035 04:36:58 blockdev_crypto_sw.bdev_nbd -- 
bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:41:50.035 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:50.035 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:41:50.035 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:41:50.035 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:41:50.035 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:50.035 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:41:50.295 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:41:50.295 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:41:50.295 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:41:50.295 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:50.295 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:50.295 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:41:50.295 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:41:50.295 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:41:50.295 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:50.295 04:36:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:41:50.554 04:36:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:41:50.554 04:36:59 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:41:50.554 04:36:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:41:50.554 04:36:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:50.554 04:36:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:50.554 04:36:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:41:50.554 04:36:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:41:50.554 04:36:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:41:50.554 04:36:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:41:50.554 04:36:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:50.554 04:36:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:41:50.814 04:36:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:41:50.814 04:36:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:41:50.814 04:36:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:41:50.814 04:36:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:41:50.814 04:36:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:41:50.814 04:36:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:41:50.814 04:36:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:41:50.814 04:36:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:41:50.814 04:36:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:41:50.814 04:36:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 
00:41:50.814 04:36:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:41:50.814 04:36:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:41:50.814 04:36:59 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:41:50.814 04:36:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:50.814 04:36:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:41:50.814 04:36:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:41:50.814 04:36:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:41:50.814 04:36:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:41:51.073 malloc_lvol_verify 00:41:51.073 04:36:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:41:51.332 2e5f8c39-acfe-4b19-b3de-7c71c16ba6d9 00:41:51.332 04:36:59 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:41:51.591 2dc348bb-ea29-4fa9-81f8-bdbd6e11f74e 00:41:51.591 04:37:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:41:51.850 /dev/nbd0 00:41:51.850 04:37:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:41:51.850 mke2fs 1.46.5 (30-Dec-2021) 00:41:51.850 Discarding device blocks: 0/4096 done 00:41:51.850 Creating filesystem with 4096 1k 
blocks and 1024 inodes 00:41:51.850 00:41:51.850 Allocating group tables: 0/1 done 00:41:51.850 Writing inode tables: 0/1 done 00:41:51.850 Creating journal (1024 blocks): done 00:41:51.850 Writing superblocks and filesystem accounting information: 0/1 done 00:41:51.850 00:41:51.850 04:37:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:41:51.850 04:37:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:41:51.850 04:37:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:51.850 04:37:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:41:51.850 04:37:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:41:51.850 04:37:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:41:51.850 04:37:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:51.850 04:37:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:41:52.109 04:37:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:41:52.109 04:37:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:41:52.109 04:37:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:41:52.109 04:37:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:52.109 04:37:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:52.109 04:37:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:41:52.109 04:37:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:41:52.109 04:37:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:41:52.109 04:37:00 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:41:52.109 04:37:00 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:41:52.109 04:37:00 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 2913303 00:41:52.109 04:37:00 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 2913303 ']' 00:41:52.109 04:37:00 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 2913303 00:41:52.109 04:37:00 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:41:52.109 04:37:00 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:41:52.109 04:37:00 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2913303 00:41:52.109 04:37:00 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:41:52.109 04:37:00 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:41:52.109 04:37:00 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2913303' 00:41:52.109 killing process with pid 2913303 00:41:52.109 04:37:00 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@967 -- # kill 2913303 00:41:52.109 04:37:00 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@972 -- # wait 2913303 00:41:54.014 04:37:02 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:41:54.014 00:41:54.014 real 0m7.819s 00:41:54.014 user 0m10.041s 00:41:54.014 sys 0m2.448s 00:41:54.014 04:37:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:54.014 04:37:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:41:54.014 ************************************ 00:41:54.014 END TEST bdev_nbd 00:41:54.014 ************************************ 00:41:54.014 04:37:02 blockdev_crypto_sw -- 
common/autotest_common.sh@1142 -- # return 0 00:41:54.014 04:37:02 blockdev_crypto_sw -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:41:54.014 04:37:02 blockdev_crypto_sw -- bdev/blockdev.sh@763 -- # '[' crypto_sw = nvme ']' 00:41:54.014 04:37:02 blockdev_crypto_sw -- bdev/blockdev.sh@763 -- # '[' crypto_sw = gpt ']' 00:41:54.014 04:37:02 blockdev_crypto_sw -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:41:54.014 04:37:02 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:41:54.014 04:37:02 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:54.014 04:37:02 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:41:54.014 ************************************ 00:41:54.014 START TEST bdev_fio 00:41:54.014 ************************************ 00:41:54.014 04:37:02 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:41:54.014 04:37:02 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:41:54.014 04:37:02 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:41:54.014 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:41:54.014 04:37:02 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:41:54.014 04:37:02 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:41:54.014 04:37:02 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:41:54.014 04:37:02 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:41:54.014 04:37:02 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:41:54.014 04:37:02 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1280 -- # local 
config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:41:54.014 04:37:02 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:41:54.014 04:37:02 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:41:54.014 04:37:02 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:41:54.014 04:37:02 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:41:54.014 04:37:02 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:41:54.014 04:37:02 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:41:54.014 04:37:02 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:41:54.014 04:37:02 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:41:54.014 04:37:02 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:41:54.014 04:37:02 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:41:54.014 04:37:02 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:41:54.014 04:37:02 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:41:54.015 04:37:02 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:41:54.274 04:37:02 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:41:54.274 04:37:02 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:41:54.274 04:37:02 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:41:54.274 04:37:02 blockdev_crypto_sw.bdev_fio -- 
bdev/blockdev.sh@341 -- # echo '[job_crypto_ram]' 00:41:54.274 04:37:02 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram 00:41:54.274 04:37:02 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:41:54.274 04:37:02 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram3]' 00:41:54.274 04:37:02 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram3 00:41:54.274 04:37:02 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:41:54.274 04:37:02 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:41:54.274 04:37:02 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:41:54.274 04:37:02 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:54.274 04:37:02 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:41:54.274 ************************************ 00:41:54.274 START TEST bdev_fio_rw_verify 00:41:54.274 ************************************ 00:41:54.274 04:37:02 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:41:54.274 04:37:02 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:41:54.274 04:37:02 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:54.274 04:37:02 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:54.274 04:37:02 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:54.274 04:37:02 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:41:54.274 04:37:02 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:41:54.274 04:37:02 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:54.274 04:37:02 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:54.274 04:37:02 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:41:54.274 04:37:02 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:41:54.274 04:37:02 
blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:54.274 04:37:02 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:41:54.274 04:37:02 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:41:54.274 04:37:02 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:41:54.274 04:37:02 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:54.274 04:37:02 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:41:54.841 job_crypto_ram: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:41:54.841 job_crypto_ram3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:41:54.841 fio-3.35 00:41:54.841 Starting 2 threads 00:41:54.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.841 EAL: Requested device 0000:3d:01.0 cannot be used 00:41:54.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.841 EAL: Requested device 0000:3d:01.1 cannot be used 00:41:54.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.841 EAL: Requested device 0000:3d:01.2 cannot be used 00:41:54.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.841 EAL: Requested 
device 0000:3d:01.3 cannot be used 00:41:54.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.841 EAL: Requested device 0000:3d:01.4 cannot be used 00:41:54.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.841 EAL: Requested device 0000:3d:01.5 cannot be used 00:41:54.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.841 EAL: Requested device 0000:3d:01.6 cannot be used 00:41:54.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.841 EAL: Requested device 0000:3d:01.7 cannot be used 00:41:54.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.841 EAL: Requested device 0000:3d:02.0 cannot be used 00:41:54.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.841 EAL: Requested device 0000:3d:02.1 cannot be used 00:41:54.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.841 EAL: Requested device 0000:3d:02.2 cannot be used 00:41:54.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.841 EAL: Requested device 0000:3d:02.3 cannot be used 00:41:54.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.841 EAL: Requested device 0000:3d:02.4 cannot be used 00:41:54.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.841 EAL: Requested device 0000:3d:02.5 cannot be used 00:41:54.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.841 EAL: Requested device 0000:3d:02.6 cannot be used 00:41:54.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.841 EAL: Requested device 0000:3d:02.7 cannot be used 00:41:54.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.841 EAL: Requested device 0000:3f:01.0 cannot be used 00:41:54.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.841 EAL: Requested device 0000:3f:01.1 
cannot be used 00:41:54.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.841 EAL: Requested device 0000:3f:01.2 cannot be used 00:41:54.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.841 EAL: Requested device 0000:3f:01.3 cannot be used 00:41:54.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.841 EAL: Requested device 0000:3f:01.4 cannot be used 00:41:54.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.841 EAL: Requested device 0000:3f:01.5 cannot be used 00:41:54.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.841 EAL: Requested device 0000:3f:01.6 cannot be used 00:41:54.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.841 EAL: Requested device 0000:3f:01.7 cannot be used 00:41:54.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.841 EAL: Requested device 0000:3f:02.0 cannot be used 00:41:54.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.841 EAL: Requested device 0000:3f:02.1 cannot be used 00:41:54.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.841 EAL: Requested device 0000:3f:02.2 cannot be used 00:41:54.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.841 EAL: Requested device 0000:3f:02.3 cannot be used 00:41:54.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.841 EAL: Requested device 0000:3f:02.4 cannot be used 00:41:54.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.841 EAL: Requested device 0000:3f:02.5 cannot be used 00:41:54.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.841 EAL: Requested device 0000:3f:02.6 cannot be used 00:41:54.841 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.841 EAL: Requested device 0000:3f:02.7 cannot be used 
00:42:07.050
00:42:07.050 job_crypto_ram: (groupid=0, jobs=2): err= 0: pid=2915135: Tue Jul 23 04:37:14 2024
00:42:07.050 read: IOPS=21.9k, BW=85.6MiB/s (89.8MB/s)(856MiB/10001msec)
00:42:07.050 slat (usec): min=14, max=126, avg=20.37, stdev= 3.95
00:42:07.050 clat (usec): min=7, max=507, avg=145.73, stdev=59.00
00:42:07.050 lat (usec): min=28, max=552, avg=166.10, stdev=60.55
00:42:07.050 clat percentiles (usec):
00:42:07.050 | 50.000th=[ 143], 99.000th=[ 281], 99.900th=[ 330], 99.990th=[ 416],
00:42:07.050 | 99.999th=[ 482]
00:42:07.050 write: IOPS=26.4k, BW=103MiB/s (108MB/s)(979MiB/9490msec); 0 zone resets
00:42:07.050 slat (usec): min=14, max=241, avg=33.92, stdev= 5.56
00:42:07.050 clat (usec): min=26, max=1073, avg=194.85, stdev=90.73
00:42:07.050 lat (usec): min=52, max=1255, avg=228.77, stdev=92.66
00:42:07.050 clat percentiles (usec):
00:42:07.050 | 50.000th=[ 190], 99.000th=[ 392], 99.900th=[ 474], 99.990th=[ 742],
00:42:07.050 | 99.999th=[ 922]
00:42:07.050 bw ( KiB/s): min=93680, max=106976, per=95.37%, avg=100757.16, stdev=2065.83, samples=38
00:42:07.050 iops : min=23420, max=26744, avg=25189.26, stdev=516.45, samples=38
00:42:07.050 lat (usec) : 10=0.01%, 20=0.01%, 50=4.99%, 100=14.90%, 250=63.00%
00:42:07.050 lat (usec) : 500=17.06%, 750=0.03%, 1000=0.01%
00:42:07.050 lat (msec) : 2=0.01%
00:42:07.050 cpu : usr=99.26%, sys=0.32%, ctx=34, majf=0, minf=19262
00:42:07.050 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0%
00:42:07.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:42:07.050 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:42:07.050 issued rwts: total=219168,250653,0,0 short=0,0,0,0 dropped=0,0,0,0
00:42:07.050 latency : target=0, window=0, percentile=100.00%, depth=8
00:42:07.050
00:42:07.050 Run status group 0 (all jobs):
00:42:07.050 READ: bw=85.6MiB/s (89.8MB/s), 85.6MiB/s-85.6MiB/s (89.8MB/s-89.8MB/s), io=856MiB (898MB), run=10001-10001msec
00:42:07.050 WRITE: bw=103MiB/s (108MB/s), 103MiB/s-103MiB/s (108MB/s-108MB/s), io=979MiB (1027MB), run=9490-9490msec 00:42:07.619 ----------------------------------------------------- 00:42:07.619 Suppressions used: 00:42:07.619 count bytes template 00:42:07.619 2 23 /usr/src/fio/parse.c 00:42:07.619 1838 176448 /usr/src/fio/iolog.c 00:42:07.619 1 8 libtcmalloc_minimal.so 00:42:07.619 1 904 libcrypto.so 00:42:07.619 ----------------------------------------------------- 00:42:07.619 00:42:07.619 00:42:07.619 real 0m13.449s 00:42:07.619 user 0m34.801s 00:42:07.619 sys 0m0.744s 00:42:07.619 04:37:16 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:07.619 04:37:16 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:42:07.619 ************************************ 00:42:07.619 END TEST bdev_fio_rw_verify 00:42:07.619 ************************************ 00:42:07.619 04:37:16 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:42:07.619 04:37:16 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:42:07.619 04:37:16 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:42:07.619 04:37:16 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:42:07.619 04:37:16 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:42:07.619 04:37:16 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:42:07.619 04:37:16 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:42:07.619 04:37:16 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:42:07.619 04:37:16 
blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:42:07.619 04:37:16 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:42:07.619 04:37:16 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:42:07.619 04:37:16 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:42:07.619 04:37:16 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:42:07.619 04:37:16 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:42:07.619 04:37:16 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:42:07.619 04:37:16 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:42:07.619 04:37:16 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:42:07.619 04:37:16 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "7ac526b6-b85b-52a1-93bb-286706c1365f"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "7ac526b6-b85b-52a1-93bb-286706c1365f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' 
"nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_sw"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "2000d77e-677d-5704-b6d9-d40909ea1ba1"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 4096,' ' "uuid": "2000d77e-677d-5704-b6d9-d40909ea1ba1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "crypto_ram2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_sw3"' ' }' ' }' '}' 00:42:07.619 04:37:16 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:42:07.879 04:37:16 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n crypto_ram 00:42:07.879 crypto_ram3 ]] 00:42:07.879 04:37:16 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:42:07.879 04:37:16 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "7ac526b6-b85b-52a1-93bb-286706c1365f"' ' ],' ' 
"product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "7ac526b6-b85b-52a1-93bb-286706c1365f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_sw"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "2000d77e-677d-5704-b6d9-d40909ea1ba1"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 4096,' ' "uuid": "2000d77e-677d-5704-b6d9-d40909ea1ba1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' 
],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "crypto_ram2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_sw3"' ' }' ' }' '}' 00:42:07.879 04:37:16 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:42:07.879 04:37:16 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram]' 00:42:07.879 04:37:16 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram 00:42:07.879 04:37:16 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:42:07.879 04:37:16 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram3]' 00:42:07.879 04:37:16 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram3 00:42:07.879 04:37:16 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:42:07.879 04:37:16 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:42:07.879 04:37:16 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:07.879 04:37:16 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:42:07.879 ************************************ 00:42:07.879 START TEST bdev_fio_trim 00:42:07.879 ************************************ 00:42:07.879 04:37:16 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:42:07.879 04:37:16 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:42:07.879 04:37:16 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:42:07.879 04:37:16 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:07.879 04:37:16 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:42:07.879 04:37:16 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:42:07.879 04:37:16 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:42:07.879 04:37:16 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:42:07.879 04:37:16 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:42:07.879 04:37:16 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:42:07.879 04:37:16 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep 
libasan 00:42:07.879 04:37:16 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:42:07.879 04:37:16 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:42:07.879 04:37:16 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:42:07.879 04:37:16 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1347 -- # break 00:42:07.879 04:37:16 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:07.879 04:37:16 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:42:08.470 job_crypto_ram: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:42:08.470 job_crypto_ram3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:42:08.470 fio-3.35 00:42:08.470 Starting 2 threads 00:42:08.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:08.470 EAL: Requested device 0000:3d:01.0 cannot be used 00:42:08.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:08.470 EAL: Requested device 0000:3d:01.1 cannot be used 00:42:08.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:08.470 EAL: Requested device 0000:3d:01.2 cannot be used 00:42:08.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:08.470 EAL: 
Requested device 0000:3d:01.3 cannot be used 00:42:08.470 [qat_pci_device_allocate() "Reached maximum number of QAT devices" / EAL "cannot be used" message pair repeated at 00:42:08.470-00:42:08.471 for each device 0000:3d:01.4 through 0000:3f:02.6] 00:42:08.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:08.471 EAL: Requested device 0000:3f:02.7 cannot be
used
00:42:20.683
00:42:20.683 job_crypto_ram: (groupid=0, jobs=2): err= 0: pid=2917397: Tue Jul 23 04:37:28 2024
00:42:20.683 write: IOPS=39.8k, BW=155MiB/s (163MB/s)(1555MiB/10001msec); 0 zone resets
00:42:20.683 slat (usec): min=14, max=287, avg=22.03, stdev= 4.18
00:42:20.683 clat (usec): min=37, max=626, avg=164.54, stdev=92.59
00:42:20.683 lat (usec): min=51, max=694, avg=186.58, stdev=95.81
00:42:20.683 clat percentiles (usec):
00:42:20.683 | 50.000th=[ 133], 99.000th=[ 338], 99.900th=[ 359], 99.990th=[ 457],
00:42:20.683 | 99.999th=[ 594]
00:42:20.683 bw ( KiB/s): min=155688, max=160248, per=100.00%, avg=159403.37, stdev=511.49, samples=38
00:42:20.683 iops : min=38922, max=40062, avg=39850.84, stdev=127.87, samples=38
00:42:20.683 trim: IOPS=39.8k, BW=155MiB/s (163MB/s)(1555MiB/10001msec); 0 zone resets
00:42:20.683 slat (nsec): min=5793, max=55679, avg=10176.55, stdev=2325.16
00:42:20.683 clat (usec): min=44, max=471, avg=109.45, stdev=33.12
00:42:20.683 lat (usec): min=52, max=521, avg=119.63, stdev=33.26
00:42:20.683 clat percentiles (usec):
00:42:20.683 | 50.000th=[ 110], 99.000th=[ 178], 99.900th=[ 188], 99.990th=[ 249],
00:42:20.683 | 99.999th=[ 412]
00:42:20.683 bw ( KiB/s): min=155712, max=160248, per=100.00%, avg=159405.05, stdev=510.14, samples=38
00:42:20.683 iops : min=38928, max=40062, avg=39851.26, stdev=127.54, samples=38
00:42:20.683 lat (usec) : 50=3.50%, 100=34.89%, 250=47.91%, 500=13.69%, 750=0.01%
00:42:20.683 cpu : usr=99.58%, sys=0.04%, ctx=32, majf=0, minf=2107
00:42:20.683 IO depths : 1=7.5%, 2=17.4%, 4=60.1%, 8=15.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:42:20.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:42:20.684 complete : 0=0.0%, 4=86.9%, 8=13.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:42:20.684 issued rwts: total=0,398098,398098,0 short=0,0,0,0 dropped=0,0,0,0
00:42:20.684 latency : target=0, window=0, percentile=100.00%, depth=8
00:42:20.684
00:42:20.684 Run status group 0 (all jobs):
00:42:20.684
WRITE: bw=155MiB/s (163MB/s), 155MiB/s-155MiB/s (163MB/s-163MB/s), io=1555MiB (1631MB), run=10001-10001msec 00:42:20.684 TRIM: bw=155MiB/s (163MB/s), 155MiB/s-155MiB/s (163MB/s-163MB/s), io=1555MiB (1631MB), run=10001-10001msec 00:42:21.249 ----------------------------------------------------- 00:42:21.249 Suppressions used: 00:42:21.249 count bytes template 00:42:21.249 2 23 /usr/src/fio/parse.c 00:42:21.249 1 8 libtcmalloc_minimal.so 00:42:21.249 1 904 libcrypto.so 00:42:21.249 ----------------------------------------------------- 00:42:21.249 00:42:21.249 00:42:21.249 real 0m13.480s 00:42:21.249 user 0m34.658s 00:42:21.249 sys 0m0.678s 00:42:21.249 04:37:29 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:21.249 04:37:29 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:42:21.249 ************************************ 00:42:21.249 END TEST bdev_fio_trim 00:42:21.249 ************************************ 00:42:21.249 04:37:30 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:42:21.249 04:37:30 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f 00:42:21.249 04:37:30 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:42:21.505 04:37:30 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@369 -- # popd 00:42:21.505 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:42:21.505 04:37:30 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:42:21.505 00:42:21.505 real 0m27.298s 00:42:21.505 user 1m9.651s 00:42:21.505 sys 0m1.620s 00:42:21.505 04:37:30 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:21.505 04:37:30 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:42:21.505 ************************************ 00:42:21.505 END TEST bdev_fio 00:42:21.505 
************************************ 00:42:21.505 04:37:30 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0 00:42:21.505 04:37:30 blockdev_crypto_sw -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:42:21.505 04:37:30 blockdev_crypto_sw -- bdev/blockdev.sh@776 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:42:21.505 04:37:30 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:42:21.505 04:37:30 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:21.505 04:37:30 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:42:21.505 ************************************ 00:42:21.505 START TEST bdev_verify 00:42:21.505 ************************************ 00:42:21.505 04:37:30 blockdev_crypto_sw.bdev_verify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:42:21.762 [2024-07-23 04:37:30.327963] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:42:21.762 [2024-07-23 04:37:30.328230] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2919486 ] 00:42:22.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:22.019 EAL: Requested device 0000:3d:01.0 cannot be used 00:42:22.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:22.019 EAL: Requested device 0000:3d:01.1 cannot be used 00:42:22.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:22.019 EAL: Requested device 0000:3d:01.2 cannot be used 00:42:22.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:22.019 EAL: Requested device 0000:3d:01.3 cannot be used 00:42:22.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:22.019 EAL: Requested device 0000:3d:01.4 cannot be used 00:42:22.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:22.019 EAL: Requested device 0000:3d:01.5 cannot be used 00:42:22.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:22.019 EAL: Requested device 0000:3d:01.6 cannot be used 00:42:22.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:22.019 EAL: Requested device 0000:3d:01.7 cannot be used 00:42:22.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:22.019 EAL: Requested device 0000:3d:02.0 cannot be used 00:42:22.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:22.019 EAL: Requested device 0000:3d:02.1 cannot be used 00:42:22.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:22.019 EAL: Requested device 0000:3d:02.2 cannot be used 00:42:22.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:22.019 EAL: Requested device 0000:3d:02.3 cannot be used 
00:42:22.019 [qat_pci_device_allocate() "Reached maximum number of QAT devices" / EAL "cannot be used" message pair repeated at 00:42:22.019-00:42:22.020 for each remaining device 0000:3d:02.4 through 0000:3f:02.7] 00:42:22.020 [2024-07-23 04:37:30.688817] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:42:22.277 [2024-07-23 04:37:30.957292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:22.277 [2024-07-23 04:37:30.957297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:42:22.840 [2024-07-23 04:37:31.522691] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:42:22.840 [2024-07-23 04:37:31.522774] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:42:22.840 [2024-07-23 04:37:31.522794] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:42:22.840 [2024-07-23 04:37:31.530708] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:42:22.840 [2024-07-23 04:37:31.530745] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:42:22.840 [2024-07-23 04:37:31.530762] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:42:22.840 [2024-07-23 04:37:31.538737] vbdev_crypto_rpc.c:
115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3"
00:42:22.840 [2024-07-23 04:37:31.538771] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2
00:42:22.840 [2024-07-23 04:37:31.538786] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:42:23.097 Running I/O for 5 seconds...
00:42:28.354
00:42:28.354 Latency(us)
00:42:28.354 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:42:28.354 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:42:28.354 Verification LBA range: start 0x0 length 0x800
00:42:28.354 crypto_ram : 5.02 6319.66 24.69 0.00 0.00 20178.30 1795.69 24222.11
00:42:28.354 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:42:28.354 Verification LBA range: start 0x800 length 0x800
00:42:28.354 crypto_ram : 5.02 6351.04 24.81 0.00 0.00 20080.55 1848.12 24117.25
00:42:28.354 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:42:28.354 Verification LBA range: start 0x0 length 0x800
00:42:28.354 crypto_ram3 : 5.03 3158.09 12.34 0.00 0.00 40315.45 8021.61 29150.41
00:42:28.354 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:42:28.354 Verification LBA range: start 0x800 length 0x800
00:42:28.354 crypto_ram3 : 5.03 3183.41 12.44 0.00 0.00 39988.30 2346.19 29150.41
00:42:28.354 ===================================================================================================================
00:42:28.354 Total : 19012.19 74.27 0.00 0.00 26812.24 1795.69 29150.41
00:42:29.724
00:42:29.725
00:42:29.725 real 0m8.354s
00:42:29.725 user 0m14.704s
00:42:29.725 sys 0m0.509s
00:42:29.725 04:37:38 blockdev_crypto_sw.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:42:29.725 04:37:38 blockdev_crypto_sw.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:42:29.725 ************************************
00:42:29.725 END
TEST bdev_verify 00:42:29.725 ************************************ 00:42:29.981 04:37:38 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0 00:42:29.981 04:37:38 blockdev_crypto_sw -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:42:29.981 04:37:38 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:42:29.981 04:37:38 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:29.981 04:37:38 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:42:29.981 ************************************ 00:42:29.981 START TEST bdev_verify_big_io 00:42:29.981 ************************************ 00:42:29.981 04:37:38 blockdev_crypto_sw.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:42:29.981 [2024-07-23 04:37:38.665246] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:42:29.981 [2024-07-23 04:37:38.665359] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2920841 ] 00:42:30.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.239 EAL: Requested device 0000:3d:01.0 cannot be used 00:42:30.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.239 EAL: Requested device 0000:3d:01.1 cannot be used 00:42:30.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.239 EAL: Requested device 0000:3d:01.2 cannot be used 00:42:30.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.239 EAL: Requested device 0000:3d:01.3 cannot be used 00:42:30.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.239 EAL: Requested device 0000:3d:01.4 cannot be used 00:42:30.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.239 EAL: Requested device 0000:3d:01.5 cannot be used 00:42:30.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.239 EAL: Requested device 0000:3d:01.6 cannot be used 00:42:30.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.239 EAL: Requested device 0000:3d:01.7 cannot be used 00:42:30.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.239 EAL: Requested device 0000:3d:02.0 cannot be used 00:42:30.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.239 EAL: Requested device 0000:3d:02.1 cannot be used 00:42:30.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.239 EAL: Requested device 0000:3d:02.2 cannot be used 00:42:30.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.239 EAL: Requested device 0000:3d:02.3 cannot be used 
00:42:30.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.239 EAL: Requested device 0000:3d:02.4 cannot be used 00:42:30.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.239 EAL: Requested device 0000:3d:02.5 cannot be used 00:42:30.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.239 EAL: Requested device 0000:3d:02.6 cannot be used 00:42:30.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.239 EAL: Requested device 0000:3d:02.7 cannot be used 00:42:30.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.239 EAL: Requested device 0000:3f:01.0 cannot be used 00:42:30.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.239 EAL: Requested device 0000:3f:01.1 cannot be used 00:42:30.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.239 EAL: Requested device 0000:3f:01.2 cannot be used 00:42:30.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.239 EAL: Requested device 0000:3f:01.3 cannot be used 00:42:30.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.239 EAL: Requested device 0000:3f:01.4 cannot be used 00:42:30.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.239 EAL: Requested device 0000:3f:01.5 cannot be used 00:42:30.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.239 EAL: Requested device 0000:3f:01.6 cannot be used 00:42:30.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.239 EAL: Requested device 0000:3f:01.7 cannot be used 00:42:30.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.239 EAL: Requested device 0000:3f:02.0 cannot be used 00:42:30.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.239 EAL: Requested device 0000:3f:02.1 cannot be used 00:42:30.239 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.239 EAL: Requested device 0000:3f:02.2 cannot be used 00:42:30.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.239 EAL: Requested device 0000:3f:02.3 cannot be used 00:42:30.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.239 EAL: Requested device 0000:3f:02.4 cannot be used 00:42:30.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.239 EAL: Requested device 0000:3f:02.5 cannot be used 00:42:30.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.239 EAL: Requested device 0000:3f:02.6 cannot be used 00:42:30.239 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.239 EAL: Requested device 0000:3f:02.7 cannot be used 00:42:30.239 [2024-07-23 04:37:38.890833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:42:30.497 [2024-07-23 04:37:39.167955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:30.497 [2024-07-23 04:37:39.167961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:42:31.061 [2024-07-23 04:37:39.750739] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:42:31.061 [2024-07-23 04:37:39.750815] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:42:31.061 [2024-07-23 04:37:39.750834] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:42:31.061 [2024-07-23 04:37:39.758753] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:42:31.061 [2024-07-23 04:37:39.758791] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:42:31.061 [2024-07-23 04:37:39.758807] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:42:31.061 [2024-07-23 04:37:39.766783] vbdev_crypto_rpc.c: 
115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3"
00:42:31.061 [2024-07-23 04:37:39.766816] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2
00:42:31.061 [2024-07-23 04:37:39.766830] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:42:31.318 Running I/O for 5 seconds...
00:42:36.572 
00:42:36.572 Latency(us)
00:42:36.572 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:42:36.572 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:42:36.572 Verification LBA range: start 0x0 length 0x80
00:42:36.572 crypto_ram : 5.03 483.13 30.20 0.00 0.00 259019.01 6710.89 348966.09
00:42:36.572 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:42:36.572 Verification LBA range: start 0x80 length 0x80
00:42:36.572 crypto_ram : 5.05 481.40 30.09 0.00 0.00 259801.84 6815.74 352321.54
00:42:36.572 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:42:36.572 Verification LBA range: start 0x0 length 0x80
00:42:36.572 crypto_ram3 : 5.21 270.36 16.90 0.00 0.00 446333.83 5845.81 364065.59
00:42:36.572 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:42:36.572 Verification LBA range: start 0x80 length 0x80
00:42:36.572 crypto_ram3 : 5.22 269.52 16.85 0.00 0.00 447425.18 5819.60 365743.31
00:42:36.572 ===================================================================================================================
00:42:36.572 Total : 1504.42 94.03 0.00 0.00 328149.09 5819.60 365743.31
00:42:38.469 
00:42:38.469 real 0m8.485s
00:42:38.469 user 0m15.321s
00:42:38.469 sys 0m0.392s
00:42:38.469 04:37:47 blockdev_crypto_sw.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable
00:42:38.469 04:37:47 blockdev_crypto_sw.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:42:38.469 ************************************
00:42:38.469 END TEST bdev_verify_big_io
00:42:38.469 ************************************
00:42:38.469 04:37:47 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0
00:42:38.469 04:37:47 blockdev_crypto_sw -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:42:38.469 04:37:47 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:42:38.469 04:37:47 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable
00:42:38.469 04:37:47 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x
00:42:38.469 ************************************
00:42:38.469 START TEST bdev_write_zeroes
00:42:38.469 ************************************
00:42:38.469 04:37:47 blockdev_crypto_sw.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
[2024-07-23 04:37:47.231429] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
00:42:38.469 [2024-07-23 04:37:47.231547] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2922171 ]
00:42:38.727 [2024-07-23 04:37:47.456920] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:42:38.986 [2024-07-23 04:37:47.735997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:42:39.611 [2024-07-23 04:37:48.324978] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw"
00:42:39.611 [2024-07-23 04:37:48.325059] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:42:39.611 [2024-07-23 04:37:48.325079] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:42:39.611 [2024-07-23 04:37:48.332991] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2"
00:42:39.611 [2024-07-23 04:37:48.333030] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:42:39.611 [2024-07-23 04:37:48.333047] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:42:39.611 [2024-07-23 04:37:48.341015] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3"
00:42:39.611 [2024-07-23 04:37:48.341050]
bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2
00:42:39.611 [2024-07-23 04:37:48.341065] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:42:39.868 Running I/O for 1 seconds...
00:42:40.800 
00:42:40.800 Latency(us)
00:42:40.800 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:42:40.800 Job: crypto_ram (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:42:40.800 crypto_ram : 1.01 26734.22 104.43 0.00 0.00 4775.35 1291.06 6658.46
00:42:40.800 Job: crypto_ram3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:42:40.800 crypto_ram3 : 1.01 13411.34 52.39 0.00 0.00 9477.02 1671.17 9909.04
00:42:40.800 ===================================================================================================================
00:42:40.800 Total : 40145.56 156.82 0.00 0.00 6352.50 1291.06 9909.04
00:42:42.697 
00:42:42.697 real 0m4.149s
00:42:42.697 user 0m3.743s
00:42:42.697 sys 0m0.366s
00:42:42.697 04:37:51 blockdev_crypto_sw.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable
00:42:42.697 04:37:51 blockdev_crypto_sw.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:42:42.697 ************************************
00:42:42.697 END TEST bdev_write_zeroes
00:42:42.697 ************************************
00:42:42.697 04:37:51 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0
00:42:42.697 04:37:51 blockdev_crypto_sw -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:42:42.697 04:37:51 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:42:42.697 04:37:51 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable
00:42:42.697 04:37:51
blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x
00:42:42.697 ************************************
00:42:42.697 START TEST bdev_json_nonenclosed
00:42:42.697 ************************************
00:42:42.697 04:37:51 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
[2024-07-23 04:37:51.451210] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
00:42:42.698 [2024-07-23 04:37:51.451328] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2922787 ]
00:42:42.956 [2024-07-23 04:37:51.678522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:42:43.213 [2024-07-23 04:37:51.946584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:42:43.213 [2024-07-23 04:37:51.946669] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:42:43.213 [2024-07-23 04:37:51.946697] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:42:43.213 [2024-07-23 04:37:51.946712] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:42:43.783 00:42:43.783 real 0m1.157s 00:42:43.783 user 0m0.890s 00:42:43.783 sys 0m0.261s 00:42:43.783 04:37:52 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:42:43.783 04:37:52 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:43.783 04:37:52 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:42:43.783 ************************************ 00:42:43.783 END TEST bdev_json_nonenclosed 00:42:43.783 ************************************ 00:42:43.783 04:37:52 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 234 00:42:43.783 04:37:52 blockdev_crypto_sw -- bdev/blockdev.sh@781 -- # true 00:42:43.783 04:37:52 blockdev_crypto_sw -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:42:43.783 04:37:52 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:42:43.783 04:37:52 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:43.783 04:37:52 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:42:44.040 ************************************ 00:42:44.040 START TEST bdev_json_nonarray 00:42:44.040 ************************************ 00:42:44.040 04:37:52 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:42:44.040 [2024-07-23 
04:37:52.687247] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
00:42:44.040 [2024-07-23 04:37:52.687364] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2923005 ]
00:42:44.297 [2024-07-23 04:37:52.901523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:42:44.554 [2024-07-23 04:37:53.172657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:42:44.554 [2024-07-23 04:37:53.172754] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:42:44.554 [2024-07-23 04:37:53.172782] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:42:44.554 [2024-07-23 04:37:53.172797] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:42:45.119 00:42:45.119 real 0m1.151s 00:42:45.119 user 0m0.890s 00:42:45.119 sys 0m0.255s 00:42:45.119 04:37:53 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:42:45.119 04:37:53 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:45.119 04:37:53 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:42:45.119 ************************************ 00:42:45.119 END TEST bdev_json_nonarray 00:42:45.119 ************************************ 00:42:45.119 04:37:53 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 234 00:42:45.119 04:37:53 blockdev_crypto_sw -- bdev/blockdev.sh@784 -- # true 00:42:45.119 04:37:53 blockdev_crypto_sw -- bdev/blockdev.sh@786 -- # [[ crypto_sw == bdev ]] 00:42:45.119 04:37:53 blockdev_crypto_sw -- bdev/blockdev.sh@793 -- # [[ crypto_sw == gpt ]] 00:42:45.119 04:37:53 blockdev_crypto_sw -- bdev/blockdev.sh@797 -- # [[ crypto_sw == crypto_sw ]] 00:42:45.119 04:37:53 blockdev_crypto_sw -- bdev/blockdev.sh@798 -- # run_test bdev_crypto_enomem bdev_crypto_enomem 00:42:45.119 04:37:53 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:45.119 04:37:53 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:45.119 04:37:53 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:42:45.119 ************************************ 00:42:45.119 START TEST bdev_crypto_enomem 00:42:45.119 ************************************ 00:42:45.119 04:37:53 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@1123 -- # bdev_crypto_enomem 00:42:45.119 04:37:53 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@634 -- # local 
base_dev=base0 00:42:45.119 04:37:53 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@635 -- # local test_dev=crypt0 00:42:45.119 04:37:53 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@636 -- # local err_dev=EE_base0 00:42:45.119 04:37:53 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@637 -- # local qd=32 00:42:45.119 04:37:53 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@640 -- # ERR_PID=2923285 00:42:45.119 04:37:53 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@641 -- # trap 'cleanup; killprocess $ERR_PID; exit 1' SIGINT SIGTERM EXIT 00:42:45.119 04:37:53 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@642 -- # waitforlisten 2923285 00:42:45.119 04:37:53 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 32 -o 4096 -w randwrite -t 5 -f '' 00:42:45.119 04:37:53 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@829 -- # '[' -z 2923285 ']' 00:42:45.119 04:37:53 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:45.119 04:37:53 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@834 -- # local max_retries=100 00:42:45.119 04:37:53 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:45.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:45.119 04:37:53 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@838 -- # xtrace_disable 00:42:45.119 04:37:53 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:42:45.377 [2024-07-23 04:37:53.931127] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:42:45.377 [2024-07-23 04:37:53.931263] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2923285 ] 00:42:45.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:45.377 EAL: Requested device 0000:3d:01.0 cannot be used 00:42:45.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:45.377 EAL: Requested device 0000:3d:01.1 cannot be used 00:42:45.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:45.377 EAL: Requested device 0000:3d:01.2 cannot be used 00:42:45.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:45.377 EAL: Requested device 0000:3d:01.3 cannot be used 00:42:45.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:45.377 EAL: Requested device 0000:3d:01.4 cannot be used 00:42:45.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:45.377 EAL: Requested device 0000:3d:01.5 cannot be used 00:42:45.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:45.377 EAL: Requested device 0000:3d:01.6 cannot be used 00:42:45.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:45.377 EAL: Requested device 0000:3d:01.7 cannot be used 00:42:45.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:45.377 EAL: Requested device 0000:3d:02.0 cannot be used 00:42:45.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:45.377 EAL: Requested device 0000:3d:02.1 cannot be used 00:42:45.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:45.377 EAL: Requested device 0000:3d:02.2 cannot be used 00:42:45.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:45.377 EAL: Requested device 0000:3d:02.3 cannot be used 
00:42:45.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:45.377 EAL: Requested device 0000:3d:02.4 cannot be used 00:42:45.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:45.377 EAL: Requested device 0000:3d:02.5 cannot be used 00:42:45.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:45.377 EAL: Requested device 0000:3d:02.6 cannot be used 00:42:45.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:45.377 EAL: Requested device 0000:3d:02.7 cannot be used 00:42:45.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:45.377 EAL: Requested device 0000:3f:01.0 cannot be used 00:42:45.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:45.377 EAL: Requested device 0000:3f:01.1 cannot be used 00:42:45.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:45.377 EAL: Requested device 0000:3f:01.2 cannot be used 00:42:45.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:45.377 EAL: Requested device 0000:3f:01.3 cannot be used 00:42:45.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:45.377 EAL: Requested device 0000:3f:01.4 cannot be used 00:42:45.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:45.377 EAL: Requested device 0000:3f:01.5 cannot be used 00:42:45.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:45.377 EAL: Requested device 0000:3f:01.6 cannot be used 00:42:45.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:45.377 EAL: Requested device 0000:3f:01.7 cannot be used 00:42:45.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:45.377 EAL: Requested device 0000:3f:02.0 cannot be used 00:42:45.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:45.377 EAL: Requested device 0000:3f:02.1 cannot be used 00:42:45.377 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:45.377 EAL: Requested device 0000:3f:02.2 cannot be used 00:42:45.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:45.377 EAL: Requested device 0000:3f:02.3 cannot be used 00:42:45.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:45.377 EAL: Requested device 0000:3f:02.4 cannot be used 00:42:45.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:45.377 EAL: Requested device 0000:3f:02.5 cannot be used 00:42:45.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:45.377 EAL: Requested device 0000:3f:02.6 cannot be used 00:42:45.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:45.377 EAL: Requested device 0000:3f:02.7 cannot be used 00:42:45.377 [2024-07-23 04:37:54.146207] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:45.942 [2024-07-23 04:37:54.427740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:42:46.199 04:37:54 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:42:46.199 04:37:54 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@862 -- # return 0 00:42:46.199 04:37:54 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@644 -- # rpc_cmd 00:42:46.199 04:37:54 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:46.199 04:37:54 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:42:46.199 true 00:42:46.199 base0 00:42:46.199 true 00:42:46.199 [2024-07-23 04:37:54.967876] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:42:46.199 crypt0 00:42:46.199 04:37:54 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:46.199 04:37:54 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@651 -- # waitforbdev crypt0 
00:42:46.199 04:37:54 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@897 -- # local bdev_name=crypt0 00:42:46.199 04:37:54 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:42:46.199 04:37:54 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@899 -- # local i 00:42:46.199 04:37:54 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:42:46.199 04:37:54 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:42:46.199 04:37:54 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:42:46.199 04:37:54 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:46.199 04:37:54 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:42:46.456 04:37:54 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:46.456 04:37:54 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b crypt0 -t 2000 00:42:46.456 04:37:54 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:46.456 04:37:54 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:42:46.456 [ 00:42:46.456 { 00:42:46.456 "name": "crypt0", 00:42:46.456 "aliases": [ 00:42:46.456 "39180aee-4b21-52f3-b69b-b32e140373e4" 00:42:46.456 ], 00:42:46.456 "product_name": "crypto", 00:42:46.456 "block_size": 512, 00:42:46.456 "num_blocks": 2097152, 00:42:46.456 "uuid": "39180aee-4b21-52f3-b69b-b32e140373e4", 00:42:46.456 "assigned_rate_limits": { 00:42:46.456 "rw_ios_per_sec": 0, 00:42:46.456 "rw_mbytes_per_sec": 0, 00:42:46.456 "r_mbytes_per_sec": 0, 00:42:46.456 "w_mbytes_per_sec": 0 00:42:46.456 }, 00:42:46.456 "claimed": false, 00:42:46.456 "zoned": false, 00:42:46.456 "supported_io_types": { 
00:42:46.456 "read": true, 00:42:46.456 "write": true, 00:42:46.456 "unmap": false, 00:42:46.456 "flush": false, 00:42:46.456 "reset": true, 00:42:46.456 "nvme_admin": false, 00:42:46.456 "nvme_io": false, 00:42:46.456 "nvme_io_md": false, 00:42:46.456 "write_zeroes": true, 00:42:46.456 "zcopy": false, 00:42:46.456 "get_zone_info": false, 00:42:46.456 "zone_management": false, 00:42:46.456 "zone_append": false, 00:42:46.456 "compare": false, 00:42:46.456 "compare_and_write": false, 00:42:46.456 "abort": false, 00:42:46.456 "seek_hole": false, 00:42:46.456 "seek_data": false, 00:42:46.456 "copy": false, 00:42:46.456 "nvme_iov_md": false 00:42:46.457 }, 00:42:46.457 "memory_domains": [ 00:42:46.457 { 00:42:46.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:42:46.457 "dma_device_type": 2 00:42:46.457 } 00:42:46.457 ], 00:42:46.457 "driver_specific": { 00:42:46.457 "crypto": { 00:42:46.457 "base_bdev_name": "EE_base0", 00:42:46.457 "name": "crypt0", 00:42:46.457 "key_name": "test_dek_sw" 00:42:46.457 } 00:42:46.457 } 00:42:46.457 } 00:42:46.457 ] 00:42:46.457 04:37:55 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:46.457 04:37:55 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@905 -- # return 0 00:42:46.457 04:37:55 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@654 -- # rpcpid=2923492 00:42:46.457 04:37:55 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@656 -- # sleep 1 00:42:46.457 04:37:55 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:42:46.457 Running I/O for 5 seconds... 
00:42:47.387 04:37:56 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@657 -- # rpc_cmd bdev_error_inject_error EE_base0 -n 5 -q 31 write nomem 00:42:47.387 04:37:56 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:47.387 04:37:56 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:42:47.387 04:37:56 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:47.387 04:37:56 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@659 -- # wait 2923492 00:42:51.563 00:42:51.563 Latency(us) 00:42:51.563 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:51.563 Job: crypt0 (Core Mask 0x2, workload: randwrite, depth: 32, IO size: 4096) 00:42:51.563 crypt0 : 5.00 35954.43 140.45 0.00 0.00 885.64 429.26 1238.63 00:42:51.563 =================================================================================================================== 00:42:51.563 Total : 35954.43 140.45 0.00 0.00 885.64 429.26 1238.63 00:42:51.563 0 00:42:51.563 04:38:00 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@661 -- # rpc_cmd bdev_crypto_delete crypt0 00:42:51.563 04:38:00 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:51.563 04:38:00 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:42:51.563 04:38:00 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:51.563 04:38:00 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@663 -- # killprocess 2923285 00:42:51.563 04:38:00 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@948 -- # '[' -z 2923285 ']' 00:42:51.563 04:38:00 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@952 -- # kill -0 2923285 00:42:51.563 04:38:00 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@953 -- # uname 00:42:51.563 04:38:00 
blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:42:51.563 04:38:00 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2923285 00:42:51.563 04:38:00 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:42:51.563 04:38:00 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:42:51.563 04:38:00 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2923285' 00:42:51.563 killing process with pid 2923285 00:42:51.563 04:38:00 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@967 -- # kill 2923285 00:42:51.563 Received shutdown signal, test time was about 5.000000 seconds 00:42:51.563 00:42:51.563 Latency(us) 00:42:51.563 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:51.563 =================================================================================================================== 00:42:51.563 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:51.563 04:38:00 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@972 -- # wait 2923285 00:42:53.461 04:38:02 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@664 -- # trap - SIGINT SIGTERM EXIT 00:42:53.461 00:42:53.461 real 0m8.221s 00:42:53.461 user 0m8.392s 00:42:53.461 sys 0m0.551s 00:42:53.461 04:38:02 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:53.461 04:38:02 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:42:53.461 ************************************ 00:42:53.461 END TEST bdev_crypto_enomem 00:42:53.461 ************************************ 00:42:53.461 04:38:02 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0 00:42:53.461 04:38:02 blockdev_crypto_sw -- bdev/blockdev.sh@809 -- # trap - 
SIGINT SIGTERM EXIT 00:42:53.461 04:38:02 blockdev_crypto_sw -- bdev/blockdev.sh@810 -- # cleanup 00:42:53.461 04:38:02 blockdev_crypto_sw -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:42:53.461 04:38:02 blockdev_crypto_sw -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:42:53.461 04:38:02 blockdev_crypto_sw -- bdev/blockdev.sh@26 -- # [[ crypto_sw == rbd ]] 00:42:53.461 04:38:02 blockdev_crypto_sw -- bdev/blockdev.sh@30 -- # [[ crypto_sw == daos ]] 00:42:53.461 04:38:02 blockdev_crypto_sw -- bdev/blockdev.sh@34 -- # [[ crypto_sw = \g\p\t ]] 00:42:53.461 04:38:02 blockdev_crypto_sw -- bdev/blockdev.sh@40 -- # [[ crypto_sw == xnvme ]] 00:42:53.461 00:42:53.461 real 1m20.009s 00:42:53.461 user 2m20.554s 00:42:53.461 sys 0m8.703s 00:42:53.461 04:38:02 blockdev_crypto_sw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:53.461 04:38:02 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:42:53.461 ************************************ 00:42:53.461 END TEST blockdev_crypto_sw 00:42:53.461 ************************************ 00:42:53.461 04:38:02 -- common/autotest_common.sh@1142 -- # return 0 00:42:53.461 04:38:02 -- spdk/autotest.sh@359 -- # run_test blockdev_crypto_qat /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_qat 00:42:53.461 04:38:02 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:42:53.461 04:38:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:53.461 04:38:02 -- common/autotest_common.sh@10 -- # set +x 00:42:53.461 ************************************ 00:42:53.461 START TEST blockdev_crypto_qat 00:42:53.461 ************************************ 00:42:53.461 04:38:02 blockdev_crypto_qat -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_qat 00:42:53.719 * Looking for test storage... 
00:42:53.719 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:42:53.719 04:38:02 blockdev_crypto_qat -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:42:53.719 04:38:02 blockdev_crypto_qat -- bdev/nbd_common.sh@6 -- # set -e 00:42:53.719 04:38:02 blockdev_crypto_qat -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:42:53.719 04:38:02 blockdev_crypto_qat -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:42:53.719 04:38:02 blockdev_crypto_qat -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:42:53.719 04:38:02 blockdev_crypto_qat -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:42:53.719 04:38:02 blockdev_crypto_qat -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:42:53.719 04:38:02 blockdev_crypto_qat -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:42:53.719 04:38:02 blockdev_crypto_qat -- bdev/blockdev.sh@20 -- # : 00:42:53.719 04:38:02 blockdev_crypto_qat -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:42:53.719 04:38:02 blockdev_crypto_qat -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:42:53.719 04:38:02 blockdev_crypto_qat -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:42:53.719 04:38:02 blockdev_crypto_qat -- bdev/blockdev.sh@673 -- # uname -s 00:42:53.719 04:38:02 blockdev_crypto_qat -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:42:53.719 04:38:02 blockdev_crypto_qat -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:42:53.719 04:38:02 blockdev_crypto_qat -- bdev/blockdev.sh@681 -- # test_type=crypto_qat 00:42:53.719 04:38:02 blockdev_crypto_qat -- bdev/blockdev.sh@682 -- # crypto_device= 00:42:53.719 04:38:02 blockdev_crypto_qat -- bdev/blockdev.sh@683 -- # dek= 00:42:53.719 04:38:02 blockdev_crypto_qat -- bdev/blockdev.sh@684 -- # 
env_ctx= 00:42:53.719 04:38:02 blockdev_crypto_qat -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:42:53.719 04:38:02 blockdev_crypto_qat -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:42:53.719 04:38:02 blockdev_crypto_qat -- bdev/blockdev.sh@689 -- # [[ crypto_qat == bdev ]] 00:42:53.719 04:38:02 blockdev_crypto_qat -- bdev/blockdev.sh@689 -- # [[ crypto_qat == crypto_* ]] 00:42:53.719 04:38:02 blockdev_crypto_qat -- bdev/blockdev.sh@690 -- # wait_for_rpc=--wait-for-rpc 00:42:53.719 04:38:02 blockdev_crypto_qat -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:42:53.719 04:38:02 blockdev_crypto_qat -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=2924667 00:42:53.719 04:38:02 blockdev_crypto_qat -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:42:53.719 04:38:02 blockdev_crypto_qat -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:42:53.719 04:38:02 blockdev_crypto_qat -- bdev/blockdev.sh@49 -- # waitforlisten 2924667 00:42:53.719 04:38:02 blockdev_crypto_qat -- common/autotest_common.sh@829 -- # '[' -z 2924667 ']' 00:42:53.719 04:38:02 blockdev_crypto_qat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:53.719 04:38:02 blockdev_crypto_qat -- common/autotest_common.sh@834 -- # local max_retries=100 00:42:53.719 04:38:02 blockdev_crypto_qat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:53.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:53.719 04:38:02 blockdev_crypto_qat -- common/autotest_common.sh@838 -- # xtrace_disable 00:42:53.719 04:38:02 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:42:53.719 [2024-07-23 04:38:02.380771] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:42:53.719 [2024-07-23 04:38:02.380870] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2924667 ] 00:42:53.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:53.719 EAL: Requested device 0000:3d:01.0 cannot be used 00:42:53.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:53.719 EAL: Requested device 0000:3d:01.1 cannot be used 00:42:53.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:53.719 EAL: Requested device 0000:3d:01.2 cannot be used 00:42:53.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:53.719 EAL: Requested device 0000:3d:01.3 cannot be used 00:42:53.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:53.719 EAL: Requested device 0000:3d:01.4 cannot be used 00:42:53.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:53.719 EAL: Requested device 0000:3d:01.5 cannot be used 00:42:53.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:53.719 EAL: Requested device 0000:3d:01.6 cannot be used 00:42:53.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:53.719 EAL: Requested device 0000:3d:01.7 cannot be used 00:42:53.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:53.719 EAL: Requested device 0000:3d:02.0 cannot be used 00:42:53.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:53.719 EAL: Requested device 0000:3d:02.1 cannot be used 00:42:53.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:53.719 EAL: Requested device 0000:3d:02.2 cannot be used 00:42:53.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:53.719 EAL: Requested device 0000:3d:02.3 cannot be used 
00:42:53.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:53.719 EAL: Requested device 0000:3d:02.4 cannot be used 00:42:53.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:53.719 EAL: Requested device 0000:3d:02.5 cannot be used 00:42:53.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:53.719 EAL: Requested device 0000:3d:02.6 cannot be used 00:42:53.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:53.719 EAL: Requested device 0000:3d:02.7 cannot be used 00:42:53.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:53.719 EAL: Requested device 0000:3f:01.0 cannot be used 00:42:53.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:53.719 EAL: Requested device 0000:3f:01.1 cannot be used 00:42:53.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:53.719 EAL: Requested device 0000:3f:01.2 cannot be used 00:42:53.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:53.719 EAL: Requested device 0000:3f:01.3 cannot be used 00:42:53.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:53.719 EAL: Requested device 0000:3f:01.4 cannot be used 00:42:53.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:53.719 EAL: Requested device 0000:3f:01.5 cannot be used 00:42:53.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:53.719 EAL: Requested device 0000:3f:01.6 cannot be used 00:42:53.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:53.719 EAL: Requested device 0000:3f:01.7 cannot be used 00:42:53.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:53.719 EAL: Requested device 0000:3f:02.0 cannot be used 00:42:53.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:53.719 EAL: Requested device 0000:3f:02.1 cannot be used 00:42:53.719 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:53.719 EAL: Requested device 0000:3f:02.2 cannot be used 00:42:53.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:53.719 EAL: Requested device 0000:3f:02.3 cannot be used 00:42:53.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:53.719 EAL: Requested device 0000:3f:02.4 cannot be used 00:42:53.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:53.719 EAL: Requested device 0000:3f:02.5 cannot be used 00:42:53.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:53.719 EAL: Requested device 0000:3f:02.6 cannot be used 00:42:53.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:53.719 EAL: Requested device 0000:3f:02.7 cannot be used 00:42:53.977 [2024-07-23 04:38:02.577887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:54.234 [2024-07-23 04:38:02.840078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:54.492 04:38:03 blockdev_crypto_qat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:42:54.492 04:38:03 blockdev_crypto_qat -- common/autotest_common.sh@862 -- # return 0 00:42:54.492 04:38:03 blockdev_crypto_qat -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:42:54.492 04:38:03 blockdev_crypto_qat -- bdev/blockdev.sh@707 -- # setup_crypto_qat_conf 00:42:54.492 04:38:03 blockdev_crypto_qat -- bdev/blockdev.sh@169 -- # rpc_cmd 00:42:54.492 04:38:03 blockdev_crypto_qat -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:54.492 04:38:03 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:42:54.492 [2024-07-23 04:38:03.201782] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:42:54.492 [2024-07-23 04:38:03.209841] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:42:54.492 [2024-07-23 04:38:03.217854] 
accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:42:55.054 [2024-07-23 04:38:03.551339] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:42:58.401 true 00:42:58.401 true 00:42:58.401 true 00:42:58.401 true 00:42:58.660 Malloc0 00:42:58.660 Malloc1 00:42:58.660 Malloc2 00:42:58.660 Malloc3 00:42:58.660 [2024-07-23 04:38:07.392687] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:42:58.660 crypto_ram 00:42:58.660 [2024-07-23 04:38:07.400842] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:42:58.660 crypto_ram1 00:42:58.660 [2024-07-23 04:38:07.409008] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:42:58.660 crypto_ram2 00:42:58.660 [2024-07-23 04:38:07.417045] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:42:58.660 crypto_ram3 00:42:58.660 [ 00:42:58.660 { 00:42:58.660 "name": "Malloc1", 00:42:58.660 "aliases": [ 00:42:58.660 "7964b247-79c0-474f-8901-afafe73794cf" 00:42:58.660 ], 00:42:58.660 "product_name": "Malloc disk", 00:42:58.660 "block_size": 512, 00:42:58.660 "num_blocks": 65536, 00:42:58.660 "uuid": "7964b247-79c0-474f-8901-afafe73794cf", 00:42:58.660 "assigned_rate_limits": { 00:42:58.660 "rw_ios_per_sec": 0, 00:42:58.660 "rw_mbytes_per_sec": 0, 00:42:58.660 "r_mbytes_per_sec": 0, 00:42:58.660 "w_mbytes_per_sec": 0 00:42:58.660 }, 00:42:58.660 "claimed": true, 00:42:58.660 "claim_type": "exclusive_write", 00:42:58.660 "zoned": false, 00:42:58.660 "supported_io_types": { 00:42:58.660 "read": true, 00:42:58.660 "write": true, 00:42:58.660 "unmap": true, 00:42:58.660 "flush": true, 00:42:58.660 "reset": true, 00:42:58.660 "nvme_admin": false, 00:42:58.660 "nvme_io": false, 00:42:58.660 "nvme_io_md": false, 00:42:58.660 "write_zeroes": true, 00:42:58.660 "zcopy": true, 00:42:58.660 
"get_zone_info": false, 00:42:58.660 "zone_management": false, 00:42:58.660 "zone_append": false, 00:42:58.660 "compare": false, 00:42:58.660 "compare_and_write": false, 00:42:58.660 "abort": true, 00:42:58.660 "seek_hole": false, 00:42:58.660 "seek_data": false, 00:42:58.660 "copy": true, 00:42:58.660 "nvme_iov_md": false 00:42:58.660 }, 00:42:58.660 "memory_domains": [ 00:42:58.660 { 00:42:58.660 "dma_device_id": "system", 00:42:58.660 "dma_device_type": 1 00:42:58.660 }, 00:42:58.660 { 00:42:58.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:42:58.660 "dma_device_type": 2 00:42:58.660 } 00:42:58.660 ], 00:42:58.660 "driver_specific": {} 00:42:58.660 } 00:42:58.660 ] 00:42:58.660 04:38:07 blockdev_crypto_qat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:58.660 04:38:07 blockdev_crypto_qat -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:42:58.660 04:38:07 blockdev_crypto_qat -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:58.918 04:38:07 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:42:58.918 04:38:07 blockdev_crypto_qat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:58.918 04:38:07 blockdev_crypto_qat -- bdev/blockdev.sh@739 -- # cat 00:42:58.918 04:38:07 blockdev_crypto_qat -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:42:58.918 04:38:07 blockdev_crypto_qat -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:58.918 04:38:07 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:42:58.918 04:38:07 blockdev_crypto_qat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:58.918 04:38:07 blockdev_crypto_qat -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:42:58.918 04:38:07 blockdev_crypto_qat -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:58.918 04:38:07 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:42:58.918 04:38:07 blockdev_crypto_qat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:42:58.918 04:38:07 blockdev_crypto_qat -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:42:58.918 04:38:07 blockdev_crypto_qat -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:58.918 04:38:07 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:42:58.918 04:38:07 blockdev_crypto_qat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:58.918 04:38:07 blockdev_crypto_qat -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:42:58.918 04:38:07 blockdev_crypto_qat -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:42:58.918 04:38:07 blockdev_crypto_qat -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:42:58.918 04:38:07 blockdev_crypto_qat -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:58.918 04:38:07 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:42:58.918 04:38:07 blockdev_crypto_qat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:58.919 04:38:07 blockdev_crypto_qat -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:42:58.919 04:38:07 blockdev_crypto_qat -- bdev/blockdev.sh@748 -- # jq -r .name 00:42:58.919 04:38:07 blockdev_crypto_qat -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "9a1d8c44-5c5e-5fff-b582-0d3c7980d658"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "9a1d8c44-5c5e-5fff-b582-0d3c7980d658",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' 
"seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_qat_cbc"' ' }' ' }' '}' '{' ' "name": "crypto_ram1",' ' "aliases": [' ' "ca992b92-8ed7-50b4-be4c-9fddb0f14d74"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ca992b92-8ed7-50b4-be4c-9fddb0f14d74",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram1",' ' "key_name": "test_dek_qat_xts"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "8a955862-7a7e-51f1-bccf-a321e7f1d7de"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "8a955862-7a7e-51f1-bccf-a321e7f1d7de",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": 
false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_qat_cbc2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "be9d4e37-2866-54de-b892-5dae98fc877c"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "be9d4e37-2866-54de-b892-5dae98fc877c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_qat_xts2"' ' }' ' }' '}' 00:42:58.919 04:38:07 blockdev_crypto_qat -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:42:58.919 04:38:07 blockdev_crypto_qat -- bdev/blockdev.sh@751 -- # hello_world_bdev=crypto_ram 00:42:58.919 04:38:07 blockdev_crypto_qat -- bdev/blockdev.sh@752 -- # trap 
- SIGINT SIGTERM EXIT 00:42:58.919 04:38:07 blockdev_crypto_qat -- bdev/blockdev.sh@753 -- # killprocess 2924667 00:42:58.919 04:38:07 blockdev_crypto_qat -- common/autotest_common.sh@948 -- # '[' -z 2924667 ']' 00:42:58.919 04:38:07 blockdev_crypto_qat -- common/autotest_common.sh@952 -- # kill -0 2924667 00:42:58.919 04:38:07 blockdev_crypto_qat -- common/autotest_common.sh@953 -- # uname 00:42:58.919 04:38:07 blockdev_crypto_qat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:42:58.919 04:38:07 blockdev_crypto_qat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2924667 00:42:59.177 04:38:07 blockdev_crypto_qat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:42:59.177 04:38:07 blockdev_crypto_qat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:42:59.177 04:38:07 blockdev_crypto_qat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2924667' 00:42:59.177 killing process with pid 2924667 00:42:59.177 04:38:07 blockdev_crypto_qat -- common/autotest_common.sh@967 -- # kill 2924667 00:42:59.177 04:38:07 blockdev_crypto_qat -- common/autotest_common.sh@972 -- # wait 2924667 00:43:03.362 04:38:11 blockdev_crypto_qat -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:43:03.362 04:38:11 blockdev_crypto_qat -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:43:03.362 04:38:11 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:43:03.362 04:38:11 blockdev_crypto_qat -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:03.362 04:38:11 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:43:03.362 ************************************ 00:43:03.362 START TEST bdev_hello_world 00:43:03.362 ************************************ 00:43:03.362 04:38:11 
blockdev_crypto_qat.bdev_hello_world -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:43:03.362 [2024-07-23 04:38:12.035875] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:43:03.362 [2024-07-23 04:38:12.035985] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2926254 ] 00:43:03.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.621 EAL: Requested device 0000:3d:01.0 cannot be used 00:43:03.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.621 EAL: Requested device 0000:3d:01.1 cannot be used 00:43:03.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.621 EAL: Requested device 0000:3d:01.2 cannot be used 00:43:03.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.621 EAL: Requested device 0000:3d:01.3 cannot be used 00:43:03.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.621 EAL: Requested device 0000:3d:01.4 cannot be used 00:43:03.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.621 EAL: Requested device 0000:3d:01.5 cannot be used 00:43:03.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.621 EAL: Requested device 0000:3d:01.6 cannot be used 00:43:03.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.621 EAL: Requested device 0000:3d:01.7 cannot be used 00:43:03.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.621 EAL: Requested device 0000:3d:02.0 cannot be used 00:43:03.621 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:43:03.621 EAL: Requested device 0000:3d:02.1 cannot be used 00:43:03.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.621 EAL: Requested device 0000:3d:02.2 cannot be used 00:43:03.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.621 EAL: Requested device 0000:3d:02.3 cannot be used 00:43:03.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.621 EAL: Requested device 0000:3d:02.4 cannot be used 00:43:03.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.621 EAL: Requested device 0000:3d:02.5 cannot be used 00:43:03.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.621 EAL: Requested device 0000:3d:02.6 cannot be used 00:43:03.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.621 EAL: Requested device 0000:3d:02.7 cannot be used 00:43:03.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.621 EAL: Requested device 0000:3f:01.0 cannot be used 00:43:03.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.621 EAL: Requested device 0000:3f:01.1 cannot be used 00:43:03.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.621 EAL: Requested device 0000:3f:01.2 cannot be used 00:43:03.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.621 EAL: Requested device 0000:3f:01.3 cannot be used 00:43:03.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.621 EAL: Requested device 0000:3f:01.4 cannot be used 00:43:03.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.621 EAL: Requested device 0000:3f:01.5 cannot be used 00:43:03.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.621 EAL: Requested device 0000:3f:01.6 cannot be used 00:43:03.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.621 
EAL: Requested device 0000:3f:01.7 cannot be used 00:43:03.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.621 EAL: Requested device 0000:3f:02.0 cannot be used 00:43:03.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.621 EAL: Requested device 0000:3f:02.1 cannot be used 00:43:03.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.621 EAL: Requested device 0000:3f:02.2 cannot be used 00:43:03.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.621 EAL: Requested device 0000:3f:02.3 cannot be used 00:43:03.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.621 EAL: Requested device 0000:3f:02.4 cannot be used 00:43:03.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.621 EAL: Requested device 0000:3f:02.5 cannot be used 00:43:03.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.621 EAL: Requested device 0000:3f:02.6 cannot be used 00:43:03.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.621 EAL: Requested device 0000:3f:02.7 cannot be used 00:43:03.621 [2024-07-23 04:38:12.259663] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:03.880 [2024-07-23 04:38:12.536754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:03.880 [2024-07-23 04:38:12.558532] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:43:03.880 [2024-07-23 04:38:12.566559] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:43:03.880 [2024-07-23 04:38:12.574566] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:43:04.445 [2024-07-23 04:38:12.968025] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:43:07.725 [2024-07-23 
04:38:15.790614] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:43:07.725 [2024-07-23 04:38:15.790693] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:43:07.725 [2024-07-23 04:38:15.790712] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:43:07.725 [2024-07-23 04:38:15.798636] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:43:07.725 [2024-07-23 04:38:15.798676] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:43:07.725 [2024-07-23 04:38:15.798693] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:43:07.725 [2024-07-23 04:38:15.806673] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:43:07.725 [2024-07-23 04:38:15.806706] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:43:07.725 [2024-07-23 04:38:15.806726] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:43:07.725 [2024-07-23 04:38:15.814666] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:43:07.725 [2024-07-23 04:38:15.814698] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:43:07.725 [2024-07-23 04:38:15.814713] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:43:07.725 [2024-07-23 04:38:16.078956] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:43:07.725 [2024-07-23 04:38:16.079003] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev crypto_ram 00:43:07.725 [2024-07-23 04:38:16.079030] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:43:07.725 [2024-07-23 04:38:16.081314] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to 
the bdev 00:43:07.725 [2024-07-23 04:38:16.081422] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:43:07.725 [2024-07-23 04:38:16.081446] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:43:07.725 [2024-07-23 04:38:16.081509] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:43:07.725 00:43:07.725 [2024-07-23 04:38:16.081534] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:43:10.256 00:43:10.256 real 0m6.660s 00:43:10.256 user 0m6.087s 00:43:10.256 sys 0m0.520s 00:43:10.256 04:38:18 blockdev_crypto_qat.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:10.256 04:38:18 blockdev_crypto_qat.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:43:10.256 ************************************ 00:43:10.256 END TEST bdev_hello_world 00:43:10.256 ************************************ 00:43:10.256 04:38:18 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 0 00:43:10.256 04:38:18 blockdev_crypto_qat -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:43:10.256 04:38:18 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:43:10.256 04:38:18 blockdev_crypto_qat -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:10.256 04:38:18 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:43:10.256 ************************************ 00:43:10.256 START TEST bdev_bounds 00:43:10.256 ************************************ 00:43:10.256 04:38:18 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:43:10.256 04:38:18 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=2927324 00:43:10.256 04:38:18 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:43:10.256 04:38:18 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@288 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:43:10.256 04:38:18 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 2927324' 00:43:10.256 Process bdevio pid: 2927324 00:43:10.256 04:38:18 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 2927324 00:43:10.256 04:38:18 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 2927324 ']' 00:43:10.256 04:38:18 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:10.256 04:38:18 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:43:10.256 04:38:18 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:10.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:10.256 04:38:18 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:43:10.256 04:38:18 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:43:10.256 [2024-07-23 04:38:18.780054] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:43:10.256 [2024-07-23 04:38:18.780181] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2927324 ] 00:43:10.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:10.256 EAL: Requested device 0000:3d:01.0 cannot be used 00:43:10.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:10.256 EAL: Requested device 0000:3d:01.1 cannot be used 00:43:10.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:10.256 EAL: Requested device 0000:3d:01.2 cannot be used 00:43:10.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:10.256 EAL: Requested device 0000:3d:01.3 cannot be used 00:43:10.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:10.256 EAL: Requested device 0000:3d:01.4 cannot be used 00:43:10.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:10.256 EAL: Requested device 0000:3d:01.5 cannot be used 00:43:10.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:10.256 EAL: Requested device 0000:3d:01.6 cannot be used 00:43:10.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:10.256 EAL: Requested device 0000:3d:01.7 cannot be used 00:43:10.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:10.256 EAL: Requested device 0000:3d:02.0 cannot be used 00:43:10.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:10.256 EAL: Requested device 0000:3d:02.1 cannot be used 00:43:10.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:10.256 EAL: Requested device 0000:3d:02.2 cannot be used 00:43:10.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:10.256 EAL: Requested device 0000:3d:02.3 cannot be used 
00:43:10.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:10.256 EAL: Requested device 0000:3d:02.4 cannot be used 00:43:10.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:10.256 EAL: Requested device 0000:3d:02.5 cannot be used 00:43:10.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:10.256 EAL: Requested device 0000:3d:02.6 cannot be used 00:43:10.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:10.256 EAL: Requested device 0000:3d:02.7 cannot be used 00:43:10.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:10.256 EAL: Requested device 0000:3f:01.0 cannot be used 00:43:10.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:10.256 EAL: Requested device 0000:3f:01.1 cannot be used 00:43:10.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:10.256 EAL: Requested device 0000:3f:01.2 cannot be used 00:43:10.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:10.256 EAL: Requested device 0000:3f:01.3 cannot be used 00:43:10.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:10.256 EAL: Requested device 0000:3f:01.4 cannot be used 00:43:10.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:10.256 EAL: Requested device 0000:3f:01.5 cannot be used 00:43:10.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:10.256 EAL: Requested device 0000:3f:01.6 cannot be used 00:43:10.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:10.256 EAL: Requested device 0000:3f:01.7 cannot be used 00:43:10.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:10.256 EAL: Requested device 0000:3f:02.0 cannot be used 00:43:10.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:10.256 EAL: Requested device 0000:3f:02.1 cannot be used 00:43:10.256 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:10.256 EAL: Requested device 0000:3f:02.2 cannot be used 00:43:10.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:10.256 EAL: Requested device 0000:3f:02.3 cannot be used 00:43:10.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:10.256 EAL: Requested device 0000:3f:02.4 cannot be used 00:43:10.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:10.256 EAL: Requested device 0000:3f:02.5 cannot be used 00:43:10.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:10.256 EAL: Requested device 0000:3f:02.6 cannot be used 00:43:10.256 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:10.256 EAL: Requested device 0000:3f:02.7 cannot be used 00:43:10.256 [2024-07-23 04:38:19.006715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:43:10.514 [2024-07-23 04:38:19.283223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:43:10.514 [2024-07-23 04:38:19.283285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:43:10.514 [2024-07-23 04:38:19.283281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:10.772 [2024-07-23 04:38:19.305475] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:43:10.772 [2024-07-23 04:38:19.313493] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:43:10.772 [2024-07-23 04:38:19.321518] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:43:11.031 [2024-07-23 04:38:19.711332] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:43:14.313 [2024-07-23 04:38:22.542584] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:43:14.313 [2024-07-23 04:38:22.542655] 
bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:43:14.313 [2024-07-23 04:38:22.542683] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:43:14.313 [2024-07-23 04:38:22.550599] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:43:14.313 [2024-07-23 04:38:22.550636] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:43:14.313 [2024-07-23 04:38:22.550652] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:43:14.313 [2024-07-23 04:38:22.558655] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:43:14.313 [2024-07-23 04:38:22.558688] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:43:14.313 [2024-07-23 04:38:22.558703] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:43:14.313 [2024-07-23 04:38:22.566650] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:43:14.313 [2024-07-23 04:38:22.566702] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:43:14.313 [2024-07-23 04:38:22.566718] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:43:14.571 04:38:23 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:43:14.571 04:38:23 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:43:14.571 04:38:23 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:43:14.571 I/O targets: 00:43:14.571 crypto_ram: 65536 blocks of 512 bytes (32 MiB) 00:43:14.571 crypto_ram1: 65536 blocks of 512 bytes (32 MiB) 00:43:14.571 crypto_ram2: 8192 blocks of 4096 
bytes (32 MiB) 00:43:14.571 crypto_ram3: 8192 blocks of 4096 bytes (32 MiB) 00:43:14.571 00:43:14.571 00:43:14.571 CUnit - A unit testing framework for C - Version 2.1-3 00:43:14.571 http://cunit.sourceforge.net/ 00:43:14.571 00:43:14.571 00:43:14.571 Suite: bdevio tests on: crypto_ram3 00:43:14.571 Test: blockdev write read block ...passed 00:43:14.571 Test: blockdev write zeroes read block ...passed 00:43:14.571 Test: blockdev write zeroes read no split ...passed 00:43:14.572 Test: blockdev write zeroes read split ...passed 00:43:14.572 Test: blockdev write zeroes read split partial ...passed 00:43:14.572 Test: blockdev reset ...passed 00:43:14.572 Test: blockdev write read 8 blocks ...passed 00:43:14.572 Test: blockdev write read size > 128k ...passed 00:43:14.572 Test: blockdev write read invalid size ...passed 00:43:14.572 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:43:14.572 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:43:14.572 Test: blockdev write read max offset ...passed 00:43:14.572 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:43:14.572 Test: blockdev writev readv 8 blocks ...passed 00:43:14.572 Test: blockdev writev readv 30 x 1block ...passed 00:43:14.572 Test: blockdev writev readv block ...passed 00:43:14.572 Test: blockdev writev readv size > 128k ...passed 00:43:14.572 Test: blockdev writev readv size > 128k in two iovs ...passed 00:43:14.572 Test: blockdev comparev and writev ...passed 00:43:14.572 Test: blockdev nvme passthru rw ...passed 00:43:14.572 Test: blockdev nvme passthru vendor specific ...passed 00:43:14.572 Test: blockdev nvme admin passthru ...passed 00:43:14.572 Test: blockdev copy ...passed 00:43:14.572 Suite: bdevio tests on: crypto_ram2 00:43:14.572 Test: blockdev write read block ...passed 00:43:14.572 Test: blockdev write zeroes read block ...passed 00:43:14.572 Test: blockdev write zeroes read no split ...passed 00:43:14.830 Test: 
blockdev write zeroes read split ...passed 00:43:14.830 Test: blockdev write zeroes read split partial ...passed 00:43:14.830 Test: blockdev reset ...passed 00:43:14.830 Test: blockdev write read 8 blocks ...passed 00:43:14.830 Test: blockdev write read size > 128k ...passed 00:43:14.830 Test: blockdev write read invalid size ...passed 00:43:14.830 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:43:14.830 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:43:14.830 Test: blockdev write read max offset ...passed 00:43:14.830 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:43:14.830 Test: blockdev writev readv 8 blocks ...passed 00:43:14.830 Test: blockdev writev readv 30 x 1block ...passed 00:43:14.830 Test: blockdev writev readv block ...passed 00:43:14.830 Test: blockdev writev readv size > 128k ...passed 00:43:14.830 Test: blockdev writev readv size > 128k in two iovs ...passed 00:43:14.830 Test: blockdev comparev and writev ...passed 00:43:14.830 Test: blockdev nvme passthru rw ...passed 00:43:14.830 Test: blockdev nvme passthru vendor specific ...passed 00:43:14.830 Test: blockdev nvme admin passthru ...passed 00:43:14.830 Test: blockdev copy ...passed 00:43:14.830 Suite: bdevio tests on: crypto_ram1 00:43:14.830 Test: blockdev write read block ...passed 00:43:14.830 Test: blockdev write zeroes read block ...passed 00:43:14.830 Test: blockdev write zeroes read no split ...passed 00:43:14.830 Test: blockdev write zeroes read split ...passed 00:43:15.088 Test: blockdev write zeroes read split partial ...passed 00:43:15.088 Test: blockdev reset ...passed 00:43:15.088 Test: blockdev write read 8 blocks ...passed 00:43:15.088 Test: blockdev write read size > 128k ...passed 00:43:15.088 Test: blockdev write read invalid size ...passed 00:43:15.088 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:43:15.088 Test: blockdev write read offset + nbytes > size of 
blockdev ...passed 00:43:15.088 Test: blockdev write read max offset ...passed 00:43:15.088 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:43:15.088 Test: blockdev writev readv 8 blocks ...passed 00:43:15.088 Test: blockdev writev readv 30 x 1block ...passed 00:43:15.088 Test: blockdev writev readv block ...passed 00:43:15.088 Test: blockdev writev readv size > 128k ...passed 00:43:15.088 Test: blockdev writev readv size > 128k in two iovs ...passed 00:43:15.088 Test: blockdev comparev and writev ...passed 00:43:15.088 Test: blockdev nvme passthru rw ...passed 00:43:15.088 Test: blockdev nvme passthru vendor specific ...passed 00:43:15.088 Test: blockdev nvme admin passthru ...passed 00:43:15.088 Test: blockdev copy ...passed 00:43:15.088 Suite: bdevio tests on: crypto_ram 00:43:15.088 Test: blockdev write read block ...passed 00:43:15.088 Test: blockdev write zeroes read block ...passed 00:43:15.088 Test: blockdev write zeroes read no split ...passed 00:43:15.088 Test: blockdev write zeroes read split ...passed 00:43:15.088 Test: blockdev write zeroes read split partial ...passed 00:43:15.088 Test: blockdev reset ...passed 00:43:15.088 Test: blockdev write read 8 blocks ...passed 00:43:15.088 Test: blockdev write read size > 128k ...passed 00:43:15.088 Test: blockdev write read invalid size ...passed 00:43:15.088 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:43:15.088 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:43:15.088 Test: blockdev write read max offset ...passed 00:43:15.088 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:43:15.088 Test: blockdev writev readv 8 blocks ...passed 00:43:15.088 Test: blockdev writev readv 30 x 1block ...passed 00:43:15.088 Test: blockdev writev readv block ...passed 00:43:15.088 Test: blockdev writev readv size > 128k ...passed 00:43:15.088 Test: blockdev writev readv size > 128k in two iovs ...passed 00:43:15.088 
Test: blockdev comparev and writev ...passed 00:43:15.088 Test: blockdev nvme passthru rw ...passed 00:43:15.088 Test: blockdev nvme passthru vendor specific ...passed 00:43:15.088 Test: blockdev nvme admin passthru ...passed 00:43:15.088 Test: blockdev copy ...passed 00:43:15.088 00:43:15.088 Run Summary: Type Total Ran Passed Failed Inactive 00:43:15.088 suites 4 4 n/a 0 0 00:43:15.088 tests 92 92 92 0 0 00:43:15.088 asserts 520 520 520 0 n/a 00:43:15.088 00:43:15.088 Elapsed time = 1.515 seconds 00:43:15.088 0 00:43:15.088 04:38:23 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 2927324 00:43:15.088 04:38:23 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 2927324 ']' 00:43:15.088 04:38:23 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 2927324 00:43:15.088 04:38:23 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:43:15.088 04:38:23 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:43:15.088 04:38:23 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2927324 00:43:15.347 04:38:23 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:43:15.347 04:38:23 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:43:15.347 04:38:23 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2927324' 00:43:15.347 killing process with pid 2927324 00:43:15.347 04:38:23 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@967 -- # kill 2927324 00:43:15.347 04:38:23 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@972 -- # wait 2927324 00:43:17.939 04:38:26 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:43:17.939 00:43:17.939 real 0m7.695s 00:43:17.939 user 0m20.913s 00:43:17.939 sys 
0m0.754s 00:43:17.939 04:38:26 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:17.939 04:38:26 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:43:17.939 ************************************ 00:43:17.939 END TEST bdev_bounds 00:43:17.939 ************************************ 00:43:17.939 04:38:26 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 0 00:43:17.939 04:38:26 blockdev_crypto_qat -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '' 00:43:17.939 04:38:26 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:43:17.939 04:38:26 blockdev_crypto_qat -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:17.939 04:38:26 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:43:17.939 ************************************ 00:43:17.939 START TEST bdev_nbd 00:43:17.939 ************************************ 00:43:17.939 04:38:26 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '' 00:43:17.939 04:38:26 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:43:17.939 04:38:26 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:43:17.939 04:38:26 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:17.939 04:38:26 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:43:17.939 04:38:26 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:43:17.939 04:38:26 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@303 -- 
# local bdev_all 00:43:17.939 04:38:26 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=4 00:43:17.939 04:38:26 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:43:17.939 04:38:26 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:43:17.939 04:38:26 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:43:17.939 04:38:26 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=4 00:43:17.939 04:38:26 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:43:17.939 04:38:26 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:43:17.939 04:38:26 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:43:17.939 04:38:26 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:43:17.939 04:38:26 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=2928581 00:43:17.939 04:38:26 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:43:17.939 04:38:26 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@316 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:43:17.939 04:38:26 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 2928581 /var/tmp/spdk-nbd.sock 00:43:17.939 04:38:26 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 2928581 ']' 00:43:17.939 04:38:26 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk-nbd.sock 00:43:17.939 04:38:26 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:43:17.939 04:38:26 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:43:17.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:43:17.939 04:38:26 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:43:17.939 04:38:26 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:43:17.939 [2024-07-23 04:38:26.573250] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:43:17.939 [2024-07-23 04:38:26.573369] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:17.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.939 EAL: Requested device 0000:3d:01.0 cannot be used 00:43:17.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.939 EAL: Requested device 0000:3d:01.1 cannot be used 00:43:17.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.939 EAL: Requested device 0000:3d:01.2 cannot be used 00:43:17.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.939 EAL: Requested device 0000:3d:01.3 cannot be used 00:43:17.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.939 EAL: Requested device 0000:3d:01.4 cannot be used 00:43:17.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.939 EAL: Requested device 0000:3d:01.5 cannot be used 00:43:17.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.939 EAL: Requested device 
0000:3d:01.6 cannot be used 00:43:17.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.939 EAL: Requested device 0000:3d:01.7 cannot be used 00:43:17.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.939 EAL: Requested device 0000:3d:02.0 cannot be used 00:43:17.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.939 EAL: Requested device 0000:3d:02.1 cannot be used 00:43:17.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.939 EAL: Requested device 0000:3d:02.2 cannot be used 00:43:17.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.939 EAL: Requested device 0000:3d:02.3 cannot be used 00:43:17.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.939 EAL: Requested device 0000:3d:02.4 cannot be used 00:43:17.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.939 EAL: Requested device 0000:3d:02.5 cannot be used 00:43:17.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.939 EAL: Requested device 0000:3d:02.6 cannot be used 00:43:17.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.939 EAL: Requested device 0000:3d:02.7 cannot be used 00:43:17.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.939 EAL: Requested device 0000:3f:01.0 cannot be used 00:43:17.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.939 EAL: Requested device 0000:3f:01.1 cannot be used 00:43:17.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.939 EAL: Requested device 0000:3f:01.2 cannot be used 00:43:17.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.939 EAL: Requested device 0000:3f:01.3 cannot be used 00:43:17.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.939 EAL: Requested device 0000:3f:01.4 cannot be 
used 00:43:17.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.939 EAL: Requested device 0000:3f:01.5 cannot be used 00:43:17.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.939 EAL: Requested device 0000:3f:01.6 cannot be used 00:43:17.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.939 EAL: Requested device 0000:3f:01.7 cannot be used 00:43:17.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.939 EAL: Requested device 0000:3f:02.0 cannot be used 00:43:17.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.939 EAL: Requested device 0000:3f:02.1 cannot be used 00:43:17.939 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.939 EAL: Requested device 0000:3f:02.2 cannot be used 00:43:17.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.940 EAL: Requested device 0000:3f:02.3 cannot be used 00:43:17.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.940 EAL: Requested device 0000:3f:02.4 cannot be used 00:43:17.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.940 EAL: Requested device 0000:3f:02.5 cannot be used 00:43:17.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.940 EAL: Requested device 0000:3f:02.6 cannot be used 00:43:17.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.940 EAL: Requested device 0000:3f:02.7 cannot be used 00:43:18.197 [2024-07-23 04:38:26.801184] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:18.455 [2024-07-23 04:38:27.087774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:18.455 [2024-07-23 04:38:27.109541] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:43:18.455 [2024-07-23 04:38:27.117569] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: 
Operation encrypt will be assigned to module dpdk_cryptodev 00:43:18.455 [2024-07-23 04:38:27.125599] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:43:18.713 [2024-07-23 04:38:27.489948] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:43:21.996 [2024-07-23 04:38:30.297099] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:43:21.996 [2024-07-23 04:38:30.297187] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:43:21.996 [2024-07-23 04:38:30.297207] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:43:21.996 [2024-07-23 04:38:30.305114] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:43:21.996 [2024-07-23 04:38:30.305164] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:43:21.996 [2024-07-23 04:38:30.305181] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:43:21.996 [2024-07-23 04:38:30.313179] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:43:21.996 [2024-07-23 04:38:30.313214] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:43:21.996 [2024-07-23 04:38:30.313230] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:43:21.996 [2024-07-23 04:38:30.321147] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:43:21.996 [2024-07-23 04:38:30.321181] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:43:21.996 [2024-07-23 04:38:30.321196] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:43:22.255 04:38:30 blockdev_crypto_qat.bdev_nbd -- 
common/autotest_common.sh@858 -- # (( i == 0 )) 00:43:22.255 04:38:30 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:43:22.255 04:38:30 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' 00:43:22.255 04:38:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:22.255 04:38:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:43:22.255 04:38:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:43:22.255 04:38:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' 00:43:22.255 04:38:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:22.255 04:38:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:43:22.255 04:38:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:43:22.255 04:38:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:43:22.255 04:38:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:43:22.255 04:38:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:43:22.255 04:38:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:43:22.255 04:38:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram 00:43:22.514 04:38:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:43:22.514 04:38:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
basename /dev/nbd0 00:43:22.514 04:38:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:43:22.514 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:43:22.514 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:43:22.514 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:43:22.514 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:43:22.514 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:43:22.514 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:43:22.514 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:43:22.514 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:43:22.514 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:22.514 1+0 records in 00:43:22.514 1+0 records out 00:43:22.514 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348705 s, 11.7 MB/s 00:43:22.514 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:43:22.514 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:43:22.514 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:43:22.514 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:43:22.514 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:43:22.514 04:38:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( 
i++ )) 00:43:22.514 04:38:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:43:22.514 04:38:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram1 00:43:22.772 04:38:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:43:22.773 04:38:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:43:22.773 04:38:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:43:22.773 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:43:22.773 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:43:22.773 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:43:22.773 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:43:22.773 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:43:22.773 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:43:22.773 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:43:22.773 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:43:22.773 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:22.773 1+0 records in 00:43:22.773 1+0 records out 00:43:22.773 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366876 s, 11.2 MB/s 00:43:22.773 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:43:22.773 04:38:31 blockdev_crypto_qat.bdev_nbd -- 
common/autotest_common.sh@884 -- # size=4096 00:43:22.773 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:43:22.773 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:43:22.773 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:43:22.773 04:38:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:43:22.773 04:38:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:43:22.773 04:38:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 00:43:23.031 04:38:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:43:23.031 04:38:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:43:23.031 04:38:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:43:23.031 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:43:23.031 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:43:23.031 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:43:23.031 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:43:23.031 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:43:23.031 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:43:23.031 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:43:23.031 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:43:23.031 04:38:31 blockdev_crypto_qat.bdev_nbd -- 
common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:23.031 1+0 records in 00:43:23.031 1+0 records out 00:43:23.031 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000376323 s, 10.9 MB/s 00:43:23.031 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:43:23.032 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:43:23.032 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:43:23.032 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:43:23.032 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:43:23.032 04:38:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:43:23.032 04:38:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:43:23.032 04:38:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 00:43:23.290 04:38:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:43:23.290 04:38:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:43:23.290 04:38:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:43:23.290 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:43:23.290 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:43:23.290 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:43:23.290 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 
20 )) 00:43:23.290 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:43:23.290 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:43:23.290 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:43:23.290 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:43:23.290 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:23.290 1+0 records in 00:43:23.290 1+0 records out 00:43:23.290 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413521 s, 9.9 MB/s 00:43:23.290 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:43:23.290 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:43:23.290 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:43:23.290 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:43:23.290 04:38:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:43:23.290 04:38:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:43:23.290 04:38:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:43:23.290 04:38:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:43:23.549 04:38:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:43:23.549 { 00:43:23.549 "nbd_device": "/dev/nbd0", 00:43:23.549 "bdev_name": "crypto_ram" 00:43:23.549 }, 00:43:23.549 { 
00:43:23.549 "nbd_device": "/dev/nbd1", 00:43:23.549 "bdev_name": "crypto_ram1" 00:43:23.549 }, 00:43:23.549 { 00:43:23.549 "nbd_device": "/dev/nbd2", 00:43:23.549 "bdev_name": "crypto_ram2" 00:43:23.549 }, 00:43:23.549 { 00:43:23.549 "nbd_device": "/dev/nbd3", 00:43:23.549 "bdev_name": "crypto_ram3" 00:43:23.549 } 00:43:23.549 ]' 00:43:23.549 04:38:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:43:23.549 04:38:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:43:23.549 { 00:43:23.549 "nbd_device": "/dev/nbd0", 00:43:23.549 "bdev_name": "crypto_ram" 00:43:23.549 }, 00:43:23.549 { 00:43:23.549 "nbd_device": "/dev/nbd1", 00:43:23.549 "bdev_name": "crypto_ram1" 00:43:23.549 }, 00:43:23.549 { 00:43:23.549 "nbd_device": "/dev/nbd2", 00:43:23.549 "bdev_name": "crypto_ram2" 00:43:23.549 }, 00:43:23.549 { 00:43:23.549 "nbd_device": "/dev/nbd3", 00:43:23.549 "bdev_name": "crypto_ram3" 00:43:23.549 } 00:43:23.549 ]' 00:43:23.549 04:38:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:43:23.549 04:38:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3' 00:43:23.549 04:38:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:23.549 04:38:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3') 00:43:23.549 04:38:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:43:23.549 04:38:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:43:23.549 04:38:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:23.549 04:38:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:43:23.807 04:38:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:43:23.807 04:38:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:43:23.808 04:38:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:43:23.808 04:38:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:23.808 04:38:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:23.808 04:38:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:43:23.808 04:38:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:23.808 04:38:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:23.808 04:38:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:23.808 04:38:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:43:24.065 04:38:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:43:24.065 04:38:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:43:24.065 04:38:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:43:24.065 04:38:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:24.065 04:38:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:24.065 04:38:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:43:24.065 04:38:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:24.065 04:38:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:24.065 04:38:32 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:24.065 04:38:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:43:24.322 04:38:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:43:24.322 04:38:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:43:24.322 04:38:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:43:24.322 04:38:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:24.322 04:38:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:24.322 04:38:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:43:24.322 04:38:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:24.322 04:38:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:24.322 04:38:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:24.322 04:38:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:43:24.579 04:38:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:43:24.579 04:38:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:43:24.579 04:38:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:43:24.579 04:38:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:24.579 04:38:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:24.579 04:38:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:43:24.579 04:38:33 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:24.579 04:38:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:24.579 04:38:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:43:24.579 04:38:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:24.579 04:38:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:43:24.836 04:38:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:43:24.836 04:38:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:43:24.836 04:38:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:43:24.836 04:38:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:43:24.836 04:38:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:43:24.836 04:38:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:43:24.836 04:38:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:43:24.836 04:38:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:43:24.836 04:38:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:43:24.836 04:38:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:43:24.836 04:38:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:43:24.836 04:38:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:43:24.836 04:38:33 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:43:24.836 04:38:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@90 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:43:24.836 04:38:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:43:24.836 04:38:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:43:24.836 04:38:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:43:24.836 04:38:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:43:24.836 04:38:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:43:24.836 04:38:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:24.836 04:38:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:43:24.836 04:38:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:43:24.836 04:38:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:43:24.836 04:38:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:43:24.836 04:38:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:43:24.836 04:38:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:43:24.836 04:38:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:43:24.836 04:38:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0 00:43:25.093 /dev/nbd0 00:43:25.093 04:38:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:43:25.093 04:38:33 blockdev_crypto_qat.bdev_nbd -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:43:25.093 04:38:33 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:43:25.093 04:38:33 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:43:25.093 04:38:33 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:43:25.093 04:38:33 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:43:25.093 04:38:33 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:43:25.093 04:38:33 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:43:25.093 04:38:33 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:43:25.093 04:38:33 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:43:25.093 04:38:33 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:25.093 1+0 records in 00:43:25.093 1+0 records out 00:43:25.093 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302729 s, 13.5 MB/s 00:43:25.093 04:38:33 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:43:25.093 04:38:33 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:43:25.093 04:38:33 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:43:25.093 04:38:33 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:43:25.093 04:38:33 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:43:25.093 04:38:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:43:25.093 04:38:33 blockdev_crypto_qat.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:43:25.093 04:38:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram1 /dev/nbd1 00:43:25.350 /dev/nbd1 00:43:25.350 04:38:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:43:25.350 04:38:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:43:25.350 04:38:33 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:43:25.350 04:38:33 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:43:25.350 04:38:33 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:43:25.350 04:38:33 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:43:25.350 04:38:33 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:43:25.350 04:38:33 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:43:25.350 04:38:33 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:43:25.350 04:38:33 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:43:25.350 04:38:33 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:25.350 1+0 records in 00:43:25.350 1+0 records out 00:43:25.350 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269464 s, 15.2 MB/s 00:43:25.350 04:38:33 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:43:25.350 04:38:33 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:43:25.350 04:38:33 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:43:25.350 04:38:33 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:43:25.350 04:38:33 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:43:25.350 04:38:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:43:25.350 04:38:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:43:25.350 04:38:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 /dev/nbd10 00:43:25.607 /dev/nbd10 00:43:25.607 04:38:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:43:25.607 04:38:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:43:25.607 04:38:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:43:25.607 04:38:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:43:25.607 04:38:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:43:25.607 04:38:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:43:25.607 04:38:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:43:25.607 04:38:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:43:25.607 04:38:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:43:25.607 04:38:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:43:25.607 04:38:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:25.607 1+0 records in 00:43:25.607 1+0 records out 00:43:25.607 4096 
bytes (4.1 kB, 4.0 KiB) copied, 0.00035973 s, 11.4 MB/s 00:43:25.607 04:38:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:43:25.607 04:38:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:43:25.607 04:38:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:43:25.607 04:38:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:43:25.607 04:38:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:43:25.607 04:38:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:43:25.607 04:38:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:43:25.607 04:38:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 /dev/nbd11 00:43:25.864 /dev/nbd11 00:43:25.865 04:38:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:43:25.865 04:38:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:43:25.865 04:38:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:43:25.865 04:38:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:43:25.865 04:38:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:43:25.865 04:38:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:43:25.865 04:38:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:43:25.865 04:38:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:43:25.865 04:38:34 blockdev_crypto_qat.bdev_nbd -- 
common/autotest_common.sh@882 -- # (( i = 1 )) 00:43:25.865 04:38:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:43:25.865 04:38:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:25.865 1+0 records in 00:43:25.865 1+0 records out 00:43:25.865 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362297 s, 11.3 MB/s 00:43:25.865 04:38:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:43:25.865 04:38:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:43:25.865 04:38:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:43:25.865 04:38:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:43:25.865 04:38:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:43:25.865 04:38:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:43:25.865 04:38:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:43:25.865 04:38:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:43:25.865 04:38:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:25.865 04:38:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:43:26.122 04:38:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:43:26.122 { 00:43:26.122 "nbd_device": "/dev/nbd0", 00:43:26.122 "bdev_name": "crypto_ram" 00:43:26.122 }, 00:43:26.122 { 00:43:26.122 "nbd_device": "/dev/nbd1", 
00:43:26.122 "bdev_name": "crypto_ram1" 00:43:26.122 }, 00:43:26.122 { 00:43:26.122 "nbd_device": "/dev/nbd10", 00:43:26.122 "bdev_name": "crypto_ram2" 00:43:26.122 }, 00:43:26.122 { 00:43:26.122 "nbd_device": "/dev/nbd11", 00:43:26.122 "bdev_name": "crypto_ram3" 00:43:26.122 } 00:43:26.122 ]' 00:43:26.122 04:38:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:43:26.122 { 00:43:26.122 "nbd_device": "/dev/nbd0", 00:43:26.122 "bdev_name": "crypto_ram" 00:43:26.122 }, 00:43:26.122 { 00:43:26.122 "nbd_device": "/dev/nbd1", 00:43:26.122 "bdev_name": "crypto_ram1" 00:43:26.122 }, 00:43:26.122 { 00:43:26.122 "nbd_device": "/dev/nbd10", 00:43:26.122 "bdev_name": "crypto_ram2" 00:43:26.122 }, 00:43:26.122 { 00:43:26.122 "nbd_device": "/dev/nbd11", 00:43:26.122 "bdev_name": "crypto_ram3" 00:43:26.122 } 00:43:26.122 ]' 00:43:26.122 04:38:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:43:26.122 04:38:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:43:26.122 /dev/nbd1 00:43:26.122 /dev/nbd10 00:43:26.122 /dev/nbd11' 00:43:26.122 04:38:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:43:26.122 /dev/nbd1 00:43:26.122 /dev/nbd10 00:43:26.122 /dev/nbd11' 00:43:26.122 04:38:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:43:26.122 04:38:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=4 00:43:26.122 04:38:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 4 00:43:26.122 04:38:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=4 00:43:26.122 04:38:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 4 -ne 4 ']' 00:43:26.122 04:38:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' write 00:43:26.122 04:38:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:43:26.122 04:38:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:43:26.122 04:38:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:43:26.122 04:38:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:43:26.122 04:38:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:43:26.122 04:38:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:43:26.122 256+0 records in 00:43:26.122 256+0 records out 00:43:26.122 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101987 s, 103 MB/s 00:43:26.122 04:38:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:43:26.122 04:38:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:43:26.122 256+0 records in 00:43:26.122 256+0 records out 00:43:26.122 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0742206 s, 14.1 MB/s 00:43:26.122 04:38:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:43:26.122 04:38:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:43:26.379 256+0 records in 00:43:26.379 256+0 records out 00:43:26.379 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0696231 s, 15.1 MB/s 00:43:26.379 04:38:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:43:26.379 04:38:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:43:26.379 256+0 records in 00:43:26.379 256+0 records out 00:43:26.379 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0398049 s, 26.3 MB/s 00:43:26.379 04:38:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:43:26.379 04:38:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:43:26.379 256+0 records in 00:43:26.379 256+0 records out 00:43:26.379 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0367581 s, 28.5 MB/s 00:43:26.379 04:38:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' verify 00:43:26.379 04:38:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:43:26.379 04:38:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:43:26.379 04:38:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:43:26.379 04:38:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:43:26.379 04:38:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:43:26.379 04:38:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:43:26.379 04:38:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:43:26.379 04:38:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:43:26.379 04:38:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:43:26.379 04:38:35 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:43:26.379 04:38:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:43:26.379 04:38:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd10 00:43:26.379 04:38:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:43:26.379 04:38:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd11 00:43:26.379 04:38:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:43:26.379 04:38:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:43:26.379 04:38:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:26.379 04:38:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:43:26.379 04:38:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:43:26.379 04:38:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:43:26.379 04:38:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:26.379 04:38:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:43:26.637 04:38:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:43:26.637 04:38:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:43:26.637 
04:38:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:43:26.637 04:38:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:26.637 04:38:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:26.637 04:38:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:43:26.637 04:38:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:26.637 04:38:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:26.637 04:38:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:26.637 04:38:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:43:26.895 04:38:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:43:26.895 04:38:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:43:26.895 04:38:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:43:26.895 04:38:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:26.895 04:38:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:26.895 04:38:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:43:26.895 04:38:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:26.895 04:38:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:26.895 04:38:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:26.895 04:38:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:43:27.153 04:38:35 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:43:27.153 04:38:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:43:27.153 04:38:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:43:27.153 04:38:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:27.153 04:38:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:27.153 04:38:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:43:27.153 04:38:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:27.153 04:38:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:27.153 04:38:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:27.153 04:38:35 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:43:27.410 04:38:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:43:27.410 04:38:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:43:27.410 04:38:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:43:27.410 04:38:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:27.410 04:38:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:27.410 04:38:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:43:27.410 04:38:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:27.410 04:38:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:27.410 04:38:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:43:27.410 04:38:36 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:27.410 04:38:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:43:27.668 04:38:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:43:27.668 04:38:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:43:27.668 04:38:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:43:27.668 04:38:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:43:27.668 04:38:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:43:27.668 04:38:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:43:27.668 04:38:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:43:27.668 04:38:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:43:27.668 04:38:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:43:27.668 04:38:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:43:27.668 04:38:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:43:27.668 04:38:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:43:27.668 04:38:36 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:43:27.668 04:38:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:27.668 04:38:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:43:27.668 04:38:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:43:27.668 04:38:36 blockdev_crypto_qat.bdev_nbd -- 
bdev/nbd_common.sh@133 -- # local mkfs_ret 00:43:27.668 04:38:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:43:27.925 malloc_lvol_verify 00:43:27.925 04:38:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:43:28.183 78219b8d-c0e1-4b2c-8105-ad44d44fee17 00:43:28.183 04:38:36 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:43:28.440 d6e59d19-94cf-4400-ab9c-81db201d86f8 00:43:28.440 04:38:37 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:43:28.696 /dev/nbd0 00:43:28.696 04:38:37 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:43:28.696 mke2fs 1.46.5 (30-Dec-2021) 00:43:28.696 Discarding device blocks: 0/4096 done 00:43:28.696 Creating filesystem with 4096 1k blocks and 1024 inodes 00:43:28.696 00:43:28.696 Allocating group tables: 0/1 done 00:43:28.696 Writing inode tables: 0/1 done 00:43:28.696 Creating journal (1024 blocks): done 00:43:28.696 Writing superblocks and filesystem accounting information: 0/1 done 00:43:28.696 00:43:28.696 04:38:37 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:43:28.696 04:38:37 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:43:28.696 04:38:37 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:28.696 04:38:37 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0') 00:43:28.696 04:38:37 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:43:28.696 04:38:37 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:43:28.696 04:38:37 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:28.696 04:38:37 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:43:28.953 04:38:37 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:43:28.953 04:38:37 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:43:28.953 04:38:37 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:43:28.953 04:38:37 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:28.953 04:38:37 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:28.953 04:38:37 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:43:28.953 04:38:37 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:28.953 04:38:37 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:28.953 04:38:37 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:43:28.953 04:38:37 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:43:28.953 04:38:37 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 2928581 00:43:28.953 04:38:37 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 2928581 ']' 00:43:28.953 04:38:37 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 2928581 00:43:28.953 04:38:37 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:43:28.953 04:38:37 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
00:43:28.953 04:38:37 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2928581 00:43:28.954 04:38:37 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:43:28.954 04:38:37 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:43:28.954 04:38:37 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2928581' 00:43:28.954 killing process with pid 2928581 00:43:28.954 04:38:37 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@967 -- # kill 2928581 00:43:28.954 04:38:37 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@972 -- # wait 2928581 00:43:31.478 04:38:40 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:43:31.478 00:43:31.478 real 0m13.802s 00:43:31.478 user 0m16.767s 00:43:31.478 sys 0m4.009s 00:43:31.478 04:38:40 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:31.478 04:38:40 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:43:31.478 ************************************ 00:43:31.478 END TEST bdev_nbd 00:43:31.478 ************************************ 00:43:31.737 04:38:40 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 0 00:43:31.737 04:38:40 blockdev_crypto_qat -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:43:31.737 04:38:40 blockdev_crypto_qat -- bdev/blockdev.sh@763 -- # '[' crypto_qat = nvme ']' 00:43:31.737 04:38:40 blockdev_crypto_qat -- bdev/blockdev.sh@763 -- # '[' crypto_qat = gpt ']' 00:43:31.737 04:38:40 blockdev_crypto_qat -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:43:31.737 04:38:40 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:43:31.737 04:38:40 blockdev_crypto_qat -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:31.737 04:38:40 blockdev_crypto_qat -- 
common/autotest_common.sh@10 -- # set +x 00:43:31.737 ************************************ 00:43:31.737 START TEST bdev_fio 00:43:31.737 ************************************ 00:43:31.737 04:38:40 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:43:31.737 04:38:40 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:43:31.737 04:38:40 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:43:31.737 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:43:31.737 04:38:40 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:43:31.737 04:38:40 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:43:31.737 04:38:40 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:43:31.737 04:38:40 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:43:31.737 04:38:40 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:43:31.737 04:38:40 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:43:31.737 04:38:40 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:43:31.737 04:38:40 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:43:31.737 04:38:40 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:43:31.737 04:38:40 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:43:31.737 04:38:40 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:43:31.737 04:38:40 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:43:31.737 04:38:40 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:43:31.737 04:38:40 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:43:31.737 04:38:40 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:43:31.737 04:38:40 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:43:31.737 04:38:40 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:43:31.737 04:38:40 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:43:31.737 04:38:40 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:43:31.737 04:38:40 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:43:31.737 04:38:40 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:43:31.737 04:38:40 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:43:31.737 04:38:40 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram]' 00:43:31.737 04:38:40 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram 00:43:31.737 04:38:40 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:43:31.737 04:38:40 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram1]' 00:43:31.737 04:38:40 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram1 00:43:31.737 04:38:40 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:43:31.738 04:38:40 
blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram2]' 00:43:31.738 04:38:40 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram2 00:43:31.738 04:38:40 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:43:31.738 04:38:40 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram3]' 00:43:31.738 04:38:40 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram3 00:43:31.738 04:38:40 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:43:31.738 04:38:40 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:43:31.738 04:38:40 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:43:31.738 04:38:40 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:31.738 04:38:40 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:43:31.738 ************************************ 00:43:31.738 START TEST bdev_fio_rw_verify 00:43:31.738 ************************************ 00:43:31.738 04:38:40 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:43:31.738 04:38:40 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:43:31.738 04:38:40 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:43:31.738 04:38:40 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:31.738 04:38:40 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:43:31.738 04:38:40 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:43:31.738 04:38:40 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:43:31.738 04:38:40 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:43:31.738 04:38:40 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:43:31.738 04:38:40 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:43:31.738 04:38:40 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:43:31.738 04:38:40 
blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:43:31.738 04:38:40 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:31.738 04:38:40 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:31.738 04:38:40 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:43:31.738 04:38:40 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:31.738 04:38:40 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:43:32.381 job_crypto_ram: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:43:32.381 job_crypto_ram1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:43:32.381 job_crypto_ram2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:43:32.381 job_crypto_ram3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:43:32.381 fio-3.35 00:43:32.381 Starting 4 threads 00:43:32.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:32.381 EAL: Requested device 0000:3d:01.0 cannot be used 00:43:32.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:32.381 EAL: Requested device 
0000:3d:01.1 cannot be used 00:43:32.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:32.381 EAL: Requested device 0000:3d:01.2 cannot be used 00:43:32.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:32.381 EAL: Requested device 0000:3d:01.3 cannot be used 00:43:32.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:32.381 EAL: Requested device 0000:3d:01.4 cannot be used 00:43:32.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:32.381 EAL: Requested device 0000:3d:01.5 cannot be used 00:43:32.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:32.381 EAL: Requested device 0000:3d:01.6 cannot be used 00:43:32.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:32.381 EAL: Requested device 0000:3d:01.7 cannot be used 00:43:32.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:32.381 EAL: Requested device 0000:3d:02.0 cannot be used 00:43:32.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:32.381 EAL: Requested device 0000:3d:02.1 cannot be used 00:43:32.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:32.381 EAL: Requested device 0000:3d:02.2 cannot be used 00:43:32.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:32.381 EAL: Requested device 0000:3d:02.3 cannot be used 00:43:32.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:32.381 EAL: Requested device 0000:3d:02.4 cannot be used 00:43:32.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:32.381 EAL: Requested device 0000:3d:02.5 cannot be used 00:43:32.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:32.381 EAL: Requested device 0000:3d:02.6 cannot be used 00:43:32.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:32.381 EAL: Requested device 0000:3d:02.7 cannot be 
used 00:43:32.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:32.381 EAL: Requested device 0000:3f:01.0 cannot be used 00:43:32.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:32.381 EAL: Requested device 0000:3f:01.1 cannot be used 00:43:32.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:32.381 EAL: Requested device 0000:3f:01.2 cannot be used 00:43:32.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:32.381 EAL: Requested device 0000:3f:01.3 cannot be used 00:43:32.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:32.381 EAL: Requested device 0000:3f:01.4 cannot be used 00:43:32.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:32.381 EAL: Requested device 0000:3f:01.5 cannot be used 00:43:32.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:32.382 EAL: Requested device 0000:3f:01.6 cannot be used 00:43:32.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:32.382 EAL: Requested device 0000:3f:01.7 cannot be used 00:43:32.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:32.382 EAL: Requested device 0000:3f:02.0 cannot be used 00:43:32.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:32.382 EAL: Requested device 0000:3f:02.1 cannot be used 00:43:32.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:32.382 EAL: Requested device 0000:3f:02.2 cannot be used 00:43:32.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:32.382 EAL: Requested device 0000:3f:02.3 cannot be used 00:43:32.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:32.382 EAL: Requested device 0000:3f:02.4 cannot be used 00:43:32.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:32.382 EAL: Requested device 0000:3f:02.5 cannot be used 00:43:32.382 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:32.382 EAL: Requested device 0000:3f:02.6 cannot be used 00:43:32.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:32.382 EAL: Requested device 0000:3f:02.7 cannot be used 00:43:47.257 00:43:47.257 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=2931877: Tue Jul 23 04:38:55 2024 00:43:47.257 read: IOPS=22.8k, BW=89.2MiB/s (93.6MB/s)(892MiB/10001msec) 00:43:47.257 slat (usec): min=19, max=354, avg=58.56, stdev=34.08 00:43:47.257 clat (usec): min=23, max=2052, avg=331.89, stdev=208.27 00:43:47.257 lat (usec): min=54, max=2298, avg=390.45, stdev=226.69 00:43:47.257 clat percentiles (usec): 00:43:47.257 | 50.000th=[ 269], 99.000th=[ 996], 99.900th=[ 1156], 99.990th=[ 1467], 00:43:47.257 | 99.999th=[ 2040] 00:43:47.257 write: IOPS=25.3k, BW=98.7MiB/s (103MB/s)(959MiB/9721msec); 0 zone resets 00:43:47.257 slat (usec): min=28, max=479, avg=71.16, stdev=34.15 00:43:47.257 clat (usec): min=29, max=2208, avg=372.84, stdev=222.58 00:43:47.257 lat (usec): min=77, max=2293, avg=444.00, stdev=240.70 00:43:47.257 clat percentiles (usec): 00:43:47.257 | 50.000th=[ 322], 99.000th=[ 1074], 99.900th=[ 1254], 99.990th=[ 1614], 00:43:47.257 | 99.999th=[ 1811] 00:43:47.257 bw ( KiB/s): min=84200, max=126648, per=97.20%, avg=98214.32, stdev=2504.49, samples=76 00:43:47.257 iops : min=21050, max=31662, avg=24553.58, stdev=626.12, samples=76 00:43:47.257 lat (usec) : 50=0.01%, 100=2.65%, 250=37.01%, 500=39.01%, 750=14.90% 00:43:47.257 lat (usec) : 1000=4.93% 00:43:47.257 lat (msec) : 2=1.49%, 4=0.01% 00:43:47.257 cpu : usr=99.29%, sys=0.23%, ctx=68, majf=0, minf=24477 00:43:47.257 IO depths : 1=4.4%, 2=27.3%, 4=54.6%, 8=13.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:47.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:47.257 complete : 0=0.0%, 4=88.0%, 8=12.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:47.257 issued rwts: total=228460,245569,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:43:47.257 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:47.257 00:43:47.257 Run status group 0 (all jobs): 00:43:47.257 READ: bw=89.2MiB/s (93.6MB/s), 89.2MiB/s-89.2MiB/s (93.6MB/s-93.6MB/s), io=892MiB (936MB), run=10001-10001msec 00:43:47.257 WRITE: bw=98.7MiB/s (103MB/s), 98.7MiB/s-98.7MiB/s (103MB/s-103MB/s), io=959MiB (1006MB), run=9721-9721msec 00:43:49.158 ----------------------------------------------------- 00:43:49.158 Suppressions used: 00:43:49.158 count bytes template 00:43:49.158 4 47 /usr/src/fio/parse.c 00:43:49.158 2590 248640 /usr/src/fio/iolog.c 00:43:49.158 1 8 libtcmalloc_minimal.so 00:43:49.158 1 904 libcrypto.so 00:43:49.158 ----------------------------------------------------- 00:43:49.158 00:43:49.158 00:43:49.158 real 0m17.223s 00:43:49.158 user 0m56.999s 00:43:49.158 sys 0m0.960s 00:43:49.158 04:38:57 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:49.158 04:38:57 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:43:49.158 ************************************ 00:43:49.158 END TEST bdev_fio_rw_verify 00:43:49.158 ************************************ 00:43:49.158 04:38:57 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:43:49.158 04:38:57 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:43:49.158 04:38:57 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:43:49.158 04:38:57 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:43:49.158 04:38:57 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:43:49.158 04:38:57 blockdev_crypto_qat.bdev_fio -- 
common/autotest_common.sh@1281 -- # local workload=trim 00:43:49.158 04:38:57 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:43:49.158 04:38:57 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:43:49.158 04:38:57 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:43:49.158 04:38:57 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:43:49.158 04:38:57 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:43:49.158 04:38:57 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:43:49.158 04:38:57 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:43:49.158 04:38:57 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:43:49.158 04:38:57 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:43:49.158 04:38:57 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:43:49.158 04:38:57 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:43:49.158 04:38:57 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:43:49.159 04:38:57 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "9a1d8c44-5c5e-5fff-b582-0d3c7980d658"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "9a1d8c44-5c5e-5fff-b582-0d3c7980d658",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_qat_cbc"' ' }' ' }' '}' '{' ' "name": "crypto_ram1",' ' "aliases": [' ' "ca992b92-8ed7-50b4-be4c-9fddb0f14d74"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ca992b92-8ed7-50b4-be4c-9fddb0f14d74",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram1",' ' "key_name": "test_dek_qat_xts"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "8a955862-7a7e-51f1-bccf-a321e7f1d7de"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' 
"num_blocks": 8192,' ' "uuid": "8a955862-7a7e-51f1-bccf-a321e7f1d7de",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_qat_cbc2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "be9d4e37-2866-54de-b892-5dae98fc877c"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "be9d4e37-2866-54de-b892-5dae98fc877c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' 
"base_bdev_name": "Malloc3",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_qat_xts2"' ' }' ' }' '}' 00:43:49.159 04:38:57 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n crypto_ram 00:43:49.159 crypto_ram1 00:43:49.159 crypto_ram2 00:43:49.159 crypto_ram3 ]] 00:43:49.159 04:38:57 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:43:49.159 04:38:57 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "9a1d8c44-5c5e-5fff-b582-0d3c7980d658"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "9a1d8c44-5c5e-5fff-b582-0d3c7980d658",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_qat_cbc"' ' }' ' }' '}' '{' ' "name": "crypto_ram1",' ' "aliases": [' ' "ca992b92-8ed7-50b4-be4c-9fddb0f14d74"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ca992b92-8ed7-50b4-be4c-9fddb0f14d74",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' 
"claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram1",' ' "key_name": "test_dek_qat_xts"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "8a955862-7a7e-51f1-bccf-a321e7f1d7de"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "8a955862-7a7e-51f1-bccf-a321e7f1d7de",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_qat_cbc2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "be9d4e37-2866-54de-b892-5dae98fc877c"' ' ],' ' 
"product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "be9d4e37-2866-54de-b892-5dae98fc877c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_qat_xts2"' ' }' ' }' '}' 00:43:49.159 04:38:57 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:43:49.159 04:38:57 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram]' 00:43:49.159 04:38:57 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram 00:43:49.159 04:38:57 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:43:49.159 04:38:57 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram1]' 00:43:49.159 04:38:57 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram1 00:43:49.159 04:38:57 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 
00:43:49.159 04:38:57 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram2]' 00:43:49.159 04:38:57 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram2 00:43:49.159 04:38:57 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:43:49.159 04:38:57 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram3]' 00:43:49.159 04:38:57 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram3 00:43:49.159 04:38:57 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:43:49.159 04:38:57 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:43:49.159 04:38:57 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:49.159 04:38:57 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:43:49.159 ************************************ 00:43:49.159 START TEST bdev_fio_trim 00:43:49.159 ************************************ 00:43:49.159 04:38:57 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:43:49.159 04:38:57 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- 
common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:43:49.159 04:38:57 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:43:49.159 04:38:57 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:49.159 04:38:57 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:43:49.159 04:38:57 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:43:49.159 04:38:57 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:43:49.159 04:38:57 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:43:49.159 04:38:57 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:43:49.159 04:38:57 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:43:49.159 04:38:57 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:43:49.159 04:38:57 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:43:49.159 04:38:57 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:49.159 04:38:57 
blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:49.159 04:38:57 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1347 -- # break 00:43:49.159 04:38:57 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:49.160 04:38:57 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:43:49.725 job_crypto_ram: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:43:49.725 job_crypto_ram1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:43:49.725 job_crypto_ram2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:43:49.725 job_crypto_ram3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:43:49.725 fio-3.35 00:43:49.725 Starting 4 threads 00:43:49.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:49.725 EAL: Requested device 0000:3d:01.0 cannot be used 00:43:49.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:49.725 EAL: Requested device 0000:3d:01.1 cannot be used 00:43:49.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:49.725 EAL: Requested device 0000:3d:01.2 cannot be used 00:43:49.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:49.725 EAL: Requested 
device 0000:3d:01.3 cannot be used 00:43:49.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:49.725 EAL: Requested device 0000:3d:01.4 cannot be used 00:43:49.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:49.725 EAL: Requested device 0000:3d:01.5 cannot be used 00:43:49.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:49.725 EAL: Requested device 0000:3d:01.6 cannot be used 00:43:49.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:49.725 EAL: Requested device 0000:3d:01.7 cannot be used 00:43:49.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:49.725 EAL: Requested device 0000:3d:02.0 cannot be used 00:43:49.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:49.725 EAL: Requested device 0000:3d:02.1 cannot be used 00:43:49.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:49.725 EAL: Requested device 0000:3d:02.2 cannot be used 00:43:49.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:49.725 EAL: Requested device 0000:3d:02.3 cannot be used 00:43:49.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:49.725 EAL: Requested device 0000:3d:02.4 cannot be used 00:43:49.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:49.725 EAL: Requested device 0000:3d:02.5 cannot be used 00:43:49.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:49.983 EAL: Requested device 0000:3d:02.6 cannot be used 00:43:49.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:49.983 EAL: Requested device 0000:3d:02.7 cannot be used 00:43:49.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:49.983 EAL: Requested device 0000:3f:01.0 cannot be used 00:43:49.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:49.983 EAL: Requested device 0000:3f:01.1 
cannot be used 00:43:49.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:49.983 EAL: Requested device 0000:3f:01.2 cannot be used 00:43:49.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:49.983 EAL: Requested device 0000:3f:01.3 cannot be used 00:43:49.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:49.984 EAL: Requested device 0000:3f:01.4 cannot be used 00:43:49.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:49.984 EAL: Requested device 0000:3f:01.5 cannot be used 00:43:49.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:49.984 EAL: Requested device 0000:3f:01.6 cannot be used 00:43:49.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:49.984 EAL: Requested device 0000:3f:01.7 cannot be used 00:43:49.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:49.984 EAL: Requested device 0000:3f:02.0 cannot be used 00:43:49.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:49.984 EAL: Requested device 0000:3f:02.1 cannot be used 00:43:49.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:49.984 EAL: Requested device 0000:3f:02.2 cannot be used 00:43:49.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:49.984 EAL: Requested device 0000:3f:02.3 cannot be used 00:43:49.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:49.984 EAL: Requested device 0000:3f:02.4 cannot be used 00:43:49.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:49.984 EAL: Requested device 0000:3f:02.5 cannot be used 00:43:49.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:49.984 EAL: Requested device 0000:3f:02.6 cannot be used 00:43:49.984 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:49.984 EAL: Requested device 0000:3f:02.7 cannot be used 
00:44:04.852 
00:44:04.852 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=2934782: Tue Jul 23 04:39:12 2024
00:44:04.852 write: IOPS=35.4k, BW=138MiB/s (145MB/s)(1383MiB/10001msec); 0 zone resets
00:44:04.852 slat (usec): min=19, max=560, avg=68.11, stdev=33.35
00:44:04.852 clat (usec): min=44, max=1108, avg=233.40, stdev=122.97
00:44:04.852 lat (usec): min=68, max=1265, avg=301.51, stdev=138.17
00:44:04.852 clat percentiles (usec):
00:44:04.852 | 50.000th=[ 215], 99.000th=[ 586], 99.900th=[ 701], 99.990th=[ 799],
00:44:04.852 | 99.999th=[ 955]
00:44:04.852 bw ( KiB/s): min=135232, max=149440, per=100.00%, avg=141761.68, stdev=1029.81, samples=76
00:44:04.852 iops : min=33808, max=37360, avg=35440.42, stdev=257.45, samples=76
00:44:04.852 trim: IOPS=35.4k, BW=138MiB/s (145MB/s)(1383MiB/10001msec); 0 zone resets
00:44:04.852 slat (usec): min=6, max=148, avg=17.69, stdev= 5.57
00:44:04.852 clat (usec): min=7, max=1266, avg=301.72, stdev=138.18
00:44:04.852 lat (usec): min=14, max=1300, avg=319.41, stdev=139.16
00:44:04.852 clat percentiles (usec):
00:44:04.852 | 50.000th=[ 277], 99.000th=[ 701], 99.900th=[ 816], 99.990th=[ 930],
00:44:04.852 | 99.999th=[ 1139]
00:44:04.852 bw ( KiB/s): min=135232, max=149440, per=100.00%, avg=141761.68, stdev=1029.81, samples=76
00:44:04.852 iops : min=33808, max=37360, avg=35440.42, stdev=257.45, samples=76
00:44:04.852 lat (usec) : 10=0.01%, 50=0.01%, 100=6.84%, 250=44.05%, 500=42.08%
00:44:04.852 lat (usec) : 750=6.81%, 1000=0.21%
00:44:04.852 lat (msec) : 2=0.01%
00:44:04.852 cpu : usr=99.50%, sys=0.07%, ctx=77, majf=0, minf=7681
00:44:04.852 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0%
00:44:04.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:04.852 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:04.852 issued rwts: total=0,354020,354021,0 short=0,0,0,0 dropped=0,0,0,0
00:44:04.852 latency : target=0, window=0, percentile=100.00%, depth=8
00:44:04.852 
00:44:04.852 Run status group 0 (all jobs):
00:44:04.852 WRITE: bw=138MiB/s (145MB/s), 138MiB/s-138MiB/s (145MB/s-145MB/s), io=1383MiB (1450MB), run=10001-10001msec
00:44:04.852 TRIM: bw=138MiB/s (145MB/s), 138MiB/s-138MiB/s (145MB/s-145MB/s), io=1383MiB (1450MB), run=10001-10001msec
00:44:06.224 -----------------------------------------------------
00:44:06.224 Suppressions used:
00:44:06.224 count bytes template
00:44:06.224 4 47 /usr/src/fio/parse.c
00:44:06.224 1 8 libtcmalloc_minimal.so
00:44:06.224 1 904 libcrypto.so
00:44:06.224 -----------------------------------------------------
00:44:06.224 
00:44:06.224 
00:44:06.224 real 0m17.050s
00:44:06.224 user 0m57.851s
00:44:06.224 sys 0m0.862s
00:44:06.224 04:39:14 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1124 -- # xtrace_disable
00:44:06.224 04:39:14 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x
00:44:06.224 ************************************
00:44:06.224 END TEST bdev_fio_trim
00:44:06.224 ************************************
00:44:06.225 04:39:14 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1142 -- # return 0
00:44:06.225 04:39:14 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f
00:44:06.225 04:39:14 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio
00:44:06.225 04:39:14 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@369 -- # popd
00:44:06.225 /var/jenkins/workspace/crypto-phy-autotest/spdk
00:44:06.225 04:39:14 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT SIGTERM EXIT
00:44:06.225 
00:44:06.225 real 0m34.637s
00:44:06.225 user 1m55.042s
00:44:06.225 sys 0m2.017s
00:44:06.225 04:39:14 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable
00:44:06.225 04:39:14 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:44:06.225 
************************************ 00:44:06.225 END TEST bdev_fio 00:44:06.225 ************************************ 00:44:06.482 04:39:15 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 0 00:44:06.482 04:39:15 blockdev_crypto_qat -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:44:06.482 04:39:15 blockdev_crypto_qat -- bdev/blockdev.sh@776 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:44:06.482 04:39:15 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:44:06.482 04:39:15 blockdev_crypto_qat -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:06.482 04:39:15 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:44:06.482 ************************************ 00:44:06.482 START TEST bdev_verify 00:44:06.482 ************************************ 00:44:06.482 04:39:15 blockdev_crypto_qat.bdev_verify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:44:06.482 [2024-07-23 04:39:15.172910] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:44:06.482 [2024-07-23 04:39:15.173028] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2937325 ] 00:44:06.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:06.740 EAL: Requested device 0000:3d:01.0 cannot be used 00:44:06.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:06.740 EAL: Requested device 0000:3d:01.1 cannot be used 00:44:06.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:06.740 EAL: Requested device 0000:3d:01.2 cannot be used 00:44:06.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:06.740 EAL: Requested device 0000:3d:01.3 cannot be used 00:44:06.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:06.740 EAL: Requested device 0000:3d:01.4 cannot be used 00:44:06.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:06.740 EAL: Requested device 0000:3d:01.5 cannot be used 00:44:06.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:06.740 EAL: Requested device 0000:3d:01.6 cannot be used 00:44:06.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:06.740 EAL: Requested device 0000:3d:01.7 cannot be used 00:44:06.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:06.740 EAL: Requested device 0000:3d:02.0 cannot be used 00:44:06.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:06.740 EAL: Requested device 0000:3d:02.1 cannot be used 00:44:06.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:06.740 EAL: Requested device 0000:3d:02.2 cannot be used 00:44:06.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:06.740 EAL: Requested device 0000:3d:02.3 cannot be used 
00:44:06.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:06.740 EAL: Requested device 0000:3d:02.4 cannot be used 00:44:06.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:06.740 EAL: Requested device 0000:3d:02.5 cannot be used 00:44:06.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:06.740 EAL: Requested device 0000:3d:02.6 cannot be used 00:44:06.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:06.740 EAL: Requested device 0000:3d:02.7 cannot be used 00:44:06.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:06.740 EAL: Requested device 0000:3f:01.0 cannot be used 00:44:06.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:06.740 EAL: Requested device 0000:3f:01.1 cannot be used 00:44:06.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:06.740 EAL: Requested device 0000:3f:01.2 cannot be used 00:44:06.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:06.740 EAL: Requested device 0000:3f:01.3 cannot be used 00:44:06.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:06.740 EAL: Requested device 0000:3f:01.4 cannot be used 00:44:06.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:06.740 EAL: Requested device 0000:3f:01.5 cannot be used 00:44:06.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:06.740 EAL: Requested device 0000:3f:01.6 cannot be used 00:44:06.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:06.740 EAL: Requested device 0000:3f:01.7 cannot be used 00:44:06.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:06.740 EAL: Requested device 0000:3f:02.0 cannot be used 00:44:06.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:06.740 EAL: Requested device 0000:3f:02.1 cannot be used 00:44:06.740 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:06.740 EAL: Requested device 0000:3f:02.2 cannot be used 00:44:06.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:06.740 EAL: Requested device 0000:3f:02.3 cannot be used 00:44:06.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:06.740 EAL: Requested device 0000:3f:02.4 cannot be used 00:44:06.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:06.740 EAL: Requested device 0000:3f:02.5 cannot be used 00:44:06.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:06.740 EAL: Requested device 0000:3f:02.6 cannot be used 00:44:06.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:06.740 EAL: Requested device 0000:3f:02.7 cannot be used 00:44:06.740 [2024-07-23 04:39:15.398187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:44:06.997 [2024-07-23 04:39:15.694148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:06.997 [2024-07-23 04:39:15.694152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:44:06.997 [2024-07-23 04:39:15.715951] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:44:06.997 [2024-07-23 04:39:15.723977] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:44:06.997 [2024-07-23 04:39:15.731986] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:44:07.561 [2024-07-23 04:39:16.129161] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:44:10.874 [2024-07-23 04:39:19.002332] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:44:10.874 [2024-07-23 04:39:19.002409] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:44:10.874 
[2024-07-23 04:39:19.002432] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:44:10.874 [2024-07-23 04:39:19.010347] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts"
00:44:10.874 [2024-07-23 04:39:19.010384] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:44:10.875 [2024-07-23 04:39:19.010401] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:44:10.875 [2024-07-23 04:39:19.018391] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2"
00:44:10.875 [2024-07-23 04:39:19.018427] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:44:10.875 [2024-07-23 04:39:19.018443] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:44:10.875 [2024-07-23 04:39:19.026394] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2"
00:44:10.875 [2024-07-23 04:39:19.026427] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:44:10.875 [2024-07-23 04:39:19.026442] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:44:10.875 Running I/O for 5 seconds... 
00:44:16.133 
00:44:16.133 Latency(us)
00:44:16.133 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:44:16.133 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:44:16.133 Verification LBA range: start 0x0 length 0x1000
00:44:16.133 crypto_ram : 5.06 455.01 1.78 0.00 0.00 280749.74 7602.18 171966.46
00:44:16.133 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:44:16.133 Verification LBA range: start 0x1000 length 0x1000
00:44:16.133 crypto_ram : 5.07 454.54 1.78 0.00 0.00 281025.56 7497.32 171966.46
00:44:16.133 Job: crypto_ram1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:44:16.133 Verification LBA range: start 0x0 length 0x1000
00:44:16.133 crypto_ram1 : 5.06 454.91 1.78 0.00 0.00 280066.09 8178.89 171966.46
00:44:16.133 Job: crypto_ram1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:44:16.133 Verification LBA range: start 0x1000 length 0x1000
00:44:16.133 crypto_ram1 : 5.07 454.44 1.78 0.00 0.00 280298.20 8021.61 171966.46
00:44:16.133 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:44:16.133 Verification LBA range: start 0x0 length 0x1000
00:44:16.133 crypto_ram2 : 5.04 3529.14 13.79 0.00 0.00 35962.03 7969.18 31667.00
00:44:16.133 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:44:16.133 Verification LBA range: start 0x1000 length 0x1000
00:44:16.133 crypto_ram2 : 5.06 3531.39 13.79 0.00 0.00 35911.20 4220.52 31667.00
00:44:16.133 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:44:16.133 Verification LBA range: start 0x0 length 0x1000
00:44:16.133 crypto_ram3 : 5.06 3543.32 13.84 0.00 0.00 35743.94 3801.09 31037.85
00:44:16.133 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:44:16.133 Verification LBA range: start 0x1000 length 0x1000
00:44:16.133 crypto_ram3 : 5.06 3539.59 13.83 0.00 0.00 35766.22 3617.59 31037.85
00:44:16.133 ===================================================================================================================
00:44:16.133 Total : 15962.34 62.35 0.00 0.00 63787.90 3617.59 171966.46
00:44:18.659 
00:44:18.659 real 0m12.114s
00:44:18.659 user 0m22.314s
00:44:18.659 sys 0m0.556s
00:44:18.659 04:39:27 blockdev_crypto_qat.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:44:18.659 04:39:27 blockdev_crypto_qat.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:44:18.659 ************************************
00:44:18.659 END TEST bdev_verify
00:44:18.659 ************************************
00:44:18.659 04:39:27 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 0
00:44:18.659 04:39:27 blockdev_crypto_qat -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:44:18.659 04:39:27 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']'
00:44:18.659 04:39:27 blockdev_crypto_qat -- common/autotest_common.sh@1105 -- # xtrace_disable
00:44:18.659 04:39:27 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x
00:44:18.659 ************************************
00:44:18.659 START TEST bdev_verify_big_io
00:44:18.659 ************************************
00:44:18.659 04:39:27 blockdev_crypto_qat.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:44:18.659 [2024-07-23 04:39:27.372578] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:44:18.659 [2024-07-23 04:39:27.372703] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2939246 ] 00:44:18.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:18.917 EAL: Requested device 0000:3d:01.0 cannot be used 00:44:18.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:18.917 EAL: Requested device 0000:3d:01.1 cannot be used 00:44:18.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:18.917 EAL: Requested device 0000:3d:01.2 cannot be used 00:44:18.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:18.917 EAL: Requested device 0000:3d:01.3 cannot be used 00:44:18.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:18.917 EAL: Requested device 0000:3d:01.4 cannot be used 00:44:18.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:18.917 EAL: Requested device 0000:3d:01.5 cannot be used 00:44:18.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:18.917 EAL: Requested device 0000:3d:01.6 cannot be used 00:44:18.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:18.917 EAL: Requested device 0000:3d:01.7 cannot be used 00:44:18.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:18.917 EAL: Requested device 0000:3d:02.0 cannot be used 00:44:18.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:18.917 EAL: Requested device 0000:3d:02.1 cannot be used 00:44:18.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:18.917 EAL: Requested device 0000:3d:02.2 cannot be used 00:44:18.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:18.917 EAL: Requested device 0000:3d:02.3 cannot be used 
00:44:18.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:18.917 EAL: Requested device 0000:3d:02.4 cannot be used 00:44:18.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:18.917 EAL: Requested device 0000:3d:02.5 cannot be used 00:44:18.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:18.917 EAL: Requested device 0000:3d:02.6 cannot be used 00:44:18.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:18.917 EAL: Requested device 0000:3d:02.7 cannot be used 00:44:18.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:18.917 EAL: Requested device 0000:3f:01.0 cannot be used 00:44:18.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:18.917 EAL: Requested device 0000:3f:01.1 cannot be used 00:44:18.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:18.917 EAL: Requested device 0000:3f:01.2 cannot be used 00:44:18.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:18.917 EAL: Requested device 0000:3f:01.3 cannot be used 00:44:18.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:18.917 EAL: Requested device 0000:3f:01.4 cannot be used 00:44:18.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:18.917 EAL: Requested device 0000:3f:01.5 cannot be used 00:44:18.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:18.917 EAL: Requested device 0000:3f:01.6 cannot be used 00:44:18.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:18.917 EAL: Requested device 0000:3f:01.7 cannot be used 00:44:18.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:18.917 EAL: Requested device 0000:3f:02.0 cannot be used 00:44:18.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:18.917 EAL: Requested device 0000:3f:02.1 cannot be used 00:44:18.917 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:18.917 EAL: Requested device 0000:3f:02.2 cannot be used 00:44:18.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:18.917 EAL: Requested device 0000:3f:02.3 cannot be used 00:44:18.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:18.917 EAL: Requested device 0000:3f:02.4 cannot be used 00:44:18.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:18.917 EAL: Requested device 0000:3f:02.5 cannot be used 00:44:18.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:18.917 EAL: Requested device 0000:3f:02.6 cannot be used 00:44:18.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:18.917 EAL: Requested device 0000:3f:02.7 cannot be used 00:44:18.917 [2024-07-23 04:39:27.597934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:44:19.176 [2024-07-23 04:39:27.890368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:19.176 [2024-07-23 04:39:27.890373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:44:19.176 [2024-07-23 04:39:27.912197] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:44:19.176 [2024-07-23 04:39:27.920220] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:44:19.176 [2024-07-23 04:39:27.928242] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:44:19.740 [2024-07-23 04:39:28.323630] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:44:23.024 [2024-07-23 04:39:31.174443] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:44:23.024 [2024-07-23 04:39:31.174515] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:44:23.024 
[2024-07-23 04:39:31.174537] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:44:23.024 [2024-07-23 04:39:31.182456] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:44:23.024 [2024-07-23 04:39:31.182494] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:44:23.024 [2024-07-23 04:39:31.182510] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:44:23.024 [2024-07-23 04:39:31.190495] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:44:23.024 [2024-07-23 04:39:31.190530] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:44:23.024 [2024-07-23 04:39:31.190545] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:44:23.024 [2024-07-23 04:39:31.198500] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:44:23.024 [2024-07-23 04:39:31.198533] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:44:23.024 [2024-07-23 04:39:31.198547] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:44:23.024 Running I/O for 5 seconds... 00:44:23.593 [2024-07-23 04:39:32.261183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.593 [2024-07-23 04:39:32.261604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.593 [2024-07-23 04:39:32.261989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.593 [2024-07-23 04:39:32.262370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.593 [2024-07-23 04:39:32.262441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.593 [2024-07-23 04:39:32.262490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.593 [2024-07-23 04:39:32.262539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.593 [2024-07-23 04:39:32.262585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.593 [2024-07-23 04:39:32.263012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.593 [2024-07-23 04:39:32.263043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.593 [2024-07-23 04:39:32.263061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.593 [2024-07-23 04:39:32.263078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.593 [2024-07-23 04:39:32.266481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.593 [2024-07-23 04:39:32.266545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.593 [2024-07-23 04:39:32.266593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.593 [2024-07-23 04:39:32.266639] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.593 [2024-07-23 04:39:32.267101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.593 [2024-07-23 04:39:32.267160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.596 [... previous message repeated many times through 2024-07-23 04:39:32.365216; duplicate "Failed to get src_mbufs!" lines elided ...] 
00:44:23.596 [2024-07-23 04:39:32.365276] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.596 [2024-07-23 04:39:32.365335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.596 [2024-07-23 04:39:32.365744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.596 [2024-07-23 04:39:32.365794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.596 [2024-07-23 04:39:32.365839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.596 [2024-07-23 04:39:32.365885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.596 [2024-07-23 04:39:32.366326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.596 [2024-07-23 04:39:32.366350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.596 [2024-07-23 04:39:32.366367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.596 [2024-07-23 04:39:32.366385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.596 [2024-07-23 04:39:32.369219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.596 [2024-07-23 04:39:32.369290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.596 [2024-07-23 04:39:32.369335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.596 [2024-07-23 04:39:32.369381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.596 [2024-07-23 04:39:32.369833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.596 [2024-07-23 04:39:32.369884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.596 [2024-07-23 04:39:32.369929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.596 [2024-07-23 04:39:32.369979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.596 [2024-07-23 04:39:32.370396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.596 [2024-07-23 04:39:32.370418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.596 [2024-07-23 04:39:32.370438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.596 [2024-07-23 04:39:32.370457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.596 [2024-07-23 04:39:32.373230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.596 [2024-07-23 04:39:32.373290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.596 [2024-07-23 04:39:32.373335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.596 [2024-07-23 04:39:32.373381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.596 [2024-07-23 04:39:32.373842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.596 [2024-07-23 04:39:32.373895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.857 [2024-07-23 04:39:32.373941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.857 [2024-07-23 04:39:32.373988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.857 [2024-07-23 04:39:32.374429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.857 [2024-07-23 04:39:32.374453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.857 [2024-07-23 04:39:32.374471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.857 [2024-07-23 04:39:32.374488] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.857 [2024-07-23 04:39:32.377249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.857 [2024-07-23 04:39:32.377309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.857 [2024-07-23 04:39:32.377355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.857 [2024-07-23 04:39:32.377400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.857 [2024-07-23 04:39:32.377866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.857 [2024-07-23 04:39:32.377916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.857 [2024-07-23 04:39:32.377962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.857 [2024-07-23 04:39:32.378009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.857 [2024-07-23 04:39:32.378444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.857 [2024-07-23 04:39:32.378467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.857 [2024-07-23 04:39:32.378484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.857 [2024-07-23 04:39:32.378504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.857 [2024-07-23 04:39:32.381203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.857 [2024-07-23 04:39:32.381262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.857 [2024-07-23 04:39:32.381310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.857 [2024-07-23 04:39:32.381356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.857 [2024-07-23 04:39:32.381830] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.857 [2024-07-23 04:39:32.381881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.857 [2024-07-23 04:39:32.381927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.381974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.382344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.382366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.382383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.382400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.384569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.384632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.384677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.384722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.385081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.385129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.385182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.858 [2024-07-23 04:39:32.385227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.385535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.385555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.385571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.385587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.388230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.388288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.388335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.388380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.388765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.388814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.388859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.388903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.858 [2024-07-23 04:39:32.389250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.389271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.389288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.389305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.391339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.391398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.391443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.391480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.391829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.391879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.391924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.391968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.392370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.858 [2024-07-23 04:39:32.392391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.392409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.392426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.396268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.397707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.399148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.400247] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.401766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.403184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.404589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.405187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.405657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.405679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.858 [2024-07-23 04:39:32.405697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.405714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.409689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.411131] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.412427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.413573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.415369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.416826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.417567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.417974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.418423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.418447] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.418464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.858 [2024-07-23 04:39:32.418482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.422325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.423691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.424717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.425919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.427677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.428587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.428984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.429384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.429795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.429817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.429834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.429851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.858 [2024-07-23 04:39:32.433531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.434347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.435519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.436952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.438407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.438810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.439211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.439606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.858 [2024-07-23 04:39:32.440027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.859 [2024-07-23 04:39:32.440056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.859 [2024-07-23 04:39:32.440074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.859 [2024-07-23 04:39:32.440092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.859 [2024-07-23 04:39:32.442668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.859 [2024-07-23 04:39:32.443861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.859 [2024-07-23 04:39:32.445315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.859 [2024-07-23 04:39:32.446764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.859 [2024-07-23 04:39:32.447501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.859 [2024-07-23 04:39:32.447902] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.859 [2024-07-23 04:39:32.448304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.859 [2024-07-23 04:39:32.448702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.859 [2024-07-23 04:39:32.449137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.859 [2024-07-23 04:39:32.449167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.859 [2024-07-23 04:39:32.449186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.859 [2024-07-23 04:39:32.449203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.859 [2024-07-23 04:39:32.452638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.859 [2024-07-23 04:39:32.454192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.859 [2024-07-23 04:39:32.455663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.859 [2024-07-23 04:39:32.457034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.859 [2024-07-23 04:39:32.457833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.859 [2024-07-23 04:39:32.458241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.859 [2024-07-23 04:39:32.458635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.859 [2024-07-23 04:39:32.459027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.859 [2024-07-23 04:39:32.459373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.859 [2024-07-23 04:39:32.459395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.859 [2024-07-23 04:39:32.459412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.859 [2024-07-23 04:39:32.459429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.859 [2024-07-23 04:39:32.462697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.859 [2024-07-23 04:39:32.464176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.859 [2024-07-23 04:39:32.465759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.859 [2024-07-23 04:39:32.466170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:24.124 [... the same "Failed to get src_mbufs!" error from accel_dpdk_cryptodev.c:468 (accel_dpdk_cryptodev_task_alloc_resources) repeated continuously from 2024-07-23 04:39:32.466170 through 2024-07-23 04:39:32.694524; duplicate lines omitted ...]
00:44:24.124 [2024-07-23 04:39:32.697111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.124 [2024-07-23 04:39:32.697525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.124 [2024-07-23 04:39:32.697923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.124 [2024-07-23 04:39:32.698332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.124 [2024-07-23 04:39:32.699149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.124 [2024-07-23 04:39:32.699553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.124 [2024-07-23 04:39:32.699949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.124 [2024-07-23 04:39:32.700349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.124 [2024-07-23 04:39:32.700777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.124 [2024-07-23 04:39:32.700800] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.124 [2024-07-23 04:39:32.700819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.124 [2024-07-23 04:39:32.700835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.124 [2024-07-23 04:39:32.703534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.124 [2024-07-23 04:39:32.703940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.124 [2024-07-23 04:39:32.704345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.124 [2024-07-23 04:39:32.704393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.124 [2024-07-23 04:39:32.705253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.124 [2024-07-23 04:39:32.705664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.124 [2024-07-23 04:39:32.706059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.124 [2024-07-23 04:39:32.706464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.124 [2024-07-23 04:39:32.706890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.124 [2024-07-23 04:39:32.706912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.124 [2024-07-23 04:39:32.706931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.124 [2024-07-23 04:39:32.706950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.124 [2024-07-23 04:39:32.709572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.124 [2024-07-23 04:39:32.709978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.124 [2024-07-23 04:39:32.710383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.124 [2024-07-23 04:39:32.710783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.124 [2024-07-23 04:39:32.710839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.124 [2024-07-23 04:39:32.711289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.124 [2024-07-23 04:39:32.711700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.124 [2024-07-23 04:39:32.712094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.124 [2024-07-23 04:39:32.712498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.124 [2024-07-23 04:39:32.712913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.124 [2024-07-23 04:39:32.713314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.124 [2024-07-23 04:39:32.713336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.124 [2024-07-23 04:39:32.713356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.124 [2024-07-23 04:39:32.713373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.124 [2024-07-23 04:39:32.715733] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.124 [2024-07-23 04:39:32.715792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.124 [2024-07-23 04:39:32.715854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.124 [2024-07-23 04:39:32.715911] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.124 [2024-07-23 04:39:32.716326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.124 [2024-07-23 04:39:32.716400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.124 [2024-07-23 04:39:32.716447] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.124 [2024-07-23 04:39:32.716505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.124 [2024-07-23 04:39:32.716565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.124 [2024-07-23 04:39:32.716940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.124 [2024-07-23 04:39:32.716961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.716979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.716996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.719325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.125 [2024-07-23 04:39:32.719384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.719430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.719489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.719875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.719945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.719992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.720045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.720092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.720543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.720566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.720584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.720602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.722924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.125 [2024-07-23 04:39:32.722984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.723029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.723075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.723492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.723554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.723602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.723648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.723694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.724132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.724162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.724181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.724198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.726493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.125 [2024-07-23 04:39:32.726552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.726597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.726642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.727061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.727120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.727174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.727221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.727267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.727681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.727702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.727719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.727737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.729982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.125 [2024-07-23 04:39:32.730042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.730087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.730132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.730554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.730625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.730671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.730716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.730761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.731196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.731219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.731237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.731254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.733520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.125 [2024-07-23 04:39:32.733579] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.733625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.733673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.734129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.734196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.734244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.734290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.734336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.734734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.734755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.734771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.734789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.737040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.125 [2024-07-23 04:39:32.737098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.737149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.737195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.737633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.737692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.737739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.737785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.737845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.738231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.738253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.738270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.738288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.740565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.125 [2024-07-23 04:39:32.740624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.125 [2024-07-23 04:39:32.740676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.126 [2024-07-23 04:39:32.740723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.126 [2024-07-23 04:39:32.741111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.126 [2024-07-23 04:39:32.741188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.126 [2024-07-23 04:39:32.741236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.126 [2024-07-23 04:39:32.741282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.126 [2024-07-23 04:39:32.741328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.126 [2024-07-23 04:39:32.741682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.126 [2024-07-23 04:39:32.741705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.126 [2024-07-23 04:39:32.741722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.126 [2024-07-23 04:39:32.741739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.126 [2024-07-23 04:39:32.744049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.126 [2024-07-23 04:39:32.744120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.126 [2024-07-23 04:39:32.744174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.126 [2024-07-23 04:39:32.744233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.126 [2024-07-23 04:39:32.744608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.126 [2024-07-23 04:39:32.744683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.126 [2024-07-23 04:39:32.744753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.126 [2024-07-23 04:39:32.744810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.126 [2024-07-23 04:39:32.744856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.126 [2024-07-23 04:39:32.745249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.126 [2024-07-23 04:39:32.745271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.126 [2024-07-23 04:39:32.745288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.126 [2024-07-23 04:39:32.745305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.126 [2024-07-23 04:39:32.747568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.126 [2024-07-23 04:39:32.747630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.126 [2024-07-23 04:39:32.747675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.126 [2024-07-23 04:39:32.747733] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.126 [2024-07-23 04:39:32.748108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.126 [2024-07-23 04:39:32.748184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.126 [2024-07-23 04:39:32.748233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.126 [2024-07-23 04:39:32.748279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.126 [2024-07-23 04:39:32.748324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.126 [2024-07-23 04:39:32.748753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.126 [2024-07-23 04:39:32.748775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.126 [2024-07-23 04:39:32.748792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.126 [2024-07-23 04:39:32.748811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.126 [2024-07-23 04:39:32.751072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.126 [2024-07-23 04:39:32.751131] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.130 [2024-07-23 04:39:32.821097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.821149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.822718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.823035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.823102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.823157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.823203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.823249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.823649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.823670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.823687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.823704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.827030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.130 [2024-07-23 04:39:32.828482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.829927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.830590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.830941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.832425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.833877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.835124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.835532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.835960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.835982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.835999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.836015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.839446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.130 [2024-07-23 04:39:32.840896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.841461] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.842749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.843065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.844660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.846116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.846515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.846909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.847312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.847334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.847351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.847369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.850776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.130 [2024-07-23 04:39:32.851601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.853165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.854607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.854925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.856365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.856768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.857171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.857565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.858001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.858024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.858041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.858063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.861030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.130 [2024-07-23 04:39:32.862351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.863555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.865005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.865329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.865928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.866330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.866724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.130 [2024-07-23 04:39:32.867116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.131 [2024-07-23 04:39:32.867551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.131 [2024-07-23 04:39:32.867574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.131 [2024-07-23 04:39:32.867592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.131 [2024-07-23 04:39:32.867609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.131 [2024-07-23 04:39:32.870338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.131 [2024-07-23 04:39:32.871525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.131 [2024-07-23 04:39:32.872990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.131 [2024-07-23 04:39:32.874450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.131 [2024-07-23 04:39:32.874825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.131 [2024-07-23 04:39:32.875246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.131 [2024-07-23 04:39:32.875642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.131 [2024-07-23 04:39:32.876035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.131 [2024-07-23 04:39:32.876436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.131 [2024-07-23 04:39:32.876767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.131 [2024-07-23 04:39:32.876788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.131 [2024-07-23 04:39:32.876804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.131 [2024-07-23 04:39:32.876822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.131 [2024-07-23 04:39:32.879767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.131 [2024-07-23 04:39:32.881209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.131 [2024-07-23 04:39:32.882649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.131 [2024-07-23 04:39:32.883492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.131 [2024-07-23 04:39:32.883946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.131 [2024-07-23 04:39:32.884363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.131 [2024-07-23 04:39:32.884758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.131 [2024-07-23 04:39:32.885158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.131 [2024-07-23 04:39:32.886293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.131 [2024-07-23 04:39:32.886650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.131 [2024-07-23 04:39:32.886672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.131 [2024-07-23 04:39:32.886688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.131 [2024-07-23 04:39:32.886705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.131 [2024-07-23 04:39:32.889913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.131 [2024-07-23 04:39:32.891375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.131 [2024-07-23 04:39:32.892435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.131 [2024-07-23 04:39:32.892828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.131 [2024-07-23 04:39:32.893260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.131 [2024-07-23 04:39:32.893669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.131 [2024-07-23 04:39:32.894064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.131 [2024-07-23 04:39:32.894949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.131 [2024-07-23 04:39:32.896126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.131 [2024-07-23 04:39:32.896451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.131 [2024-07-23 04:39:32.896473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.131 [2024-07-23 04:39:32.896489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.131 [2024-07-23 04:39:32.896507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.131 [2024-07-23 04:39:32.899747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.131 [2024-07-23 04:39:32.901056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.131 [2024-07-23 04:39:32.901459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.131 [2024-07-23 04:39:32.901852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.131 [2024-07-23 04:39:32.902262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.393 [2024-07-23 04:39:32.902673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.393 [2024-07-23 04:39:32.903345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.393 [2024-07-23 04:39:32.904464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.393 [2024-07-23 04:39:32.905908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.393 [2024-07-23 04:39:32.906233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.393 [2024-07-23 04:39:32.906255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.393 [2024-07-23 04:39:32.906272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.393 [2024-07-23 04:39:32.906289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.393 [2024-07-23 04:39:32.909579] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.393 [2024-07-23 04:39:32.909986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.393 [2024-07-23 04:39:32.910388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.393 [2024-07-23 04:39:32.910780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.393 [2024-07-23 04:39:32.911216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.393 [2024-07-23 04:39:32.911673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.393 [2024-07-23 04:39:32.912942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.393 [2024-07-23 04:39:32.914348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.393 [2024-07-23 04:39:32.915773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.393 [2024-07-23 04:39:32.916210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.393 [2024-07-23 04:39:32.916234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.393 [2024-07-23 04:39:32.916251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.393 [2024-07-23 04:39:32.916268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.393 [2024-07-23 04:39:32.918421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.393 [2024-07-23 04:39:32.918825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.393 [2024-07-23 04:39:32.919226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.393 [2024-07-23 04:39:32.919624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.393 [2024-07-23 04:39:32.920002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.393 [2024-07-23 04:39:32.921164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.394 [2024-07-23 04:39:32.922618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.394 [2024-07-23 04:39:32.924066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.394 [2024-07-23 04:39:32.924982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.394 [2024-07-23 04:39:32.925304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.394 [2024-07-23 04:39:32.925326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.394 [2024-07-23 04:39:32.925343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.394 [2024-07-23 04:39:32.925361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.394 [2024-07-23 04:39:32.927629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.394 [2024-07-23 04:39:32.928036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.394 [2024-07-23 04:39:32.928437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.394 [2024-07-23 04:39:32.929248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.394 [2024-07-23 04:39:32.929586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.394 [2024-07-23 04:39:32.931029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.394 [2024-07-23 04:39:32.932485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.394 [2024-07-23 04:39:32.933582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.394 [2024-07-23 04:39:32.934909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.394 [2024-07-23 04:39:32.935257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.394 [2024-07-23 04:39:32.935279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.394 [2024-07-23 04:39:32.935296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.394 [2024-07-23 04:39:32.935313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.394 [2024-07-23 04:39:32.937698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.394 [2024-07-23 04:39:32.938106] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.394 [2024-07-23 04:39:32.938769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.394 [2024-07-23 04:39:32.939931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.394 [2024-07-23 04:39:32.940260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.394 [2024-07-23 04:39:32.941700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.394 [2024-07-23 04:39:32.942942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.394 [2024-07-23 04:39:32.944133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.394 [2024-07-23 04:39:32.945329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.394 [2024-07-23 04:39:32.945652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.394 [2024-07-23 04:39:32.945673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.394 [2024-07-23 04:39:32.945690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.394 [2024-07-23 04:39:32.945707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.394 [2024-07-23 04:39:32.948165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.394 [2024-07-23 04:39:32.948735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
[... message above repeated continuously through 04:39:33.132826; duplicate log lines omitted ...]
00:44:24.397 [2024-07-23 04:39:33.132885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.397 [2024-07-23 04:39:33.132930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.397 [2024-07-23 04:39:33.132974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.397 [2024-07-23 04:39:33.133394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.397 [2024-07-23 04:39:33.133453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.397 [2024-07-23 04:39:33.133501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.397 [2024-07-23 04:39:33.133550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.397 [2024-07-23 04:39:33.133597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.397 [2024-07-23 04:39:33.133945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.397 [2024-07-23 04:39:33.133966] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.397 [2024-07-23 04:39:33.133983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.397 [2024-07-23 04:39:33.134000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.397 [2024-07-23 04:39:33.136298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.397 [2024-07-23 04:39:33.136357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.397 [2024-07-23 04:39:33.136403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.397 [2024-07-23 04:39:33.136450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.397 [2024-07-23 04:39:33.136883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.397 [2024-07-23 04:39:33.136952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.397 [2024-07-23 04:39:33.136998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.397 [2024-07-23 04:39:33.137058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.397 [2024-07-23 04:39:33.137115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.397 [2024-07-23 04:39:33.137527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.397 [2024-07-23 04:39:33.137549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.397 [2024-07-23 04:39:33.137566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.397 [2024-07-23 04:39:33.137583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.139901] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.398 [2024-07-23 04:39:33.139959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.140004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.140050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.140426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.140496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.140543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.140605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.140650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.141037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.141058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.141080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.141097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.143443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.398 [2024-07-23 04:39:33.143527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.143584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.143631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.143999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.144072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.144120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.144173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.144219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.144644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.144664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.144681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.144699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.147006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.398 [2024-07-23 04:39:33.147076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.147121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.147185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.147572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.147640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.147687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.147732] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.147777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.148185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.148208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.148226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.148244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.150443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.398 [2024-07-23 04:39:33.150501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.150585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.150643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.151107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.151176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.151223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.151269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.151313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.151708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.151729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.151747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.151764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.154040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.398 [2024-07-23 04:39:33.154097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.154154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.154200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.154619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.154677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.154724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.154771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.154817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.155257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.155281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.155298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.155318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.157544] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.398 [2024-07-23 04:39:33.157604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.157649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.157694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.398 [2024-07-23 04:39:33.158075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.158146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.158193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.158242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.158287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.158702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.158724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.158741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.158759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.160959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.399 [2024-07-23 04:39:33.161017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.161073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.161118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.161545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.161605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.161652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.161698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.161743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.162147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.162168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.162185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.162203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.164385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.399 [2024-07-23 04:39:33.164442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.164486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.164532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.164972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.165030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.165078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.165123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.165193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.165595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.165616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.165632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.165652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.167917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.399 [2024-07-23 04:39:33.167975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.168026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.168072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.168471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.168541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.168587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.168632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.168677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.169015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.169035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.169052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.169071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.171295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.399 [2024-07-23 04:39:33.171353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.171397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.171442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.171863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.171929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.171976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.172021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.172067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.172433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.172455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.172472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.399 [2024-07-23 04:39:33.172489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.664 [2024-07-23 04:39:33.174762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.664 [2024-07-23 04:39:33.174821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.664 [2024-07-23 04:39:33.174867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.664 [2024-07-23 04:39:33.174915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.664 [2024-07-23 04:39:33.175351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.664 [2024-07-23 04:39:33.175411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.664 [2024-07-23 04:39:33.175461] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.664 [2024-07-23 04:39:33.175505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.664 [2024-07-23 04:39:33.175550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.664 [2024-07-23 04:39:33.175900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.664 [2024-07-23 04:39:33.175920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.664 [2024-07-23 04:39:33.175936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.664 [2024-07-23 04:39:33.175954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.664 [2024-07-23 04:39:33.177731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.664 [2024-07-23 04:39:33.177791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:24.666 [2024-07-23 04:39:33.276141] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.666 [2024-07-23 04:39:33.277447] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.666 [2024-07-23 04:39:33.278545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.666 [2024-07-23 04:39:33.278891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.666 [2024-07-23 04:39:33.280332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.666 [2024-07-23 04:39:33.281779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.666 [2024-07-23 04:39:33.282554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.666 [2024-07-23 04:39:33.282963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.666 [2024-07-23 04:39:33.283419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.666 [2024-07-23 04:39:33.283441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.666 [2024-07-23 04:39:33.283459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.666 [2024-07-23 04:39:33.283477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.666 [2024-07-23 04:39:33.287006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.666 [2024-07-23 04:39:33.288561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.666 [2024-07-23 04:39:33.289427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.666 [2024-07-23 04:39:33.290621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.666 [2024-07-23 04:39:33.290943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.292389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.293404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.293799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.294199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.294630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.294656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.294674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.294690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.298018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.667 [2024-07-23 04:39:33.298610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.299791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.301232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.301554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.302903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.303310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.303705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.304100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.304526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.304550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.304566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.304585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.307137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.667 [2024-07-23 04:39:33.308600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.310143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.311595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.311917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.312339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.312736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.313128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.313527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.313956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.313977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.313994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.314011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.317187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.667 [2024-07-23 04:39:33.318587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.320128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.321683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.322053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.322475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.322872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.323276] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.323677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.323995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.324017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.324033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.324052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.327039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.667 [2024-07-23 04:39:33.328481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.329926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.330335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.330793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.331210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.331607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.331999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.333483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.333802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.333824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.333841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.333858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.337021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.667 [2024-07-23 04:39:33.338486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.339012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.339417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.339856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.340276] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.340679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.341945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.343129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.343454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.343475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.343493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.343510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.346806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.667 [2024-07-23 04:39:33.347551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.347952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.348352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.348798] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.349215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.350507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.351675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.353124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.353451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.353473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.353490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.353507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.356335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.667 [2024-07-23 04:39:33.356752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.357158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.357556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.357997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.358897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.360077] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.361526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.362980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.363420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.363443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.363464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.363482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.365694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.667 [2024-07-23 04:39:33.366102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.366504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.366900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.667 [2024-07-23 04:39:33.367258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.668 [2024-07-23 04:39:33.368422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.668 [2024-07-23 04:39:33.369860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.668 [2024-07-23 04:39:33.371300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.668 [2024-07-23 04:39:33.371986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.668 [2024-07-23 04:39:33.372312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.668 [2024-07-23 04:39:33.372334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.668 [2024-07-23 04:39:33.372351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.668 [2024-07-23 04:39:33.372369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.668 [2024-07-23 04:39:33.374604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.668 [2024-07-23 04:39:33.375013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.668 [2024-07-23 04:39:33.375419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.668 [2024-07-23 04:39:33.376191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.668 [2024-07-23 04:39:33.376533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.668 [2024-07-23 04:39:33.377964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.668 [2024-07-23 04:39:33.379416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.668 [2024-07-23 04:39:33.380484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.668 [2024-07-23 04:39:33.381843] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.668 [2024-07-23 04:39:33.382194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.668 [2024-07-23 04:39:33.382216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.668 [2024-07-23 04:39:33.382233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.668 [2024-07-23 04:39:33.382250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.668 [2024-07-23 04:39:33.384653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.668 [2024-07-23 04:39:33.385056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.668 [2024-07-23 04:39:33.385925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.668 [2024-07-23 04:39:33.387103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.668 [2024-07-23 04:39:33.387433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.668 [2024-07-23 04:39:33.388863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.668 [2024-07-23 04:39:33.389903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.668 [2024-07-23 04:39:33.391290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.668 [2024-07-23 04:39:33.392560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.668 [2024-07-23 04:39:33.392883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.668 [2024-07-23 04:39:33.392904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.668 [2024-07-23 04:39:33.392921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.668 [2024-07-23 04:39:33.392938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.668 [2024-07-23 04:39:33.395407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.668 [2024-07-23 04:39:33.395973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.668 [2024-07-23 04:39:33.397135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.668 [2024-07-23 04:39:33.398584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.668 [2024-07-23 04:39:33.398909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.668 [2024-07-23 04:39:33.400261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.668 [2024-07-23 04:39:33.401380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.668 [2024-07-23 04:39:33.402566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.668 [2024-07-23 04:39:33.403995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.668 [2024-07-23 04:39:33.404323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.668 [2024-07-23 04:39:33.404344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.668 [2024-07-23 04:39:33.404361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.668 [2024-07-23 04:39:33.404379] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.668 [2024-07-23 04:39:33.407310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.668 [2024-07-23 04:39:33.408468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 last message repeated ~270 more times (through [2024-07-23 04:39:33.553099])
00:44:24.934 [2024-07-23 04:39:33.553165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.553212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.553259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.553665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.553723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.553772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.553818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.553882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.554279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.554301] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.554318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.554336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.556778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.934 [2024-07-23 04:39:33.556837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.556887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.556935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.557294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.557357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.557403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.557447] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.557494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.557815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.557836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.557854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.557872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.559701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.934 [2024-07-23 04:39:33.559761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.559811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.559856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.560178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.560244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.560300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.560345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.560390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.560710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.560730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.560747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.560765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.563111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.934 [2024-07-23 04:39:33.563182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.563231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.563276] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.563601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.563682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.563733] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.563778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.563823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.564132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.564160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.564177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.564195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.566012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.934 [2024-07-23 04:39:33.566071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.566115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.566168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.566480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.566543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.566589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.566634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.566680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.567159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.567181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.567199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.567216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.569481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.934 [2024-07-23 04:39:33.569538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.569583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.569628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.569980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.570049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.570095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.570149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.570204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.570515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.570536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.570553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.570570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.572411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.934 [2024-07-23 04:39:33.572487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.572542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.572586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.572899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.934 [2024-07-23 04:39:33.572963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.573010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.573055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.573101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.573540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.573562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.573580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.573599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.575826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.935 [2024-07-23 04:39:33.575889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.575934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.575980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.576304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.576374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.576421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.576466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.576511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.576820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.576844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.576861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.576878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.578713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.935 [2024-07-23 04:39:33.578772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.578817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.578861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.579242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.579313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.579359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.579404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.579449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.579862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.579884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.579902] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.579920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.582074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.935 [2024-07-23 04:39:33.582134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.582188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.582233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.582546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.582610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.582678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.582727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.582772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.583088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.583109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.583127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.583152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.584889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.935 [2024-07-23 04:39:33.584956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.585007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.585053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.585465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.585535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.585584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.585632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.585679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.586112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.586134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.586162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.586180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.588385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.935 [2024-07-23 04:39:33.588442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.588487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.588533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.588843] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.588911] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.588957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.589002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.589046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.589470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.589491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.589508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.589525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.591369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.935 [2024-07-23 04:39:33.591438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.591483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.591529] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.591955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.592014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.592065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.592111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.592165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.592555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.592576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.592592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.935 [2024-07-23 04:39:33.592610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.936 [2024-07-23 04:39:33.594628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.936 [2024-07-23 04:39:33.594685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.201 (previous message repeated for subsequent allocation attempts, timestamps [2024-07-23 04:39:33.594740] through [2024-07-23 04:39:33.722636]) 
00:44:25.201 [2024-07-23 04:39:33.724110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.724694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.726040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.726370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.727953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.729443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.729840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.730249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.730663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.730684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.730702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.730720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.734132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.201 [2024-07-23 04:39:33.734891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.736397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.737961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.738290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.739849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.740262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.740660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.741068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.741514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.741538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.741558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.741575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.744387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.201 [2024-07-23 04:39:33.745928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.747340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.748864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.749192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.749606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.750004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.750409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.750808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.751240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.751264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.751281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.751299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.754369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.201 [2024-07-23 04:39:33.755566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.757017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.758477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.758935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.759363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.759760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.760169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.760565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.760882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.760903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.760921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.760937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.763945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.201 [2024-07-23 04:39:33.765394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.766841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.767251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.767709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.768118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.768537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.768934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.201 [2024-07-23 04:39:33.770380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.770726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.770746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.770763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.770780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.774032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.202 [2024-07-23 04:39:33.775467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.775967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.776661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.777070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.777494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.778400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.779559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.780984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.781308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.781330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.781347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.781364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.784403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.202 [2024-07-23 04:39:33.784814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.785220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.785617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.786024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.786696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.787865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.789323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.790774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.791157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.791180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.791197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.791214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.793421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.202 [2024-07-23 04:39:33.793831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.794240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.794639] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.795042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.796273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.797720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.799172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.800311] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.800650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.800672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.800689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.800705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.803004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.202 [2024-07-23 04:39:33.803429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.803828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.804237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.804557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.806103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.807617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.808157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.809570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.809887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.809908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.809925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.809942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.812594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.202 [2024-07-23 04:39:33.814042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.815368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.816839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.817165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.817613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.819157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.820690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.822252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.822566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.822588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.822605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.822623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.825360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.202 [2024-07-23 04:39:33.825770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.826181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.826578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.826988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.827407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.827806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.828217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.828618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.829051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.829073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.829092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.829110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.831789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.202 [2024-07-23 04:39:33.832214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.202 [2024-07-23 04:39:33.832613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.203 [2024-07-23 04:39:33.833008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.203 [2024-07-23 04:39:33.833436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.203 [2024-07-23 04:39:33.833852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.203 [2024-07-23 04:39:33.834275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.203 [2024-07-23 04:39:33.834675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.203 [2024-07-23 04:39:33.835070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.203 [2024-07-23 04:39:33.835475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.203 [2024-07-23 04:39:33.835498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.203 [2024-07-23 04:39:33.835516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.203 [2024-07-23 04:39:33.835533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.203 [2024-07-23 04:39:33.838131] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.203 [2024-07-23 04:39:33.838555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.203 [2024-07-23 04:39:33.838952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.203 [2024-07-23 04:39:33.839361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.203 [2024-07-23 04:39:33.839742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.203 [2024-07-23 04:39:33.840203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.203 [2024-07-23 04:39:33.840602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.203 [2024-07-23 04:39:33.840994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.203 [2024-07-23 04:39:33.841410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.203 [2024-07-23 04:39:33.841841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.203 [2024-07-23 04:39:33.841863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.203 [2024-07-23 04:39:33.841882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.203 [2024-07-23 04:39:33.841900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.203 [2024-07-23 04:39:33.844589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.203 [2024-07-23 04:39:33.844994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.203 [2024-07-23 04:39:33.845406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.203 [2024-07-23 04:39:33.845818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.203 [2024-07-23 04:39:33.846273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.203 [2024-07-23 04:39:33.846688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.203 [2024-07-23 04:39:33.847085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.203 [2024-07-23 04:39:33.847491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.203 [2024-07-23 04:39:33.847886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.203 [2024-07-23 04:39:33.848302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.203 [2024-07-23 04:39:33.848325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.203 [2024-07-23 04:39:33.848342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.203 [2024-07-23 04:39:33.848359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.203 [2024-07-23 04:39:33.851048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.206 [2024-07-23 04:39:33.963558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.206 [2024-07-23 04:39:33.963603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.206 [2024-07-23 04:39:33.963648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.206 [2024-07-23 04:39:33.963956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.206 [2024-07-23 04:39:33.964021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.206 [2024-07-23 04:39:33.964067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.206 [2024-07-23 04:39:33.964112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.206 [2024-07-23 04:39:33.964171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.206 [2024-07-23 04:39:33.964480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.206 [2024-07-23 04:39:33.964501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.206 [2024-07-23 04:39:33.964522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.206 [2024-07-23 04:39:33.964538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.206 [2024-07-23 04:39:33.966393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.206 [2024-07-23 04:39:33.966459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.206 [2024-07-23 04:39:33.966504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.206 [2024-07-23 04:39:33.966550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.206 [2024-07-23 04:39:33.966934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.206 [2024-07-23 04:39:33.967001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.206 [2024-07-23 04:39:33.967047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.206 [2024-07-23 04:39:33.967093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.206 [2024-07-23 04:39:33.967146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.206 [2024-07-23 04:39:33.967572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.206 [2024-07-23 04:39:33.967594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.206 [2024-07-23 04:39:33.967611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.206 [2024-07-23 04:39:33.967628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.206 [2024-07-23 04:39:33.969762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.207 [2024-07-23 04:39:33.969828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.207 [2024-07-23 04:39:33.969878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.207 [2024-07-23 04:39:33.969924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.207 [2024-07-23 04:39:33.970243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.207 [2024-07-23 04:39:33.970308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.207 [2024-07-23 04:39:33.970355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.207 [2024-07-23 04:39:33.970399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.207 [2024-07-23 04:39:33.970443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.207 [2024-07-23 04:39:33.970826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.207 [2024-07-23 04:39:33.970847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.207 [2024-07-23 04:39:33.970864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.207 [2024-07-23 04:39:33.970881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.207 [2024-07-23 04:39:33.972767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.207 [2024-07-23 04:39:33.972843] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.207 [2024-07-23 04:39:33.972890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.207 [2024-07-23 04:39:33.972939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.207 [2024-07-23 04:39:33.973369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.207 [2024-07-23 04:39:33.973434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.207 [2024-07-23 04:39:33.973481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.207 [2024-07-23 04:39:33.973527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.207 [2024-07-23 04:39:33.973573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.207 [2024-07-23 04:39:33.973946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.207 [2024-07-23 04:39:33.973967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.207 [2024-07-23 04:39:33.973984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.207 [2024-07-23 04:39:33.974002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.207 [2024-07-23 04:39:33.976043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.207 [2024-07-23 04:39:33.976102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.207 [2024-07-23 04:39:33.976154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.207 [2024-07-23 04:39:33.976208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.207 [2024-07-23 04:39:33.976517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.207 [2024-07-23 04:39:33.976581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.207 [2024-07-23 04:39:33.976628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.207 [2024-07-23 04:39:33.976673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.207 [2024-07-23 04:39:33.976719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.207 [2024-07-23 04:39:33.977028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.207 [2024-07-23 04:39:33.977049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.207 [2024-07-23 04:39:33.977066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.207 [2024-07-23 04:39:33.977084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.207 [2024-07-23 04:39:33.979050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.207 [2024-07-23 04:39:33.979111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.207 [2024-07-23 04:39:33.979166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.207 [2024-07-23 04:39:33.979212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.207 [2024-07-23 04:39:33.979639] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.207 [2024-07-23 04:39:33.979697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.207 [2024-07-23 04:39:33.979744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.207 [2024-07-23 04:39:33.979800] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.207 [2024-07-23 04:39:33.979862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.207 [2024-07-23 04:39:33.980329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.207 [2024-07-23 04:39:33.980352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.207 [2024-07-23 04:39:33.980370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.207 [2024-07-23 04:39:33.980388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.468 [2024-07-23 04:39:33.982303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.468 [2024-07-23 04:39:33.982360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.468 [2024-07-23 04:39:33.982409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.468 [2024-07-23 04:39:33.982454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.468 [2024-07-23 04:39:33.982830] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.468 [2024-07-23 04:39:33.982897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.468 [2024-07-23 04:39:33.982943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:33.982987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:33.983031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:33.983366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:33.983389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:33.983406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:33.983423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:33.987395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.469 [2024-07-23 04:39:33.987454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:33.987502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:33.987548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:33.987919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:33.987983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:33.988031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:33.988075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:33.988119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:33.988451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:33.988473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:33.988491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:33.988512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:33.992965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.469 [2024-07-23 04:39:33.993022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:33.993067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:33.993112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:33.993607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:33.993677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:33.993724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:33.993769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:33.993814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:33.994221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:33.994245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:33.994262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:33.994279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:33.997843] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.469 [2024-07-23 04:39:33.997902] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:33.997947] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:33.997999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:33.998365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:33.998429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:33.998475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:33.998520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:33.998564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:33.998890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:33.998910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:33.998927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:33.998944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:34.002929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.469 [2024-07-23 04:39:34.002988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:34.003039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:34.003085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:34.003448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:34.003511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:34.003557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:34.003601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:34.003646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:34.003967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:34.003988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:34.004005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:34.004022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:34.008499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.469 [2024-07-23 04:39:34.008557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:34.008617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:34.008663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:34.009024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:34.009096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:34.009153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:34.009199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:34.009245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:34.009670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:34.009692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:34.009709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:34.009728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:34.013267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.469 [2024-07-23 04:39:34.013332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:34.013376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:34.013423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:34.013754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:34.013816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:34.013863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:34.013907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:34.013962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:34.014278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:34.014299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:34.014316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:34.014333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.469 [2024-07-23 04:39:34.018477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.470 [2024-07-23 04:39:34.018537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:25.473 [2024-07-23 04:39:34.222341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:25.473 (previous message repeated continuously between 04:39:34.018537 and 04:39:34.222341; intermediate duplicates collapsed)
00:44:25.473 [2024-07-23 04:39:34.222358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.473 [2024-07-23 04:39:34.225047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.473 [2024-07-23 04:39:34.225467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.473 [2024-07-23 04:39:34.225863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.473 [2024-07-23 04:39:34.226265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.473 [2024-07-23 04:39:34.226701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.473 [2024-07-23 04:39:34.227110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.473 [2024-07-23 04:39:34.227523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.473 [2024-07-23 04:39:34.227923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.473 [2024-07-23 04:39:34.228329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.473 [2024-07-23 04:39:34.228735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.473 [2024-07-23 04:39:34.228756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.473 [2024-07-23 04:39:34.228774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.473 [2024-07-23 04:39:34.228791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.473 [2024-07-23 04:39:34.231475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.473 [2024-07-23 04:39:34.231880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.473 [2024-07-23 04:39:34.232283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.473 [2024-07-23 04:39:34.232681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.473 [2024-07-23 04:39:34.233081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.473 [2024-07-23 04:39:34.233506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.473 [2024-07-23 04:39:34.233918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.473 [2024-07-23 04:39:34.234319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.473 [2024-07-23 04:39:34.234713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.473 [2024-07-23 04:39:34.235133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.473 [2024-07-23 04:39:34.235164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.473 [2024-07-23 04:39:34.235184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.473 [2024-07-23 04:39:34.235201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.473 [2024-07-23 04:39:34.237851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.473 [2024-07-23 04:39:34.238267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.473 [2024-07-23 04:39:34.238666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.473 [2024-07-23 04:39:34.239067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.473 [2024-07-23 04:39:34.239504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.473 [2024-07-23 04:39:34.239927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.473 [2024-07-23 04:39:34.240332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.473 [2024-07-23 04:39:34.240725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.473 [2024-07-23 04:39:34.241120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.473 [2024-07-23 04:39:34.241542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.473 [2024-07-23 04:39:34.241564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.473 [2024-07-23 04:39:34.241581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.473 [2024-07-23 04:39:34.241598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.473 [2024-07-23 04:39:34.244390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.473 [2024-07-23 04:39:34.244804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.473 [2024-07-23 04:39:34.245209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.473 [2024-07-23 04:39:34.245604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.473 [2024-07-23 04:39:34.246004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.473 [2024-07-23 04:39:34.246420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.473 [2024-07-23 04:39:34.246820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.473 [2024-07-23 04:39:34.247229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.473 [2024-07-23 04:39:34.247628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.473 [2024-07-23 04:39:34.248045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.473 [2024-07-23 04:39:34.248067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.473 [2024-07-23 04:39:34.248085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.473 [2024-07-23 04:39:34.248103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.250772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.251193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.251587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.251992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.252428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.252844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.253254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.253649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.254047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.254463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.254486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.254504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.736 [2024-07-23 04:39:34.254521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.257158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.257564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.257957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.258365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.258812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.259245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.259641] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.260035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.260436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.260873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.260895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.260914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.736 [2024-07-23 04:39:34.260931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.263543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.263949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.264360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.264761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.265231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.265641] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.266036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.266441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.266838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.267226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.267248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.267266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.736 [2024-07-23 04:39:34.267284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.269991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.270411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.270810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.271212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.271609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.272016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.272425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.272828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.273235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.273660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.273681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.273703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.736 [2024-07-23 04:39:34.273721] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.276364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.276776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.277179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.277574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.277996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.278417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.278818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.279224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.279623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.280068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.280089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.736 [2024-07-23 04:39:34.280106] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.737 [2024-07-23 04:39:34.280123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.737 [2024-07-23 04:39:34.282700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.737 [2024-07-23 04:39:34.283109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.737 [2024-07-23 04:39:34.283510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.737 [2024-07-23 04:39:34.283908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.737 [2024-07-23 04:39:34.284292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.737 [2024-07-23 04:39:34.284704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.737 [2024-07-23 04:39:34.285100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.737 [2024-07-23 04:39:34.285500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.737 [2024-07-23 04:39:34.285893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.737 [2024-07-23 04:39:34.286352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.737 [2024-07-23 04:39:34.286374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.737 [2024-07-23 04:39:34.286391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.737 [2024-07-23 04:39:34.286408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.737 [2024-07-23 04:39:34.289064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.737 [2024-07-23 04:39:34.289483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.737 [2024-07-23 04:39:34.289884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.737 [2024-07-23 04:39:34.290288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.737 [2024-07-23 04:39:34.290717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.737 [2024-07-23 04:39:34.291125] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.737 [2024-07-23 04:39:34.291529] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.737 [2024-07-23 04:39:34.291923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.737 [2024-07-23 04:39:34.292330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.737 [2024-07-23 04:39:34.292742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.737 [2024-07-23 04:39:34.292762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.737 [2024-07-23 04:39:34.292779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.737 [2024-07-23 04:39:34.292795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.737 [2024-07-23 04:39:34.295671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.737 [2024-07-23 04:39:34.296082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.737 [2024-07-23 04:39:34.296503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.737 [2024-07-23 04:39:34.296898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.737 [2024-07-23 04:39:34.297336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.737 [2024-07-23 04:39:34.297749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.737 [2024-07-23 04:39:34.298570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.737 [2024-07-23 04:39:34.299451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.737 [2024-07-23 04:39:34.300446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.737 [2024-07-23 04:39:34.300803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.737 [2024-07-23 04:39:34.300823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.737 [2024-07-23 04:39:34.300840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.737 [2024-07-23 04:39:34.300857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.737 [2024-07-23 04:39:34.303631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.737 [2024-07-23 04:39:34.304050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.737 [2024-07-23 04:39:34.304458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.737 [2024-07-23 04:39:34.304865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.737 [2024-07-23 04:39:34.305277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.737 [2024-07-23 04:39:34.305687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.737 [2024-07-23 04:39:34.306083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.737 [2024-07-23 04:39:34.306492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.737 [2024-07-23 04:39:34.306893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.737 [2024-07-23 04:39:34.307319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.737 [2024-07-23 04:39:34.307341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.737 [2024-07-23 04:39:34.307359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.737 [2024-07-23 04:39:34.307376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:25.740 [2024-07-23 04:39:34.406191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.740 [2024-07-23 04:39:34.407976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.740 [2024-07-23 04:39:34.408040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.740 [2024-07-23 04:39:34.408089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.740 [2024-07-23 04:39:34.408134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.740 [2024-07-23 04:39:34.408510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.740 [2024-07-23 04:39:34.408571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.740 [2024-07-23 04:39:34.408617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.740 [2024-07-23 04:39:34.408662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.740 [2024-07-23 04:39:34.408716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.740 [2024-07-23 04:39:34.409027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.740 [2024-07-23 04:39:34.409047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.740 [2024-07-23 04:39:34.409064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.740 [2024-07-23 04:39:34.409085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.740 [2024-07-23 04:39:34.411101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.740 [2024-07-23 04:39:34.411168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.411215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.411262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.411682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.411739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.411786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.411831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.411877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.412317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.412339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.412355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.741 [2024-07-23 04:39:34.412372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.414176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.414234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.414288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.414333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.414669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.414735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.414782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.414826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.414871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.415189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.415210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.415226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.741 [2024-07-23 04:39:34.415243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.417418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.417475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.417521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.417570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.417980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.418043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.418090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.418135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.418189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.418504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.418524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.418541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.741 [2024-07-23 04:39:34.418557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.420382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.420439] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.420484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.420529] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.420858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.420920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.420967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.421014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.421069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.421389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.421410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.421427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.741 [2024-07-23 04:39:34.421444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.423645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.423704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.423750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.423797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.424234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.424301] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.424348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.424393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.424442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.424776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.424796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.424813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.741 [2024-07-23 04:39:34.424829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.426633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.426695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.426744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.426790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.427102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.427175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.427223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.741 [2024-07-23 04:39:34.427268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.427313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.427625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.427644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.427661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.742 [2024-07-23 04:39:34.427678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.430093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.430159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.430210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.430256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.430583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.430648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.430694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.430741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.430792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.431104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.431147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.431166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.742 [2024-07-23 04:39:34.431182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.432994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.433052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.433099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.433162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.433474] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.433538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.433585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.433630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.433674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.434011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.434031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.434048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.742 [2024-07-23 04:39:34.434065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.436496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.436557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.436603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.436648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.436991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.437055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.437102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.437155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.437200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.437512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.437532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.437549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.742 [2024-07-23 04:39:34.437565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.439379] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.439437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.439856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.439904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.439954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.440344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.440366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.440383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.450865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.451329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.455077] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.455155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.742 [2024-07-23 04:39:34.455525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.456039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.456420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.456467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.456524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.456887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.457224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.459058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.459130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.460278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.460328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.460390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.461802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.742 [2024-07-23 04:39:34.462123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.462149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.462212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.462271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.462640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.462686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.462744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.463108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.463521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.463543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.463564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.466978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.742 [2024-07-23 04:39:34.467894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.742 [2024-07-23 04:39:34.469419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.044 [... identical error repeated ...] 
00:44:26.044 [2024-07-23 04:39:34.632886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.044 [2024-07-23 04:39:34.633290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.044 [2024-07-23 04:39:34.633683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.044 [2024-07-23 04:39:34.634095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.634115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.634527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.635087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.636251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.636955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.637298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.637318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.637336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.640040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.640463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.045 [2024-07-23 04:39:34.640859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.641259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.641662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.641682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.642084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.642489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.642887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.643293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.643719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.643739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.643757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.646718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.647613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.045 [2024-07-23 04:39:34.648009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.648864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.649230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.649255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.649663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.650425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.651579] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.653027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.653349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.653370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.653387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.656692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.657098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.045 [2024-07-23 04:39:34.657498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.657893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.658330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.658352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.658925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.660095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.661543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.662997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.663324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.663345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.663361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.665602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.666748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.045 [2024-07-23 04:39:34.667149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.667726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.668051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.668072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.668490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.668970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.670219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.671673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.671998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.672018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.672035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.675253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.675671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.045 [2024-07-23 04:39:34.676065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.676464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.676896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.676917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.677329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.678746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.680301] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.681725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.682044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.682064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.682081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.683872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.684422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.045 [2024-07-23 04:39:34.685609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.686005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.686424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.686446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.687858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.688260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.688710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.690002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.690326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.690347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.045 [2024-07-23 04:39:34.690364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.693546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.695005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.046 [2024-07-23 04:39:34.695417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.695467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.695895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.695916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.696327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.696720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.697158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.698439] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.698755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.698775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.698791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.701970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.703415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.046 [2024-07-23 04:39:34.703468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.704079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.704408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.704430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.704842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.705244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.705295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.706504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.706959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.706980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.706997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.709835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.709896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.046 [2024-07-23 04:39:34.711430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.712912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.713236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.713257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.714825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.714884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.715285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.715678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.716081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.716101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.716118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.718105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.719547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.046 [2024-07-23 04:39:34.720222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.720274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.720590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.720611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.720674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.722078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.723520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.723569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.723897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.723917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.723935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.725939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.726002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.046 [2024-07-23 04:39:34.726049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.726095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.726526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.726547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.726604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.726651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.726696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.726743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.727098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.727121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.727145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.728965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.729023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.046 [2024-07-23 04:39:34.729068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.729113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.729452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.729472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.729533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.729579] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.729623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.729668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.729977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.729996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.730012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.732176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.046 [2024-07-23 04:39:34.732233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.046 [2024-07-23 04:39:34.732278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:26.050 [2024-07-23 04:39:34.797008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: last message repeated 272 times
00:44:26.050 [2024-07-23 04:39:34.797055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.797107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.797426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.797446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.797512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.797562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.797612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.797657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.797967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.797987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.798004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.799817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.799882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.050 [2024-07-23 04:39:34.799927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.799972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.800584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.800609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.800670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.800717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.800763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.800809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.801119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.801146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.801164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.803386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.803444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.050 [2024-07-23 04:39:34.803491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.803537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.803936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.803956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.804013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.804059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.804104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.804156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.804493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.804513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.804530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.806364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.806422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.050 [2024-07-23 04:39:34.806470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.806515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.806824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.806844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.806905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.806951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.806995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.807043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.807362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.807383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.807399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.810417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.810481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.050 [2024-07-23 04:39:34.810527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.810571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.810902] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.810921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.810985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.811032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.811076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.811120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.811437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.811458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.811475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.813322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.050 [2024-07-23 04:39:34.813379] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.051 [2024-07-23 04:39:34.813424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.051 [2024-07-23 04:39:34.814974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.051 [2024-07-23 04:39:34.815338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.051 [2024-07-23 04:39:34.815360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.051 [2024-07-23 04:39:34.815420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.051 [2024-07-23 04:39:34.815465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.051 [2024-07-23 04:39:34.815509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.051 [2024-07-23 04:39:34.815554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.051 [2024-07-23 04:39:34.815942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.051 [2024-07-23 04:39:34.815962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.051 [2024-07-23 04:39:34.815978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.051 [2024-07-23 04:39:34.818193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.051 [2024-07-23 04:39:34.818252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.051 [2024-07-23 04:39:34.819706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.051 [2024-07-23 04:39:34.819765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.051 [2024-07-23 04:39:34.820081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.051 [2024-07-23 04:39:34.820101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.051 [2024-07-23 04:39:34.820171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.051 [2024-07-23 04:39:34.820217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.313 [2024-07-23 04:39:34.821625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.313 [2024-07-23 04:39:34.821675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.313 [2024-07-23 04:39:34.822017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.313 [2024-07-23 04:39:34.822037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.313 [2024-07-23 04:39:34.822054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.313 [2024-07-23 04:39:34.823800] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.313 [2024-07-23 04:39:34.824216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.313 [2024-07-23 04:39:34.824267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.313 [2024-07-23 04:39:34.824313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.313 [2024-07-23 04:39:34.824754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.313 [2024-07-23 04:39:34.824780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.313 [2024-07-23 04:39:34.824837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.313 [2024-07-23 04:39:34.825239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.313 [2024-07-23 04:39:34.825293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.313 [2024-07-23 04:39:34.825339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.313 [2024-07-23 04:39:34.825776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.313 [2024-07-23 04:39:34.825797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.313 [2024-07-23 04:39:34.825814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.313 [2024-07-23 04:39:34.832095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.313 [2024-07-23 04:39:34.832164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.313 [2024-07-23 04:39:34.832212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.313 [2024-07-23 04:39:34.833638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.313 [2024-07-23 04:39:34.834104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.313 [2024-07-23 04:39:34.834128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.313 [2024-07-23 04:39:34.835658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.313 [2024-07-23 04:39:34.835710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.313 [2024-07-23 04:39:34.835755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.836154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.836565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.836585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.836614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.840041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.841507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.314 [2024-07-23 04:39:34.842454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.843903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.844255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.844277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.845732] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.847173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.847573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.847966] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.848391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.848412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.848428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.853193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.854399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.314 [2024-07-23 04:39:34.855848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.857305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.857686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.857706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.859234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.859639] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.860032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.861524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.861968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.861990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.862009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.865348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.866287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.314 [2024-07-23 04:39:34.867478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.868932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.869253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.869274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.870204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.870601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.870994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.871395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.871821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.871842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.871858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.877737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.879186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.314 [2024-07-23 04:39:34.880299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.881517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.881930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.881950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.882367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.883342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.884090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.884489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.884864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.884884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.884900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.888105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.314 [2024-07-23 04:39:34.889625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.314 [2024-07-23 04:39:34.890029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.317 [... same *ERROR* message repeated continuously through 2024-07-23 04:39:35.078994 ...]
00:44:26.317 [2024-07-23 04:39:35.080169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.317 [2024-07-23 04:39:35.081610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.317 [2024-07-23 04:39:35.081930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.317 [2024-07-23 04:39:35.081951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.317 [2024-07-23 04:39:35.083144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.317 [2024-07-23 04:39:35.084229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.317 [2024-07-23 04:39:35.084869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.317 [2024-07-23 04:39:35.085272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.317 [2024-07-23 04:39:35.085621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.317 [2024-07-23 04:39:35.085641] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.318 [2024-07-23 04:39:35.085658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.318 [2024-07-23 04:39:35.089024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.318 [2024-07-23 04:39:35.090454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.318 [2024-07-23 04:39:35.091012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.318 [2024-07-23 04:39:35.092185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.318 [2024-07-23 04:39:35.092503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.318 [2024-07-23 04:39:35.092523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.318 [2024-07-23 04:39:35.093963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.579 [2024-07-23 04:39:35.095261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.579 [2024-07-23 04:39:35.096246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.579 [2024-07-23 04:39:35.096967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.579 [2024-07-23 04:39:35.097400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.579 [2024-07-23 04:39:35.097422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.579 [2024-07-23 04:39:35.097439] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.579 [2024-07-23 04:39:35.101467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.579 [2024-07-23 04:39:35.102902] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.579 [2024-07-23 04:39:35.103473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.579 [2024-07-23 04:39:35.104668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.579 [2024-07-23 04:39:35.104997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.579 [2024-07-23 04:39:35.105018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.579 [2024-07-23 04:39:35.106461] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.579 [2024-07-23 04:39:35.107653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.579 [2024-07-23 04:39:35.108756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.579 [2024-07-23 04:39:35.109387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.579 [2024-07-23 04:39:35.109814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.579 [2024-07-23 04:39:35.109835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.579 [2024-07-23 04:39:35.109851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.579 [2024-07-23 04:39:35.113023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.579 [2024-07-23 04:39:35.114484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.579 [2024-07-23 04:39:35.115934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.579 [2024-07-23 04:39:35.115988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.579 [2024-07-23 04:39:35.116390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.579 [2024-07-23 04:39:35.116410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.579 [2024-07-23 04:39:35.117667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.579 [2024-07-23 04:39:35.119128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.579 [2024-07-23 04:39:35.120576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.579 [2024-07-23 04:39:35.121716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.579 [2024-07-23 04:39:35.122058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.579 [2024-07-23 04:39:35.122078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.579 [2024-07-23 04:39:35.122095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.579 [2024-07-23 04:39:35.126570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.579 [2024-07-23 04:39:35.128023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.579 [2024-07-23 04:39:35.128077] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.579 [2024-07-23 04:39:35.129574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.579 [2024-07-23 04:39:35.129993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.579 [2024-07-23 04:39:35.130013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.579 [2024-07-23 04:39:35.131228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.579 [2024-07-23 04:39:35.132667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.579 [2024-07-23 04:39:35.132718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.579 [2024-07-23 04:39:35.134159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.579 [2024-07-23 04:39:35.134526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.579 [2024-07-23 04:39:35.134548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.579 [2024-07-23 04:39:35.134569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.579 [2024-07-23 04:39:35.136984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.137047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.580 [2024-07-23 04:39:35.137449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.139000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.139319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.139340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.140779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.140831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.142398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.143233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.143565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.143585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.143601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.148611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.150203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.580 [2024-07-23 04:39:35.150600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.150648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.151082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.151103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.151172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.152522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.154014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.154065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.154385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.154406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.154422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.156249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.156315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.580 [2024-07-23 04:39:35.156361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.156407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.156739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.156760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.156820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.156867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.156913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.156962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.157278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.157299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.157316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.160095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.160164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.580 [2024-07-23 04:39:35.160211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.160256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.160567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.160587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.160646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.160692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.160736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.160781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.161089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.161108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.161125] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.162964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.163022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.580 [2024-07-23 04:39:35.163068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.163113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.163504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.163525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.163587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.163633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.163677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.163726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.164088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.164107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.164124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.166860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.166918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.580 [2024-07-23 04:39:35.166963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.167007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.167324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.167344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.167405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.167464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.167509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.167553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.167864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.167883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.167900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.169649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.169714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.580 [2024-07-23 04:39:35.169761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.169806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.170130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.170159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.170216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.580 [2024-07-23 04:39:35.170261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.581 [2024-07-23 04:39:35.170306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.581 [2024-07-23 04:39:35.170356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.581 [2024-07-23 04:39:35.170821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.581 [2024-07-23 04:39:35.170842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.581 [2024-07-23 04:39:35.170860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.581 [2024-07-23 04:39:35.173577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.581 [2024-07-23 04:39:35.173638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.581 [2024-07-23 04:39:35.173684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.581 [2024-07-23 04:39:35.173729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.581 [2024-07-23 04:39:35.174036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.581 [2024-07-23 04:39:35.174055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.581 [2024-07-23 04:39:35.174115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.581 [2024-07-23 04:39:35.174169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.581 [2024-07-23 04:39:35.174214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.581 [2024-07-23 04:39:35.174258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.581 [2024-07-23 04:39:35.174660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.581 [2024-07-23 04:39:35.174680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.581 [2024-07-23 04:39:35.174696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.581 [2024-07-23 04:39:35.176477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.581 [2024-07-23 04:39:35.176534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.581 [2024-07-23 04:39:35.176594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.584 [last message repeated for subsequent allocations through 2024-07-23 04:39:35.265811]
00:44:26.584 [2024-07-23 04:39:35.265865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.584 [2024-07-23 04:39:35.265915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.584 [2024-07-23 04:39:35.266236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.584 [2024-07-23 04:39:35.266257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.584 [2024-07-23 04:39:35.266322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.584 [2024-07-23 04:39:35.266368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.584 [2024-07-23 04:39:35.266413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.584 [2024-07-23 04:39:35.266457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.584 [2024-07-23 04:39:35.266865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.584 [2024-07-23 04:39:35.266886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.584 [2024-07-23 04:39:35.266903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.584 [2024-07-23 04:39:35.268962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.584 [2024-07-23 04:39:35.269019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.584 [2024-07-23 04:39:35.269064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.584 [2024-07-23 04:39:35.270366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.584 [2024-07-23 04:39:35.270683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.584 [2024-07-23 04:39:35.270702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.584 [2024-07-23 04:39:35.270762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.584 [2024-07-23 04:39:35.270817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.584 [2024-07-23 04:39:35.270871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.584 [2024-07-23 04:39:35.270915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.584 [2024-07-23 04:39:35.271239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.584 [2024-07-23 04:39:35.271260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.584 [2024-07-23 04:39:35.271276] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.584 [2024-07-23 04:39:35.275877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.584 [2024-07-23 04:39:35.275936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.584 [2024-07-23 04:39:35.276339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.584 [2024-07-23 04:39:35.276387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.584 [2024-07-23 04:39:35.276758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.584 [2024-07-23 04:39:35.276777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.584 [2024-07-23 04:39:35.276835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.584 [2024-07-23 04:39:35.276881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.584 [2024-07-23 04:39:35.277586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.584 [2024-07-23 04:39:35.277635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.584 [2024-07-23 04:39:35.278074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.584 [2024-07-23 04:39:35.278095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.278115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.282786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.284215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.585 [2024-07-23 04:39:35.284268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.284314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.284633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.284652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.284712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.285281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.285333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.285384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.285704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.285728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.285745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.289896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.289958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.585 [2024-07-23 04:39:35.290004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.291439] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.291811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.291831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.293430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.293482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.293534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.294938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.295262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.295283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.295300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.299128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.300395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.585 [2024-07-23 04:39:35.301569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.303019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.303344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.303365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.304005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.305402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.306990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.308470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.308789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.308809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.308826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.313819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.315010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.585 [2024-07-23 04:39:35.316449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.317888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.318269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.318291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.319828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.321348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.322914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.324399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.324775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.324795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.324811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.330234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.331690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.585 [2024-07-23 04:39:35.332681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.333723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.334045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.334064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.335493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.336160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.337720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.338118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.338550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.338575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.338593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.345104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.346130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.585 [2024-07-23 04:39:35.347309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.348753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.349073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.349094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.349961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.351364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.351770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.352171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.352496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.352516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.352533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.356769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.585 [2024-07-23 04:39:35.357943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.585 [2024-07-23 04:39:35.359388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.847 [2024-07-23 04:39:35.360829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.847 [2024-07-23 04:39:35.361192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.847 [2024-07-23 04:39:35.361214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.847 [2024-07-23 04:39:35.362393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.847 [2024-07-23 04:39:35.362953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.847 [2024-07-23 04:39:35.363356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.847 [2024-07-23 04:39:35.364465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.847 [2024-07-23 04:39:35.364871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.847 [2024-07-23 04:39:35.364903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.847 [2024-07-23 04:39:35.364919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.847 [2024-07-23 04:39:35.370082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.847 [2024-07-23 04:39:35.371647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.847 [2024-07-23 04:39:35.373098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.847 [2024-07-23 04:39:35.374395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.847 [2024-07-23 04:39:35.374808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.847 [2024-07-23 04:39:35.374829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.847 [2024-07-23 04:39:35.376132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.847 [2024-07-23 04:39:35.376535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.847 [2024-07-23 04:39:35.376929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.847 [2024-07-23 04:39:35.378273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.847 [2024-07-23 04:39:35.378712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.847 [2024-07-23 04:39:35.378734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.847 [2024-07-23 04:39:35.378759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.847 [2024-07-23 04:39:35.383272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.847 [2024-07-23 04:39:35.383683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.847 [2024-07-23 04:39:35.384080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.847 [2024-07-23 04:39:35.384493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.847 [2024-07-23 04:39:35.384974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.847 [2024-07-23 04:39:35.384994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.847 [2024-07-23 04:39:35.386336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.847 [2024-07-23 04:39:35.386733] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.847 [2024-07-23 04:39:35.387126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.847 [2024-07-23 04:39:35.388478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.847 [2024-07-23 04:39:35.388916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.847 [2024-07-23 04:39:35.388938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.847 [2024-07-23 04:39:35.388955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.847 [2024-07-23 04:39:35.394271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.847 [2024-07-23 04:39:35.394676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.847 [2024-07-23 04:39:35.395074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.847 [2024-07-23 04:39:35.395483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.847 [2024-07-23 04:39:35.395865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.847 [2024-07-23 04:39:35.395885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.847 [2024-07-23 04:39:35.397413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.847 [2024-07-23 04:39:35.397816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.847 [2024-07-23 04:39:35.398216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.847 [2024-07-23 04:39:35.399682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.847 [2024-07-23 04:39:35.400100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.847 [2024-07-23 04:39:35.400123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.847 [2024-07-23 04:39:35.400150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.847 [2024-07-23 04:39:35.405645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.847 [2024-07-23 04:39:35.406057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.847 [2024-07-23 04:39:35.406459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.847 [2024-07-23 04:39:35.406854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.847 [2024-07-23 04:39:35.407279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.848 [2024-07-23 04:39:35.407301] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.848 [2024-07-23 04:39:35.408443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.848 [2024-07-23 04:39:35.409029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.848 [2024-07-23 04:39:35.409430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.848 [2024-07-23 04:39:35.410496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.848 [2024-07-23 04:39:35.410887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.848 [2024-07-23 04:39:35.410907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.848 [2024-07-23 04:39:35.410923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.848 [2024-07-23 04:39:35.415555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.848 [2024-07-23 04:39:35.416263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.114 [2024-07-23 04:39:35.629573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.631121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.631448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.631469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.632899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.634308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.634366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.635086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.635420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.635441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.635458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.639589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.639652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.114 [2024-07-23 04:39:35.641070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.641791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.642118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.642145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.643538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.643594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.645090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.646493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.646876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.646896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.646913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.650723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.651887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.114 [2024-07-23 04:39:35.653320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.653373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.653694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.653714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.653776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.654385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.655549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.655600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.655923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.655943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.655959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.659749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.659813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.114 [2024-07-23 04:39:35.659859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.659904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.660326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.660347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.660402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.660449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.660496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.660542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.660882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.660902] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.660918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.664398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.664457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.114 [2024-07-23 04:39:35.664502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.664547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.664861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.664881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.664942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.664988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.665032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.665085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.665523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.665543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.665560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.669532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.669591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.114 [2024-07-23 04:39:35.669640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.669692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.670006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.670038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.670097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.670157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.670210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.670255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.670564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.670584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.670600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.114 [2024-07-23 04:39:35.674969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.675028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.115 [2024-07-23 04:39:35.675073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.675118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.675539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.675560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.675620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.675667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.675713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.675759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.676196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.676218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.676235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.681320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.681378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.115 [2024-07-23 04:39:35.681423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.681468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.681892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.681913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.681978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.682028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.682073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.682118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.682466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.682487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.682503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.687416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.687483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.115 [2024-07-23 04:39:35.687532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.687577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.687894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.687915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.687974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.688020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.688065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.688110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.688536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.688566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.688584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.693000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.693057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.115 [2024-07-23 04:39:35.693108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.693162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.693478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.693498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.693558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.693604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.693649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.693700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.694010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.694034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.694051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.697828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.697888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.115 [2024-07-23 04:39:35.697936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.697982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.698308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.698329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.698401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.698448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.698506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.698552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.698863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.698882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.698899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.703445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.703503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.115 [2024-07-23 04:39:35.703549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.703594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.703997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.704016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.704072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.704118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.704175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.704224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.704630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.704651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.704668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.708479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.708539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.115 [2024-07-23 04:39:35.708587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.708637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.708951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.708971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.709028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.709080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.115 [2024-07-23 04:39:35.709127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.116 [2024-07-23 04:39:35.709185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.116 [2024-07-23 04:39:35.709499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.116 [2024-07-23 04:39:35.709519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.116 [2024-07-23 04:39:35.709535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.116 [2024-07-23 04:39:35.713538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.116 [2024-07-23 04:39:35.713597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.116 [2024-07-23 04:39:35.713649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:27.116 [... previous message repeated continuously from 04:39:35.713699 through 04:39:35.828175 ...]
00:44:27.119 [2024-07-23 04:39:35.828223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.119 [2024-07-23 04:39:35.828615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.119 [2024-07-23 04:39:35.829045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.119 [2024-07-23 04:39:35.829066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.119 [2024-07-23 04:39:35.829878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.119 [2024-07-23 04:39:35.829930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.119 [2024-07-23 04:39:35.829975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.119 [2024-07-23 04:39:35.831131] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.119 [2024-07-23 04:39:35.831455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.119 [2024-07-23 04:39:35.831476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.119 [2024-07-23 04:39:35.831505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.119 [2024-07-23 04:39:35.836714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.119 [2024-07-23 04:39:35.837120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.119 [2024-07-23 04:39:35.837520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.119 [2024-07-23 04:39:35.837912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.119 [2024-07-23 04:39:35.838355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.119 [2024-07-23 04:39:35.838377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.119 [2024-07-23 04:39:35.838799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.119 [2024-07-23 04:39:35.839209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.119 [2024-07-23 04:39:35.839604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.119 [2024-07-23 04:39:35.839999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.119 [2024-07-23 04:39:35.840471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.119 [2024-07-23 04:39:35.840494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.119 [2024-07-23 04:39:35.840511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.119 [2024-07-23 04:39:35.843895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.119 [2024-07-23 04:39:35.844319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.119 [2024-07-23 04:39:35.844721] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.119 [2024-07-23 04:39:35.845127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.119 [2024-07-23 04:39:35.845564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.119 [2024-07-23 04:39:35.845585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.119 [2024-07-23 04:39:35.845990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.119 [2024-07-23 04:39:35.846391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.119 [2024-07-23 04:39:35.846786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.119 [2024-07-23 04:39:35.847193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.119 [2024-07-23 04:39:35.847598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.119 [2024-07-23 04:39:35.847617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.119 [2024-07-23 04:39:35.847634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.119 [2024-07-23 04:39:35.851114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.119 [2024-07-23 04:39:35.851527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.119 [2024-07-23 04:39:35.851922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.119 [2024-07-23 04:39:35.852328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.119 [2024-07-23 04:39:35.852832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.119 [2024-07-23 04:39:35.852852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.119 [2024-07-23 04:39:35.853271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.119 [2024-07-23 04:39:35.853667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.119 [2024-07-23 04:39:35.854059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.119 [2024-07-23 04:39:35.854459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.854885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.854908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.854927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.858362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.858781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.120 [2024-07-23 04:39:35.859186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.859579] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.859988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.860009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.860421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.860817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.861224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.861620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.862026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.862047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.862066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.865573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.865978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.120 [2024-07-23 04:39:35.866387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.866799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.867203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.867226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.867633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.868027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.868428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.868820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.869210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.869232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.869249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.872752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.873169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.120 [2024-07-23 04:39:35.873562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.873953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.874387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.874415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.874823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.875233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.875629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.876021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.876408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.876430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.876447] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.879805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.880230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.120 [2024-07-23 04:39:35.880641] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.881044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.881489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.881511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.881918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.882320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.882711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.883118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.883519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.883540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.883557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.886958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.887374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.120 [2024-07-23 04:39:35.887767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.888207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.888619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.888641] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.889050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.889457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.889851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.890254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.890699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.890722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.120 [2024-07-23 04:39:35.890738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.383 [2024-07-23 04:39:35.894178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.383 [2024-07-23 04:39:35.894590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.383 [2024-07-23 04:39:35.894986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.383 [2024-07-23 04:39:35.895387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.383 [2024-07-23 04:39:35.895785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.383 [2024-07-23 04:39:35.895805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.383 [2024-07-23 04:39:35.896217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.383 [2024-07-23 04:39:35.896615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.383 [2024-07-23 04:39:35.897015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.383 [2024-07-23 04:39:35.897425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.383 [2024-07-23 04:39:35.897871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.383 [2024-07-23 04:39:35.897893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.383 [2024-07-23 04:39:35.897911] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.383 [2024-07-23 04:39:35.901408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.383 [2024-07-23 04:39:35.901813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.383 [2024-07-23 04:39:35.902213] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.383 [2024-07-23 04:39:35.902613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.383 [2024-07-23 04:39:35.902990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.383 [2024-07-23 04:39:35.903010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.383 [2024-07-23 04:39:35.903424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.383 [2024-07-23 04:39:35.903819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.383 [2024-07-23 04:39:35.904217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.383 [2024-07-23 04:39:35.904610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.383 [2024-07-23 04:39:35.905043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.383 [2024-07-23 04:39:35.905063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.383 [2024-07-23 04:39:35.905080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.383 [2024-07-23 04:39:35.908503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.383 [2024-07-23 04:39:35.908917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.383 [2024-07-23 04:39:35.909324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.383 [2024-07-23 04:39:35.909716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.383 [2024-07-23 04:39:35.910136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.383 [2024-07-23 04:39:35.910166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.383 [2024-07-23 04:39:35.910572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.383 [2024-07-23 04:39:35.910973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.383 [2024-07-23 04:39:35.911386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.383 [2024-07-23 04:39:35.911787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.383 [2024-07-23 04:39:35.912218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.383 [2024-07-23 04:39:35.912239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.383 [2024-07-23 04:39:35.912255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.383 [2024-07-23 04:39:35.916074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.383 [2024-07-23 04:39:35.916493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.383 [2024-07-23 04:39:35.916888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.383 [2024-07-23 04:39:35.917294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.383 [2024-07-23 04:39:35.917669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.383 [2024-07-23 04:39:35.917690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.383 [2024-07-23 04:39:35.918096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.383 [2024-07-23 04:39:35.918499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.383 [2024-07-23 04:39:35.918891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.383 [2024-07-23 04:39:35.919289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.383 [2024-07-23 04:39:35.919686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.383 [2024-07-23 04:39:35.919706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.383 [2024-07-23 04:39:35.919722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.383 [2024-07-23 04:39:35.923260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.383 [2024-07-23 04:39:35.923673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.383 [2024-07-23 04:39:35.924069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:27.387 [2024-07-23 04:39:36.128409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.128454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.128764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.128783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.128843] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.128913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.128964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.129008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.129335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.129356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.129373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.131632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.131691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.387 [2024-07-23 04:39:36.131740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.131786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.132105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.132124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.132194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.132241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.132286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.132331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.132636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.132656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.132672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.134512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.134570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.387 [2024-07-23 04:39:36.134614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.134659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.134971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.134991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.135051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.135098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.135151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.135202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.135576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.135596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.135613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.137870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.137927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.387 [2024-07-23 04:39:36.137972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.138017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.138389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.138410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.138469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.138515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.138560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.138614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.138920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.138939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.138956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.140801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.140861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.387 [2024-07-23 04:39:36.140916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.140961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.141281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.141302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.141362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.141407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.141452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.141498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.141915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.141936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.141953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.144162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.144224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.387 [2024-07-23 04:39:36.144286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.144332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.144645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.144665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.144725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.144771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.144816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.144861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.145175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.145196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.145213] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.147056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.387 [2024-07-23 04:39:36.147113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.387 [2024-07-23 04:39:36.147165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.147210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.147609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.147629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.147693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.147739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.147783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.147828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.148261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.148283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.148300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.150403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.150460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.388 [2024-07-23 04:39:36.150505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.150558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.150869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.150889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.150950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.151000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.151044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.151096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.151456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.151478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.151494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.153244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.153303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.388 [2024-07-23 04:39:36.153349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.153394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.153823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.153843] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.153900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.153947] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.153993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.154038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.154468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.154490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.154506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.156570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.156627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.388 [2024-07-23 04:39:36.156678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.156723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.157034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.157053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.157113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.157166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.157211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.157264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.157612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.157632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.157648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.159525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.159583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.388 [2024-07-23 04:39:36.159629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.159673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.160091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.160112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.160179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.160226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.160272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.160318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.160744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.160764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.160781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.162742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.162807] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.388 [2024-07-23 04:39:36.162857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.388 [2024-07-23 04:39:36.162901] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.651 [2024-07-23 04:39:36.163227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.651 [2024-07-23 04:39:36.163248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.651 [2024-07-23 04:39:36.163306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.651 [2024-07-23 04:39:36.163353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.651 [2024-07-23 04:39:36.163398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.651 [2024-07-23 04:39:36.163446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.651 [2024-07-23 04:39:36.163757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.651 [2024-07-23 04:39:36.163777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.651 [2024-07-23 04:39:36.163795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.651 [2024-07-23 04:39:36.165719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.651 [2024-07-23 04:39:36.165778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.651 [2024-07-23 04:39:36.165825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.652 [2024-07-23 04:39:36.165871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.652 [2024-07-23 04:39:36.166295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.652 [2024-07-23 04:39:36.166321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.652 [2024-07-23 04:39:36.166376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.652 [2024-07-23 04:39:36.166423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.652 [2024-07-23 04:39:36.166468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.652 [2024-07-23 04:39:36.166513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.652 [2024-07-23 04:39:36.166925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.652 [2024-07-23 04:39:36.166946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.652 [2024-07-23 04:39:36.166964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.652 [2024-07-23 04:39:36.168848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.652 [2024-07-23 04:39:36.168905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.652 [2024-07-23 04:39:36.168975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:27.655 [identical "Failed to get src_mbufs!" error repeated continuously from 04:39:36.168975 through 04:39:36.253136; repeats omitted]
00:44:27.655 [2024-07-23 04:39:36.253536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.253935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.254337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.254358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.254766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.255166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.255557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.255948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.256313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.256335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.256352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.258983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.259397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.655 [2024-07-23 04:39:36.259796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.260196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.260609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.260630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.261031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.261431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.261828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.262241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.262627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.262688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.262708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.265349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.265759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.655 [2024-07-23 04:39:36.266162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.266554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.266980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.267001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.267412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.267814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.268221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.268627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.269041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.269062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.269079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.271740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.272157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.655 [2024-07-23 04:39:36.272552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.272941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.273287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.273308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.274364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.275227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.276072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.276477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.276914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.276937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.276956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.279557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.279960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.655 [2024-07-23 04:39:36.280361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.280760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.281149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.281170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.281578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.281973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.282373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.282765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.283189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.283213] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.283231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.285819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.286230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.655 [2024-07-23 04:39:36.286626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.287024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.287408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.287429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.287834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.288232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.288625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.289017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.289467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.289488] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.289504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.292152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.293564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.655 [2024-07-23 04:39:36.295125] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.296572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.296890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.655 [2024-07-23 04:39:36.296910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.297776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.298968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.300426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.301848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.302208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.302228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.302246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.306143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.307614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.656 [2024-07-23 04:39:36.309188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.310688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.311053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.311073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.312243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.313693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.315132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.316104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.316510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.316531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.316547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.319894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.321338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.656 [2024-07-23 04:39:36.322788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.323341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.323663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.323684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.325237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.326681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.328012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.328415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.328827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.328848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.328870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.332344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.333804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.656 [2024-07-23 04:39:36.334430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.335855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.336176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.336196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.337745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.339319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.339716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.340110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.340520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.340541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.340559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.344050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.345195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.656 [2024-07-23 04:39:36.346493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.347677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.347995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.348014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.349450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.350015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.350414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.350806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.351262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.351283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.351300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.354567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.355647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.656 [2024-07-23 04:39:36.356833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.358280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.358601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.358621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.359439] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.359846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.360245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.360659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.361072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.361095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.361113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.363397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.364592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.656 [2024-07-23 04:39:36.366040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.367497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.656 [2024-07-23 04:39:36.367833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.657 [2024-07-23 04:39:36.367853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.657 [2024-07-23 04:39:36.368269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.657 [2024-07-23 04:39:36.368664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.657 [2024-07-23 04:39:36.369054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.657 [2024-07-23 04:39:36.369452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.657 [2024-07-23 04:39:36.369831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.657 [2024-07-23 04:39:36.369851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.657 [2024-07-23 04:39:36.369868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.657 [2024-07-23 04:39:36.373147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.657 [2024-07-23 04:39:36.374737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.657 [2024-07-23 04:39:36.376246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:27.920 [2024-07-23 04:39:36.549333] (previous message repeated with varying timestamps through 04:39:36.549333)
00:44:27.920 [2024-07-23 04:39:36.549378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.920 [2024-07-23 04:39:36.549424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.920 [2024-07-23 04:39:36.549852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.920 [2024-07-23 04:39:36.549873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.920 [2024-07-23 04:39:36.549935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.920 [2024-07-23 04:39:36.549981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.920 [2024-07-23 04:39:36.550026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.920 [2024-07-23 04:39:36.550071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.920 [2024-07-23 04:39:36.550456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.920 [2024-07-23 04:39:36.550481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.920 [2024-07-23 04:39:36.550497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.920 [2024-07-23 04:39:36.552586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.920 [2024-07-23 04:39:36.552655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.921 [2024-07-23 04:39:36.552707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.552753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.553181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.553203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.553258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.553305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.553351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.553396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.553758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.553778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.553795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.555626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.555682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.921 [2024-07-23 04:39:36.555728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.555772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.556123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.556150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.556214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.556260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.556305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.556349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.556659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.556678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.556695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.558754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.558812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.921 [2024-07-23 04:39:36.558861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.558906] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.559361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.559382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.559438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.559485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.559531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.559577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.559886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.559906] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.559922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.561745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.561812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.921 [2024-07-23 04:39:36.561857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.561908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.562233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.562255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.562312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.562362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.562411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.562456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.562765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.562785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.562801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.565045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.565103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.921 [2024-07-23 04:39:36.565160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.565207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.565632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.565652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.565710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.565760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.565805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.565849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.566206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.566227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.566244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.568045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.568107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.921 [2024-07-23 04:39:36.568160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.568205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.568521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.568541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.568605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.568652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.568697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.568742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.569053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.569073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.569090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.571474] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.571533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.921 [2024-07-23 04:39:36.571580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.571636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.571996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.572016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.572074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.572120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.572172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.572217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.572543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.572567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.572584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.574392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.574449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.921 [2024-07-23 04:39:36.574495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.574540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.574850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.574871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.574931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.574979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.575034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.575079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.575396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.575418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.575435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.577688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.577745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.921 [2024-07-23 04:39:36.577796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.577846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.578164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.578186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.578245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.921 [2024-07-23 04:39:36.578310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.922 [2024-07-23 04:39:36.578357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.922 [2024-07-23 04:39:36.578402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.922 [2024-07-23 04:39:36.578712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.922 [2024-07-23 04:39:36.578731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.922 [2024-07-23 04:39:36.578747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.922 [2024-07-23 04:39:36.580556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.922 [2024-07-23 04:39:36.580613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.922 [2024-07-23 04:39:36.580670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.922 [2024-07-23 04:39:36.580723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.922 [2024-07-23 04:39:36.581033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.922 [2024-07-23 04:39:36.581053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.922 [2024-07-23 04:39:36.581116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.922 [2024-07-23 04:39:36.581171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.922 [2024-07-23 04:39:36.581217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.922 [2024-07-23 04:39:36.581262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.922 [2024-07-23 04:39:36.581601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.922 [2024-07-23 04:39:36.581622] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.922 [2024-07-23 04:39:36.581638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.922 [2024-07-23 04:39:36.583896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.922 [2024-07-23 04:39:36.583953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.922 [2024-07-23 04:39:36.583999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.922 [2024-07-23 04:39:36.584044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.922 [2024-07-23 04:39:36.584411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.922 [2024-07-23 04:39:36.584432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.922 [2024-07-23 04:39:36.584520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.922 [2024-07-23 04:39:36.584580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.922 [2024-07-23 04:39:36.584638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.922 [2024-07-23 04:39:36.584686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.922 [2024-07-23 04:39:36.585082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.922 [2024-07-23 04:39:36.585102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.922 [2024-07-23 04:39:36.585118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.922 [2024-07-23 04:39:36.587371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.922 [2024-07-23 04:39:36.587429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.922 [2024-07-23 04:39:36.587475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.922 [2024-07-23 04:39:36.587521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.922 [2024-07-23 04:39:36.587910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.922 [2024-07-23 04:39:36.587931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.922 [2024-07-23 04:39:36.587996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.922 [2024-07-23 04:39:36.588046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.922 [2024-07-23 04:39:36.588103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.922 [2024-07-23 04:39:36.588167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.922 [2024-07-23 04:39:36.588603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.922 [2024-07-23 04:39:36.588624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.922 [2024-07-23 04:39:36.588641] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.922 [2024-07-23 04:39:36.590914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.922 [2024-07-23 04:39:36.590973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.922 [2024-07-23 04:39:36.591047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:27.925 [last message repeated through 2024-07-23 04:39:36.698272]
00:44:27.925 [2024-07-23 04:39:36.699475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.185 [2024-07-23 04:39:36.700667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.700989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.701009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.702448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.703060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.703466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.703862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.704283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.704304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.704321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.707681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.708431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:28.186 [2024-07-23 04:39:36.709629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.711076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.711405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.711426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.712524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.712921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.713322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.713716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.714128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.714155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.714183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.716598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.717892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:28.186 [2024-07-23 04:39:36.719332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.720805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.721153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.721174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.721587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.721985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.722391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.722785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.723194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.723215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.723232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.726587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.728164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:28.186 [2024-07-23 04:39:36.729677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.731120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.731507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.731528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.731934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.732339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.732734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.733135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.733456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.733477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.733494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.736521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.737977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:28.186 [2024-07-23 04:39:36.739486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.739889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.740337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.740360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.740767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.741176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.741568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.743145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.743466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.743486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.743503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.746710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.748181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:28.186 [2024-07-23 04:39:36.748849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.749259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.749685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.749707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.750113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.750516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.751631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.752797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.753118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.753145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.753162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.756412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.757405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:28.186 [2024-07-23 04:39:36.757807] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.758212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.758599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.758619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.759029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.186 [2024-07-23 04:39:36.759905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.761071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.762523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.762842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.762862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.762879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.766109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.766533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:28.187 [2024-07-23 04:39:36.766932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.767336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.767762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.767785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.768239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.769509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.770950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.772407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.772725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.772746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.772762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.774929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.775345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:28.187 [2024-07-23 04:39:36.775741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.776136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.776588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.776609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.778158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.779675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.781264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.782686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.783092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.783112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.783128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.785717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.786126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:28.187 [2024-07-23 04:39:36.786538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.787586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.787941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.787961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.789380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.790811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.791609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.793173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.793497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.793516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.793533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.795903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.796319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:28.187 [2024-07-23 04:39:36.797097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.798262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.798583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.798603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.800039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.801100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.802448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.803672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.803993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.804013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.804029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.806493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.806922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:28.187 [2024-07-23 04:39:36.808236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.809717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.810037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.810057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.811519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.812512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.813696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.815154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.815473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.815493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.815510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.818220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.819758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:28.187 [2024-07-23 04:39:36.821276] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.822828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.823155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.187 [2024-07-23 04:39:36.823176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.188 [2024-07-23 04:39:36.823968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.188 [2024-07-23 04:39:36.825137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.188 [2024-07-23 04:39:36.826595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.188 [2024-07-23 04:39:36.828032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.188 [2024-07-23 04:39:36.828386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.188 [2024-07-23 04:39:36.828407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.188 [2024-07-23 04:39:36.828425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.188 [2024-07-23 04:39:36.832014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:28.188 [2024-07-23 04:39:36.833211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:28.188 [2024-07-23 04:39:36.834642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:28.189 [2024-07-23 04:39:36.943074] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs!
00:44:29.127
00:44:29.127 Latency(us)
00:44:29.127 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:44:29.127 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:44:29.127 Verification LBA range: start 0x0 length 0x100
00:44:29.127 crypto_ram : 5.98 42.79 2.67 0.00 0.00 2916240.59 271790.90 2483027.97
00:44:29.127 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:44:29.127 Verification LBA range: start 0x100 length 0x100
00:44:29.127 crypto_ram : 5.88 43.55 2.72 0.00 0.00 2856180.12 315411.66 2308544.92
00:44:29.127 Job: crypto_ram1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:44:29.127 Verification LBA range: start 0x0 length 0x100
00:44:29.127 crypto_ram1 : 5.98 42.79 2.67 0.00 0.00 2823140.15 270113.18 2308544.92
00:44:29.127 Job: crypto_ram1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:44:29.127 Verification LBA range: start 0x100 length 0x100
00:44:29.127 crypto_ram1 : 5.88 43.54 2.72 0.00 0.00 2764836.04 313733.94 2134061.88
00:44:29.127 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:44:29.127 Verification LBA range: start 0x0 length 0x100
00:44:29.127 crypto_ram2 : 5.59 274.00 17.13 0.00 0.00 423911.71 100663.30 617401.55
00:44:29.127 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:44:29.127 Verification LBA range: start 0x100 length 0x100
00:44:29.127 crypto_ram2 : 5.57 294.32 18.40 0.00 0.00 393981.46 80111.21 597268.89
00:44:29.127 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:44:29.127 Verification LBA range: start 0x0 length 0x100
00:44:29.127 crypto_ram3 : 5.66 279.64 17.48 0.00 0.00 401178.88 27053.26 360710.14
00:44:29.127 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:44:29.127 Verification LBA range: start 0x100 length 0x100
00:44:29.127 crypto_ram3 : 5.62 300.14 18.76 0.00 0.00 375143.81 12215.91 463051.16
00:44:29.127 ===================================================================================================================
00:44:29.127 Total : 1320.78 82.55 0.00 0.00 732800.85 12215.91 2483027.97
00:44:31.663
00:44:31.663 real 0m12.944s
00:44:31.663 user 0m23.910s
00:44:31.663 sys 0m0.637s
04:39:40 blockdev_crypto_qat.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable
00:44:31.663 04:39:40 blockdev_crypto_qat.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:44:31.663 ************************************
00:44:31.663 END TEST bdev_verify_big_io
00:44:31.663 ************************************
00:44:31.663 04:39:40 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 0
00:44:31.663 04:39:40 blockdev_crypto_qat -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:44:31.663 04:39:40 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:44:31.663 04:39:40 blockdev_crypto_qat -- common/autotest_common.sh@1105 -- # xtrace_disable
00:44:31.663 04:39:40 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x
00:44:31.663 ************************************
00:44:31.663 START TEST bdev_write_zeroes
00:44:31.663 ************************************
00:44:31.663 04:39:40 blockdev_crypto_qat.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:44:31.663 [2024-07-23 04:39:40.407365] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
00:44:31.663 [2024-07-23 04:39:40.407478] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2941299 ]
00:44:31.922 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:31.922 EAL: Requested device 0000:3d:01.0 cannot be used
00:44:31.922 EAL: Requested device 0000:3d:01.1 cannot be used
00:44:31.922 EAL: Requested device 0000:3d:01.2 cannot be used
00:44:31.922 EAL: Requested device 0000:3d:01.3 cannot be used
00:44:31.922 EAL: Requested device 0000:3d:01.4 cannot be used
00:44:31.922 EAL: Requested device 0000:3d:01.5 cannot be used
00:44:31.922 EAL: Requested device 0000:3d:01.6 cannot be used
00:44:31.922 EAL: Requested device 0000:3d:01.7 cannot be used
00:44:31.922 EAL: Requested device 0000:3d:02.0 cannot be used
00:44:31.922 EAL: Requested device 0000:3d:02.1 cannot be used
00:44:31.922 EAL: Requested device 0000:3d:02.2 cannot be used
00:44:31.922 EAL: Requested device 0000:3d:02.3 cannot be used
00:44:31.922 EAL: Requested device 0000:3d:02.4 cannot be used
00:44:31.922 EAL: Requested device 0000:3d:02.5 cannot be used
00:44:31.922 EAL: Requested device 0000:3d:02.6 cannot be used
00:44:31.922 EAL: Requested device 0000:3d:02.7 cannot be used
00:44:31.922 EAL: Requested device 0000:3f:01.0 cannot be used
00:44:31.922 EAL: Requested device 0000:3f:01.1 cannot be used
00:44:31.922 EAL: Requested device 0000:3f:01.2 cannot be used
00:44:31.923 EAL: Requested device 0000:3f:01.3 cannot be used
00:44:31.923 EAL: Requested device 0000:3f:01.4 cannot be used
00:44:31.923 EAL: Requested device 0000:3f:01.5 cannot be used
00:44:31.923 EAL: Requested device 0000:3f:01.6 cannot be used
00:44:31.923 EAL: Requested device 0000:3f:01.7 cannot be used
00:44:31.923 EAL: Requested device 0000:3f:02.0 cannot be used
00:44:31.923 EAL: Requested device 0000:3f:02.1 cannot be used
00:44:31.923 EAL: Requested device 0000:3f:02.2 cannot be used
00:44:31.923 EAL: Requested device 0000:3f:02.3 cannot be used
00:44:31.923 EAL: Requested device 0000:3f:02.4 cannot be used
00:44:31.923 EAL: Requested device 0000:3f:02.5 cannot be used
00:44:31.923 EAL: Requested device 0000:3f:02.6 cannot be used
00:44:31.923 EAL: Requested device 0000:3f:02.7 cannot be used
00:44:31.923 [2024-07-23 04:39:40.634187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:44:32.182 [2024-07-23 04:39:40.918183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:44:32.182 [2024-07-23 04:39:40.939932] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat
00:44:32.182 [2024-07-23 04:39:40.947961] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:44:32.182 [2024-07-23 04:39:40.955966] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:44:32.751 [2024-07-23 04:39:41.343837] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96
00:44:36.040 [2024-07-23 04:39:44.200968] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc"
00:44:36.040 [2024-07-23 04:39:44.201049] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:44:36.040 [2024-07-23 04:39:44.201072] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:44:36.041 [2024-07-23 04:39:44.208983] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts"
00:44:36.041 [2024-07-23 04:39:44.209022] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:44:36.041 [2024-07-23 04:39:44.209038] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:44:36.041 [2024-07-23 04:39:44.217021] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2"
00:44:36.041 [2024-07-23 04:39:44.217054] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:44:36.041 [2024-07-23 04:39:44.217069] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:44:36.041 [2024-07-23 04:39:44.225023] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2"
00:44:36.041 [2024-07-23 04:39:44.225054] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:44:36.041 [2024-07-23 04:39:44.225074] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:44:36.041 Running I/O for 1 seconds...
00:44:36.978
00:44:36.978 Latency(us)
00:44:36.978 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:44:36.978 Job: crypto_ram (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:44:36.978 crypto_ram : 1.03 1890.36 7.38 0.00 0.00 67085.48 6815.74 82208.36
00:44:36.978 Job: crypto_ram1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:44:36.978 crypto_ram1 : 1.03 1903.52 7.44 0.00 0.00 66244.39 6579.81 75497.47
00:44:36.978 Job: crypto_ram2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:44:36.978 crypto_ram2 : 1.02 14575.89 56.94 0.00 0.00 8633.82 2634.55 11429.48
00:44:36.978 Job: crypto_ram3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:44:36.978 crypto_ram3 : 1.02 14556.09 56.86 0.00 0.00 8603.45 2621.44 9017.75
00:44:36.978 ===================================================================================================================
00:44:36.978 Total : 32925.85 128.62 0.00 0.00 15337.27 2621.44 82208.36
00:44:39.516
00:44:39.516 real 0m7.784s
00:44:39.516 user 0m7.197s
00:44:39.516 sys 0m0.524s
04:39:48 blockdev_crypto_qat.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable
00:44:39.516 04:39:48 blockdev_crypto_qat.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:44:39.516 ************************************
00:44:39.516 END TEST bdev_write_zeroes
00:44:39.516 ************************************
00:44:39.516 04:39:48 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 0
00:44:39.516 04:39:48 blockdev_crypto_qat -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:44:39.516 04:39:48 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:44:39.516 04:39:48 blockdev_crypto_qat -- common/autotest_common.sh@1105 -- # xtrace_disable
00:44:39.516 04:39:48 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x
00:44:39.516 ************************************
00:44:39.516 START TEST bdev_json_nonenclosed
00:44:39.516 ************************************
00:44:39.516 04:39:48 blockdev_crypto_qat.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:44:39.516 [2024-07-23 04:39:48.278714] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
00:44:39.516 [2024-07-23 04:39:48.278830] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2942614 ]
00:44:39.776 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:39.776 EAL: Requested device 0000:3d:01.0 cannot be used
00:44:39.776 EAL: Requested device 0000:3d:01.1 cannot be used
00:44:39.776 EAL: Requested device 0000:3d:01.2 cannot be used
00:44:39.776 EAL: Requested device 0000:3d:01.3 cannot be used
00:44:39.776 EAL: Requested device 0000:3d:01.4 cannot be used
00:44:39.776 EAL: Requested device 0000:3d:01.5 cannot be used
00:44:39.776 EAL: Requested device 0000:3d:01.6 cannot be used
00:44:39.776 EAL: Requested device 0000:3d:01.7 cannot be used
00:44:39.776 EAL: Requested device 0000:3d:02.0 cannot be used
00:44:39.776 EAL: Requested device 0000:3d:02.1 cannot be used
00:44:39.776 EAL: Requested device 0000:3d:02.2 cannot be used
00:44:39.776 EAL: Requested device 0000:3d:02.3 cannot be used
00:44:39.776 EAL: Requested device 0000:3d:02.4 cannot be used
00:44:39.776 EAL: Requested device 0000:3d:02.5 cannot be used
00:44:39.776 EAL: Requested device 0000:3d:02.6 cannot be used
00:44:39.776 EAL: Requested device 0000:3d:02.7 cannot be used
00:44:39.776 EAL: Requested device 0000:3f:01.0 cannot be used
00:44:39.776 EAL: Requested device 0000:3f:01.1 cannot be used
00:44:39.776 EAL: Requested device 0000:3f:01.2 cannot be used
00:44:39.776 EAL: Requested device 0000:3f:01.3 cannot be used
00:44:39.776 EAL: Requested device 0000:3f:01.4 cannot be used
00:44:39.776 EAL: Requested device 0000:3f:01.5 cannot be used
00:44:39.776 EAL: Requested device 0000:3f:01.6 cannot be used
00:44:39.776 EAL: Requested device 0000:3f:01.7 cannot be used
00:44:39.776 EAL: Requested device 0000:3f:02.0 cannot be used
00:44:39.776 EAL: Requested device 0000:3f:02.1 cannot be used
00:44:39.776 EAL: Requested device 0000:3f:02.2 cannot be used
00:44:39.776 EAL: Requested device 0000:3f:02.3 cannot be used
00:44:39.776 EAL: Requested device 0000:3f:02.4 cannot be used
00:44:39.776 EAL: Requested device 0000:3f:02.5 cannot be used
00:44:39.776 EAL: Requested device 0000:3f:02.6 cannot be used
00:44:39.776 EAL: Requested device 0000:3f:02.7 cannot be used
00:44:39.776 [2024-07-23 04:39:48.507444] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:44:40.035 [2024-07-23 04:39:48.777389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:44:40.035 [2024-07-23 04:39:48.777477] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:44:40.035 [2024-07-23 04:39:48.777510] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:44:40.035 [2024-07-23 04:39:48.777526] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:44:40.604 00:44:40.604 real 0m1.166s 00:44:40.604 user 0m0.875s 00:44:40.604 sys 0m0.284s 00:44:40.604 04:39:49 blockdev_crypto_qat.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:44:40.604 04:39:49 blockdev_crypto_qat.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:40.604 04:39:49 blockdev_crypto_qat.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:44:40.604 ************************************ 00:44:40.604 END TEST bdev_json_nonenclosed 00:44:40.604 ************************************ 00:44:40.864 04:39:49 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 234 00:44:40.864 04:39:49 blockdev_crypto_qat -- bdev/blockdev.sh@781 -- # true 00:44:40.864 04:39:49 blockdev_crypto_qat -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:44:40.864 04:39:49 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:44:40.864 04:39:49 blockdev_crypto_qat -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:40.864 04:39:49 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:44:40.864 ************************************ 00:44:40.864 START TEST bdev_json_nonarray 00:44:40.864 ************************************ 00:44:40.864 04:39:49 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:44:40.864 
[2024-07-23 04:39:49.536260] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:44:40.864 [2024-07-23 04:39:49.536378] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2942897 ] 00:44:41.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:41.124 EAL: Requested device 0000:3d:01.0 cannot be used [previous two messages repeated for QAT devices 0000:3d:01.1 through 0000:3f:02.7] 00:44:41.124 [2024-07-23 04:39:49.759905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:41.385 [2024-07-23 04:39:50.040290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:41.385 [2024-07-23 04:39:50.040387] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:44:41.385 [2024-07-23 04:39:50.040421] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:44:41.385 [2024-07-23 04:39:50.040437] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:44:41.989 00:44:41.989 real 0m1.204s 00:44:41.989 user 0m0.936s 00:44:41.989 sys 0m0.261s 00:44:41.989 04:39:50 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:44:41.989 04:39:50 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:41.989 04:39:50 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:44:41.989 ************************************ 00:44:41.989 END TEST bdev_json_nonarray 00:44:41.989 ************************************ 00:44:41.989 04:39:50 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 234 00:44:41.989 04:39:50 blockdev_crypto_qat -- bdev/blockdev.sh@784 -- # true 00:44:41.989 04:39:50 blockdev_crypto_qat -- bdev/blockdev.sh@786 -- # [[ crypto_qat == bdev ]] 00:44:41.989 04:39:50 blockdev_crypto_qat -- bdev/blockdev.sh@793 -- # [[ crypto_qat == gpt ]] 00:44:41.989 04:39:50 blockdev_crypto_qat -- bdev/blockdev.sh@797 -- # [[ crypto_qat == crypto_sw ]] 00:44:41.989 04:39:50 blockdev_crypto_qat -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:44:41.989 04:39:50 blockdev_crypto_qat -- bdev/blockdev.sh@810 -- # cleanup 00:44:41.989 04:39:50 blockdev_crypto_qat -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:44:41.989 04:39:50 blockdev_crypto_qat -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:44:41.989 04:39:50 blockdev_crypto_qat -- bdev/blockdev.sh@26 -- # [[ crypto_qat == rbd ]] 00:44:41.989 04:39:50 blockdev_crypto_qat -- bdev/blockdev.sh@30 -- # [[ crypto_qat == daos ]] 00:44:41.989 04:39:50 blockdev_crypto_qat -- bdev/blockdev.sh@34 -- # [[ crypto_qat = \g\p\t 
]] 00:44:41.989 04:39:50 blockdev_crypto_qat -- bdev/blockdev.sh@40 -- # [[ crypto_qat == xnvme ]] 00:44:41.989 00:44:41.989 real 1m48.515s 00:44:41.989 user 3m43.777s 00:44:41.989 sys 0m11.040s 00:44:41.989 04:39:50 blockdev_crypto_qat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:41.989 04:39:50 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:44:41.989 ************************************ 00:44:41.989 END TEST blockdev_crypto_qat 00:44:41.989 ************************************ 00:44:41.989 04:39:50 -- common/autotest_common.sh@1142 -- # return 0 00:44:41.989 04:39:50 -- spdk/autotest.sh@360 -- # run_test chaining /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/chaining.sh 00:44:41.989 04:39:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:44:41.989 04:39:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:41.989 04:39:50 -- common/autotest_common.sh@10 -- # set +x 00:44:41.989 ************************************ 00:44:41.989 START TEST chaining 00:44:41.989 ************************************ 00:44:41.989 04:39:50 chaining -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/chaining.sh 00:44:42.250 * Looking for test storage... 
00:44:42.250 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:44:42.250 04:39:50 chaining -- bdev/chaining.sh@14 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:44:42.250 04:39:50 chaining -- nvmf/common.sh@7 -- # uname -s 00:44:42.250 04:39:50 chaining -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:42.250 04:39:50 chaining -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:42.250 04:39:50 chaining -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:42.250 04:39:50 chaining -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:42.250 04:39:50 chaining -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:42.250 04:39:50 chaining -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:42.250 04:39:50 chaining -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:42.250 04:39:50 chaining -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:42.250 04:39:50 chaining -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:42.250 04:39:50 chaining -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:42.250 04:39:50 chaining -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:44:42.250 04:39:50 chaining -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:44:42.250 04:39:50 chaining -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:42.250 04:39:50 chaining -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:42.250 04:39:50 chaining -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:44:42.250 04:39:50 chaining -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:42.250 04:39:50 chaining -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:44:42.250 04:39:50 chaining -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:42.250 04:39:50 chaining -- scripts/common.sh@516 -- # 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:42.250 04:39:50 chaining -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:42.250 04:39:50 chaining -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:42.250 04:39:50 chaining -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:42.250 04:39:50 chaining -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:42.250 04:39:50 chaining -- paths/export.sh@5 -- # export PATH 00:44:42.250 04:39:50 chaining -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:42.250 04:39:50 chaining -- nvmf/common.sh@47 -- # : 0 00:44:42.250 04:39:50 chaining -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:44:42.250 04:39:50 chaining -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:44:42.250 04:39:50 chaining -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:42.250 04:39:50 chaining -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:42.250 04:39:50 chaining -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:42.250 04:39:50 chaining -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:44:42.250 04:39:50 chaining -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:44:42.250 04:39:50 chaining -- nvmf/common.sh@51 -- # have_pci_nics=0 00:44:42.250 04:39:50 chaining -- bdev/chaining.sh@16 -- # nqn=nqn.2016-06.io.spdk:cnode0 00:44:42.250 04:39:50 chaining -- bdev/chaining.sh@17 -- # key0=(00112233445566778899001122334455 11223344556677889900112233445500) 00:44:42.250 04:39:50 chaining -- bdev/chaining.sh@18 -- # key1=(22334455667788990011223344550011 33445566778899001122334455001122) 00:44:42.250 04:39:50 chaining -- bdev/chaining.sh@19 -- # bperfsock=/var/tmp/bperf.sock 00:44:42.250 04:39:50 chaining -- bdev/chaining.sh@20 -- # declare -A stats 00:44:42.250 04:39:50 chaining -- bdev/chaining.sh@66 -- # nvmftestinit 00:44:42.250 04:39:50 chaining -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:44:42.250 04:39:50 chaining -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:42.250 04:39:50 chaining -- nvmf/common.sh@448 -- # prepare_net_devs 00:44:42.250 04:39:50 chaining -- nvmf/common.sh@410 -- # local 
-g is_hw=no 00:44:42.250 04:39:50 chaining -- nvmf/common.sh@412 -- # remove_spdk_ns 00:44:42.250 04:39:50 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:42.250 04:39:50 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:42.250 04:39:50 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:42.250 04:39:50 chaining -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:44:42.250 04:39:50 chaining -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:44:42.250 04:39:50 chaining -- nvmf/common.sh@285 -- # xtrace_disable 00:44:42.250 04:39:50 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@291 -- # pci_devs=() 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@291 -- # local -a pci_devs 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@292 -- # pci_net_devs=() 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@293 -- # pci_drivers=() 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@293 -- # local -A pci_drivers 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@295 -- # net_devs=() 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@295 -- # local -ga net_devs 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@296 -- # e810=() 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@296 -- # local -ga e810 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@297 -- # x722=() 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@297 -- # local -ga x722 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@298 -- # mlx=() 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@298 -- # local -ga mlx 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:20:00.0 (0x8086 - 0x159b)' 00:44:50.369 Found 0000:20:00.0 (0x8086 - 0x159b) 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:50.369 
04:39:58 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:20:00.1 (0x8086 - 0x159b)' 00:44:50.369 Found 0000:20:00.1 (0x8086 - 0x159b) 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:20:00.0: cvl_0_0' 00:44:50.369 Found net devices under 0000:20:00.0: cvl_0_0 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:44:50.369 04:39:58 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:20:00.1: cvl_0_1' 00:44:50.369 Found net devices under 0000:20:00.1: cvl_0_1 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@414 -- # is_hw=yes 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@248 -- # ip 
netns add cvl_0_0_ns_spdk 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:44:50.369 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:50.369 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:44:50.369 00:44:50.369 --- 10.0.0.2 ping statistics --- 00:44:50.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:50.369 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:50.369 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:44:50.369 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:44:50.369 00:44:50.369 --- 10.0.0.1 ping statistics --- 00:44:50.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:50.369 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:44:50.369 04:39:58 chaining -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:50.370 04:39:58 chaining -- nvmf/common.sh@422 -- # return 0 00:44:50.370 04:39:58 chaining -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:44:50.370 04:39:58 chaining -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:50.370 04:39:58 chaining -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:44:50.370 04:39:58 chaining -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:44:50.370 04:39:58 chaining -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:50.370 04:39:58 chaining -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:44:50.370 04:39:58 chaining -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:44:50.370 04:39:59 chaining -- bdev/chaining.sh@67 -- # nvmfappstart -m 0x2 00:44:50.370 04:39:59 chaining -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:44:50.370 04:39:59 chaining -- common/autotest_common.sh@722 -- # xtrace_disable 00:44:50.370 04:39:59 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:50.370 04:39:59 chaining -- nvmf/common.sh@481 -- # nvmfpid=2947119 00:44:50.370 04:39:59 chaining -- nvmf/common.sh@482 -- # waitforlisten 2947119 00:44:50.370 04:39:59 chaining -- common/autotest_common.sh@829 -- # '[' -z 2947119 ']' 00:44:50.370 04:39:59 chaining -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:50.370 04:39:59 chaining -- common/autotest_common.sh@834 -- # local max_retries=100 00:44:50.370 04:39:59 chaining -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:50.370 04:39:59 chaining -- common/autotest_common.sh@838 -- # xtrace_disable 00:44:50.370 04:39:59 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:50.370 04:39:59 chaining -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:44:50.370 [2024-07-23 04:39:59.132262] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:44:50.370 [2024-07-23 04:39:59.132396] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:50.646 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:50.646 EAL: Requested device 0000:3d:01.0 cannot be used [previous two messages repeated for QAT devices 0000:3d:01.1 through 0000:3f:01.5]
QAT devices 00:44:50.646 EAL: Requested device 0000:3f:01.6 cannot be used 00:44:50.646 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:50.646 EAL: Requested device 0000:3f:01.7 cannot be used 00:44:50.646 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:50.646 EAL: Requested device 0000:3f:02.0 cannot be used 00:44:50.646 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:50.646 EAL: Requested device 0000:3f:02.1 cannot be used 00:44:50.646 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:50.646 EAL: Requested device 0000:3f:02.2 cannot be used 00:44:50.646 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:50.646 EAL: Requested device 0000:3f:02.3 cannot be used 00:44:50.646 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:50.646 EAL: Requested device 0000:3f:02.4 cannot be used 00:44:50.646 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:50.646 EAL: Requested device 0000:3f:02.5 cannot be used 00:44:50.646 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:50.646 EAL: Requested device 0000:3f:02.6 cannot be used 00:44:50.646 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:50.646 EAL: Requested device 0000:3f:02.7 cannot be used 00:44:50.646 [2024-07-23 04:39:59.355874] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:50.905 [2024-07-23 04:39:59.624479] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:50.905 [2024-07-23 04:39:59.624529] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:50.905 [2024-07-23 04:39:59.624549] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:50.905 [2024-07-23 04:39:59.624565] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:44:50.905 [2024-07-23 04:39:59.624581] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:50.905 [2024-07-23 04:39:59.624632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:44:51.840 04:40:00 chaining -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:44:51.840 04:40:00 chaining -- common/autotest_common.sh@862 -- # return 0 00:44:51.840 04:40:00 chaining -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:44:51.840 04:40:00 chaining -- common/autotest_common.sh@728 -- # xtrace_disable 00:44:51.840 04:40:00 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:51.840 04:40:00 chaining -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:51.840 04:40:00 chaining -- bdev/chaining.sh@69 -- # mktemp 00:44:51.840 04:40:00 chaining -- bdev/chaining.sh@69 -- # input=/tmp/tmp.x3dZIdntMg 00:44:51.840 04:40:00 chaining -- bdev/chaining.sh@69 -- # mktemp 00:44:51.840 04:40:00 chaining -- bdev/chaining.sh@69 -- # output=/tmp/tmp.YtJQRde0AX 00:44:51.840 04:40:00 chaining -- bdev/chaining.sh@70 -- # trap 'tgtcleanup; exit 1' SIGINT SIGTERM EXIT 00:44:51.840 04:40:00 chaining -- bdev/chaining.sh@72 -- # rpc_cmd 00:44:51.840 04:40:00 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:51.840 04:40:00 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:51.840 malloc0 00:44:51.840 true 00:44:51.840 true 00:44:51.840 [2024-07-23 04:40:00.400580] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:44:51.840 crypto0 00:44:51.840 [2024-07-23 04:40:00.408595] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key1" 00:44:51.840 crypto1 00:44:51.840 [2024-07-23 04:40:00.416754] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:51.840 [2024-07-23 04:40:00.432958] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
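At this point the test has created its scratch input/output files with `mktemp` (chaining.sh@69) and installed a cleanup trap (chaining.sh@70) before the crypto bdevs and the NVMe/TCP listener come up. A minimal standalone sketch of that temp-file-plus-trap pattern, where `tgtcleanup` is a stand-in for the real helper (assumption):

```shell
#!/usr/bin/env bash
# Sketch of the mktemp + cleanup-trap pattern used by chaining.sh;
# tgtcleanup here only removes scratch files, unlike the real helper.
set -euo pipefail

input=$(mktemp)    # plaintext fed into the crypto bdev
output=$(mktemp)   # buffer read back for comparison

tgtcleanup() {
    # Remove scratch files no matter how the test exits.
    rm -f "$input" "$output"
}
# Abort paths clean up and fail; normal exit cleans up too.
trap 'tgtcleanup; exit 1' SIGINT SIGTERM
trap tgtcleanup EXIT

echo "scratch files: $input $output"
```

The `EXIT` trap guarantees the temp files are removed even if a later assertion aborts the script mid-run.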
00:44:51.840 04:40:00 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:51.840 04:40:00 chaining -- bdev/chaining.sh@85 -- # update_stats 00:44:51.840 04:40:00 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:44:51.840 04:40:00 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:51.840 04:40:00 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:44:51.840 04:40:00 chaining -- bdev/chaining.sh@39 -- # opcode= 00:44:51.840 04:40:00 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:51.840 04:40:00 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:44:51.840 04:40:00 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:44:51.840 04:40:00 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:44:51.840 04:40:00 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:51.840 04:40:00 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:51.840 04:40:00 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:51.840 04:40:00 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=12 00:44:51.840 04:40:00 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:44:51.840 04:40:00 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:51.840 04:40:00 chaining -- bdev/chaining.sh@39 -- # event=executed 00:44:51.840 04:40:00 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:44:51.840 04:40:00 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:51.840 04:40:00 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:44:51.840 04:40:00 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:44:51.840 04:40:00 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:51.840 04:40:00 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:51.840 04:40:00 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:44:51.840 04:40:00 chaining -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:44:51.840 04:40:00 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]= 00:44:51.840 04:40:00 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:44:51.840 04:40:00 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:51.840 04:40:00 chaining -- bdev/chaining.sh@39 -- # event=executed 00:44:51.840 04:40:00 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:44:51.840 04:40:00 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:51.840 04:40:00 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:44:51.840 04:40:00 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:44:51.840 04:40:00 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:44:51.840 04:40:00 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:51.840 04:40:00 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:51.841 04:40:00 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:51.841 04:40:00 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=12 00:44:51.841 04:40:00 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:44:51.841 04:40:00 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:51.841 04:40:00 chaining -- bdev/chaining.sh@39 -- # event=executed 00:44:51.841 04:40:00 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:44:51.841 04:40:00 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:51.841 04:40:00 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:44:51.841 04:40:00 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:44:51.841 04:40:00 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:51.841 04:40:00 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:51.841 04:40:00 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:44:51.841 04:40:00 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
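The repeated `get_stat` traces above all follow one pattern: call `accel_get_stats` over RPC and extract either a top-level counter (`jq -r .sequence_executed`) or a per-opcode counter (`jq` with `select()`). A self-contained sketch of that helper, using a canned JSON reply in place of the real `rpc_cmd accel_get_stats` output (assumption):

```shell
#!/usr/bin/env bash
# Sketch of chaining.sh's get_stat: parse accel_get_stats JSON with jq.
# stats_json stands in for the RPC reply; the field names match the traces.
set -euo pipefail

stats_json='{"sequence_executed": 13,
  "operations": [
    {"opcode": "encrypt", "executed": 2},
    {"opcode": "decrypt", "executed": 12},
    {"opcode": "copy",    "executed": 4}
  ]}'

get_stat() {
    local event=$1 opcode=${2:-}
    if [[ -z $opcode ]]; then
        # Top-level counter, e.g. sequence_executed
        jq -r ".$event" <<<"$stats_json"
    else
        # Per-opcode counter, same select() filter as the traces above
        jq -r ".operations[] | select(.opcode == \"$opcode\").$event" <<<"$stats_json"
    fi
}

get_stat sequence_executed   # 13
get_stat executed encrypt    # 2
get_stat executed decrypt    # 12
```

The test caches these values in the `stats` associative array so later `(( new == stats[...] + delta ))` checks can assert how many operations each I/O added.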
00:44:52.099 04:40:00 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:44:52.099 04:40:00 chaining -- bdev/chaining.sh@88 -- # dd if=/dev/urandom of=/tmp/tmp.x3dZIdntMg bs=1K count=64 00:44:52.099 64+0 records in 00:44:52.099 64+0 records out 00:44:52.099 65536 bytes (66 kB, 64 KiB) copied, 0.00105017 s, 62.4 MB/s 00:44:52.099 04:40:00 chaining -- bdev/chaining.sh@89 -- # spdk_dd --if /tmp/tmp.x3dZIdntMg --ob Nvme0n1 --bs 65536 --count 1 00:44:52.099 04:40:00 chaining -- bdev/chaining.sh@25 -- # local config 00:44:52.099 04:40:00 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:44:52.099 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:44:52.099 04:40:00 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:44:52.099 04:40:00 chaining -- bdev/chaining.sh@31 -- # config='{ 00:44:52.099 "subsystems": [ 00:44:52.099 { 00:44:52.099 "subsystem": "bdev", 00:44:52.099 "config": [ 00:44:52.100 { 00:44:52.100 "method": "bdev_nvme_attach_controller", 00:44:52.100 "params": { 00:44:52.100 "trtype": "tcp", 00:44:52.100 "adrfam": "IPv4", 00:44:52.100 "name": "Nvme0", 00:44:52.100 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:52.100 "traddr": "10.0.0.2", 00:44:52.100 "trsvcid": "4420" 00:44:52.100 } 00:44:52.100 }, 00:44:52.100 { 00:44:52.100 "method": "bdev_set_options", 00:44:52.100 "params": { 00:44:52.100 "bdev_auto_examine": false 00:44:52.100 } 00:44:52.100 } 00:44:52.100 ] 00:44:52.100 } 00:44:52.100 ] 00:44:52.100 }' 00:44:52.100 04:40:00 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --if /tmp/tmp.x3dZIdntMg --ob Nvme0n1 --bs 65536 --count 1 00:44:52.100 04:40:00 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:44:52.100 "subsystems": [ 00:44:52.100 { 00:44:52.100 
"subsystem": "bdev", 00:44:52.100 "config": [ 00:44:52.100 { 00:44:52.100 "method": "bdev_nvme_attach_controller", 00:44:52.100 "params": { 00:44:52.100 "trtype": "tcp", 00:44:52.100 "adrfam": "IPv4", 00:44:52.100 "name": "Nvme0", 00:44:52.100 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:52.100 "traddr": "10.0.0.2", 00:44:52.100 "trsvcid": "4420" 00:44:52.100 } 00:44:52.100 }, 00:44:52.100 { 00:44:52.100 "method": "bdev_set_options", 00:44:52.100 "params": { 00:44:52.100 "bdev_auto_examine": false 00:44:52.100 } 00:44:52.100 } 00:44:52.100 ] 00:44:52.100 } 00:44:52.100 ] 00:44:52.100 }' 00:44:52.100 [2024-07-23 04:40:00.778338] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:44:52.100 [2024-07-23 04:40:00.778453] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2947432 ] 00:44:52.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:52.359 EAL: Requested device 0000:3d:01.0 cannot be used 00:44:52.359 [message pair repeated for QAT devices 0000:3d:01.1 through 0000:3f:02.7] 00:44:52.359 [2024-07-23 04:40:01.004170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:52.618 [2024-07-23 04:40:01.288273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:55.455  Copying: 64/64 [kB] (average 62 MBps) 00:44:55.455 00:44:55.455 04:40:03 chaining -- bdev/chaining.sh@90 -- # get_stat sequence_executed 00:44:55.455 04:40:03 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:55.455 
04:40:03 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:44:55.455 04:40:03 chaining -- bdev/chaining.sh@39 -- # opcode= 00:44:55.455 04:40:03 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:55.455 04:40:03 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:44:55.455 04:40:03 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:44:55.455 04:40:03 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:44:55.455 04:40:03 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:55.455 04:40:03 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:55.455 04:40:03 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:55.455 04:40:03 chaining -- bdev/chaining.sh@90 -- # (( 13 == stats[sequence_executed] + 1 )) 00:44:55.455 04:40:03 chaining -- bdev/chaining.sh@91 -- # get_stat executed encrypt 00:44:55.455 04:40:03 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:55.455 04:40:03 chaining -- bdev/chaining.sh@39 -- # event=executed 00:44:55.455 04:40:03 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:44:55.455 04:40:03 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:55.455 04:40:03 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:44:55.455 04:40:03 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:44:55.455 04:40:03 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:44:55.455 04:40:03 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:55.455 04:40:03 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:55.455 04:40:03 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:55.455 04:40:03 chaining -- bdev/chaining.sh@91 -- # (( 2 == stats[encrypt_executed] + 2 )) 00:44:55.455 04:40:03 chaining -- bdev/chaining.sh@92 -- # get_stat executed decrypt 00:44:55.455 04:40:03 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:55.455 04:40:03 chaining -- 
bdev/chaining.sh@39 -- # event=executed 00:44:55.455 04:40:03 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:44:55.455 04:40:03 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:55.455 04:40:03 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:44:55.455 04:40:03 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:44:55.455 04:40:03 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:44:55.455 04:40:03 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:55.455 04:40:03 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:55.455 04:40:03 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:55.455 04:40:03 chaining -- bdev/chaining.sh@92 -- # (( 12 == stats[decrypt_executed] )) 00:44:55.455 04:40:03 chaining -- bdev/chaining.sh@95 -- # get_stat executed copy 00:44:55.455 04:40:03 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:55.455 04:40:03 chaining -- bdev/chaining.sh@39 -- # event=executed 00:44:55.455 04:40:03 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:44:55.455 04:40:03 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:55.455 04:40:03 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:44:55.455 04:40:03 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:44:55.455 04:40:03 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:55.455 04:40:03 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:55.455 04:40:03 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:44:55.455 04:40:03 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:55.455 04:40:03 chaining -- bdev/chaining.sh@95 -- # (( 4 == stats[copy_executed] )) 00:44:55.455 04:40:03 chaining -- bdev/chaining.sh@96 -- # update_stats 00:44:55.455 04:40:03 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:44:55.455 04:40:03 chaining -- bdev/chaining.sh@37 
-- # local event opcode rpc 00:44:55.455 04:40:03 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:44:55.455 04:40:03 chaining -- bdev/chaining.sh@39 -- # opcode= 00:44:55.455 04:40:03 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:55.455 04:40:03 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:44:55.455 04:40:03 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:44:55.455 04:40:03 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:44:55.455 04:40:03 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:55.455 04:40:03 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:55.455 04:40:03 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:55.455 04:40:03 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=13 00:44:55.455 04:40:03 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:44:55.456 04:40:03 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:55.456 04:40:03 chaining -- bdev/chaining.sh@39 -- # event=executed 00:44:55.456 04:40:03 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:44:55.456 04:40:03 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:55.456 04:40:03 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:44:55.456 04:40:04 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:44:55.456 04:40:04 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:44:55.456 04:40:04 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:55.456 04:40:04 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:55.456 04:40:04 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:55.456 04:40:04 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]=2 00:44:55.456 04:40:04 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:44:55.456 04:40:04 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:55.456 04:40:04 
chaining -- bdev/chaining.sh@39 -- # event=executed 00:44:55.456 04:40:04 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:44:55.456 04:40:04 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:55.456 04:40:04 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:44:55.456 04:40:04 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:44:55.456 04:40:04 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:44:55.456 04:40:04 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:55.456 04:40:04 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:55.456 04:40:04 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:55.456 04:40:04 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=12 00:44:55.456 04:40:04 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:44:55.456 04:40:04 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:55.456 04:40:04 chaining -- bdev/chaining.sh@39 -- # event=executed 00:44:55.456 04:40:04 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:44:55.456 04:40:04 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:55.456 04:40:04 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:44:55.456 04:40:04 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:44:55.456 04:40:04 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:44:55.456 04:40:04 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:55.456 04:40:04 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:55.456 04:40:04 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:55.456 04:40:04 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:44:55.456 04:40:04 chaining -- bdev/chaining.sh@99 -- # spdk_dd --of /tmp/tmp.YtJQRde0AX --ib Nvme0n1 --bs 65536 --count 1 00:44:55.456 04:40:04 chaining -- bdev/chaining.sh@25 -- # local config 00:44:55.456 
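Both `spdk_dd` invocations in this run build their bdev configuration the same way: `gen_nvme.sh` emits the `bdev_nvme_attach_controller` JSON, and a `jq` update-assignment appends a `bdev_set_options` entry (disabling auto-examine) before the result is piped to `spdk_dd -c /dev/fd/62`. A standalone sketch of that jq append idiom, where `base` is a trimmed stand-in for the `gen_nvme.sh` output (assumption):

```shell
#!/usr/bin/env bash
# Sketch of chaining.sh's config build: append a method to the bdev
# subsystem's config array with jq, then hand the JSON to the consumer.
set -euo pipefail

base='{"subsystems":[{"subsystem":"bdev","config":[
  {"method":"bdev_nvme_attach_controller","params":{"name":"Nvme0"}}]}]}'

# Same idiom as the trace: assigning to the index one past the current
# length (.config | length) appends the new entry to subsystems[0].config.
config=$(jq '.subsystems[0].config[.subsystems[0].config | length] |=
  {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' <<<"$base")

echo "$config" | jq -r '.subsystems[0].config[1].method'   # bdev_set_options
```

Disabling `bdev_auto_examine` keeps `spdk_dd` from probing the raw NVMe namespace for metadata, so the crypto bdev chain stays the only consumer of the device.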
04:40:04 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:44:55.456 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:44:55.456 04:40:04 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:44:55.456 04:40:04 chaining -- bdev/chaining.sh@31 -- # config='{ 00:44:55.456 "subsystems": [ 00:44:55.456 { 00:44:55.456 "subsystem": "bdev", 00:44:55.456 "config": [ 00:44:55.456 { 00:44:55.456 "method": "bdev_nvme_attach_controller", 00:44:55.456 "params": { 00:44:55.456 "trtype": "tcp", 00:44:55.456 "adrfam": "IPv4", 00:44:55.456 "name": "Nvme0", 00:44:55.456 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:55.456 "traddr": "10.0.0.2", 00:44:55.456 "trsvcid": "4420" 00:44:55.456 } 00:44:55.456 }, 00:44:55.456 { 00:44:55.456 "method": "bdev_set_options", 00:44:55.456 "params": { 00:44:55.456 "bdev_auto_examine": false 00:44:55.456 } 00:44:55.456 } 00:44:55.456 ] 00:44:55.456 } 00:44:55.456 ] 00:44:55.456 }' 00:44:55.456 04:40:04 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --of /tmp/tmp.YtJQRde0AX --ib Nvme0n1 --bs 65536 --count 1 00:44:55.456 04:40:04 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:44:55.456 "subsystems": [ 00:44:55.456 { 00:44:55.456 "subsystem": "bdev", 00:44:55.456 "config": [ 00:44:55.456 { 00:44:55.456 "method": "bdev_nvme_attach_controller", 00:44:55.456 "params": { 00:44:55.456 "trtype": "tcp", 00:44:55.456 "adrfam": "IPv4", 00:44:55.456 "name": "Nvme0", 00:44:55.456 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:55.456 "traddr": "10.0.0.2", 00:44:55.456 "trsvcid": "4420" 00:44:55.456 } 00:44:55.456 }, 00:44:55.456 { 00:44:55.456 "method": "bdev_set_options", 00:44:55.456 "params": { 00:44:55.456 "bdev_auto_examine": false 00:44:55.456 } 00:44:55.456 } 00:44:55.456 ] 
00:44:55.456 } 00:44:55.456 ] 00:44:55.456 }' 00:44:55.715 [2024-07-23 04:40:04.268827] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:44:55.715 [2024-07-23 04:40:04.268939] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2948051 ] 00:44:55.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:55.715 EAL: Requested device 0000:3d:01.0 cannot be used 00:44:55.715 [message pair repeated for QAT devices 0000:3d:01.1 through 0000:3f:02.7] 00:44:55.715 [2024-07-23 04:40:04.490923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:55.974 [2024-07-23 04:40:04.756410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:58.812  Copying: 64/64 [kB] (average 31 MBps) 00:44:58.812 00:44:58.812 04:40:07 chaining -- bdev/chaining.sh@100 -- # get_stat sequence_executed 00:44:58.812 04:40:07 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:58.812 04:40:07 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:44:58.812 04:40:07 chaining -- bdev/chaining.sh@39 -- # opcode= 00:44:58.812 04:40:07 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:58.812 04:40:07 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:44:58.812 04:40:07 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:44:58.812 04:40:07 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:44:58.812 04:40:07 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:58.812 04:40:07 
chaining -- common/autotest_common.sh@10 -- # set +x 00:44:58.812 04:40:07 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:58.812 04:40:07 chaining -- bdev/chaining.sh@100 -- # (( 14 == stats[sequence_executed] + 1 )) 00:44:58.812 04:40:07 chaining -- bdev/chaining.sh@101 -- # get_stat executed encrypt 00:44:58.812 04:40:07 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:58.812 04:40:07 chaining -- bdev/chaining.sh@39 -- # event=executed 00:44:58.812 04:40:07 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:44:58.812 04:40:07 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:58.812 04:40:07 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:44:58.812 04:40:07 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:44:58.812 04:40:07 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:58.812 04:40:07 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:58.812 04:40:07 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:44:58.812 04:40:07 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:58.812 04:40:07 chaining -- bdev/chaining.sh@101 -- # (( 2 == stats[encrypt_executed] )) 00:44:58.812 04:40:07 chaining -- bdev/chaining.sh@102 -- # get_stat executed decrypt 00:44:58.812 04:40:07 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:58.812 04:40:07 chaining -- bdev/chaining.sh@39 -- # event=executed 00:44:58.812 04:40:07 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:44:58.812 04:40:07 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:58.812 04:40:07 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:44:58.812 04:40:07 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:44:58.812 04:40:07 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:58.812 04:40:07 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:58.812 04:40:07 chaining -- bdev/chaining.sh@44 
-- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:44:58.812 04:40:07 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:58.812 04:40:07 chaining -- bdev/chaining.sh@102 -- # (( 14 == stats[decrypt_executed] + 2 )) 00:44:58.812 04:40:07 chaining -- bdev/chaining.sh@103 -- # get_stat executed copy 00:44:58.812 04:40:07 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:58.812 04:40:07 chaining -- bdev/chaining.sh@39 -- # event=executed 00:44:58.812 04:40:07 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:44:58.812 04:40:07 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:58.812 04:40:07 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:44:58.812 04:40:07 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:44:58.812 04:40:07 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:44:58.812 04:40:07 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:58.812 04:40:07 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:58.812 04:40:07 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:58.812 04:40:07 chaining -- bdev/chaining.sh@103 -- # (( 4 == stats[copy_executed] )) 00:44:58.812 04:40:07 chaining -- bdev/chaining.sh@104 -- # cmp /tmp/tmp.x3dZIdntMg /tmp/tmp.YtJQRde0AX 00:44:58.812 04:40:07 chaining -- bdev/chaining.sh@105 -- # spdk_dd --if /dev/zero --ob Nvme0n1 --bs 65536 --count 1 00:44:58.812 04:40:07 chaining -- bdev/chaining.sh@25 -- # local config 00:44:58.812 04:40:07 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:44:58.812 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:44:58.812 04:40:07 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:44:58.812 04:40:07 chaining -- 
bdev/chaining.sh@31 -- # config='{ 00:44:58.812 "subsystems": [ 00:44:58.812 { 00:44:58.812 "subsystem": "bdev", 00:44:58.812 "config": [ 00:44:58.812 { 00:44:58.812 "method": "bdev_nvme_attach_controller", 00:44:58.812 "params": { 00:44:58.812 "trtype": "tcp", 00:44:58.812 "adrfam": "IPv4", 00:44:58.812 "name": "Nvme0", 00:44:58.812 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:58.812 "traddr": "10.0.0.2", 00:44:58.812 "trsvcid": "4420" 00:44:58.812 } 00:44:58.812 }, 00:44:58.812 { 00:44:58.812 "method": "bdev_set_options", 00:44:58.812 "params": { 00:44:58.812 "bdev_auto_examine": false 00:44:58.812 } 00:44:58.812 } 00:44:58.812 ] 00:44:58.812 } 00:44:58.812 ] 00:44:58.812 }' 00:44:58.812 04:40:07 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --if /dev/zero --ob Nvme0n1 --bs 65536 --count 1 00:44:58.812 04:40:07 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:44:58.812 "subsystems": [ 00:44:58.812 { 00:44:58.812 "subsystem": "bdev", 00:44:58.812 "config": [ 00:44:58.812 { 00:44:58.812 "method": "bdev_nvme_attach_controller", 00:44:58.812 "params": { 00:44:58.812 "trtype": "tcp", 00:44:58.812 "adrfam": "IPv4", 00:44:58.813 "name": "Nvme0", 00:44:58.813 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:58.813 "traddr": "10.0.0.2", 00:44:58.813 "trsvcid": "4420" 00:44:58.813 } 00:44:58.813 }, 00:44:58.813 { 00:44:58.813 "method": "bdev_set_options", 00:44:58.813 "params": { 00:44:58.813 "bdev_auto_examine": false 00:44:58.813 } 00:44:58.813 } 00:44:58.813 ] 00:44:58.813 } 00:44:58.813 ] 00:44:58.813 }' 00:44:58.813 [2024-07-23 04:40:07.509625] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:44:58.813 [2024-07-23 04:40:07.509741] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2948560 ] 00:44:59.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:59.072 EAL: Requested device 0000:3d:01.0 cannot be used 00:44:59.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:59.072 EAL: Requested device 0000:3d:01.1 cannot be used 00:44:59.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:59.072 EAL: Requested device 0000:3d:01.2 cannot be used 00:44:59.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:59.072 EAL: Requested device 0000:3d:01.3 cannot be used 00:44:59.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:59.072 EAL: Requested device 0000:3d:01.4 cannot be used 00:44:59.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:59.072 EAL: Requested device 0000:3d:01.5 cannot be used 00:44:59.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:59.072 EAL: Requested device 0000:3d:01.6 cannot be used 00:44:59.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:59.072 EAL: Requested device 0000:3d:01.7 cannot be used 00:44:59.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:59.072 EAL: Requested device 0000:3d:02.0 cannot be used 00:44:59.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:59.072 EAL: Requested device 0000:3d:02.1 cannot be used 00:44:59.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:59.072 EAL: Requested device 0000:3d:02.2 cannot be used 00:44:59.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:59.072 EAL: Requested device 0000:3d:02.3 cannot be used 
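The spdk_dd invocations in this trace feed a config built by `gen_nvme.sh --json-with-subsystems` and then extended with a `bdev_set_options` entry through the jq filter `.subsystems[0].config[.subsystems[0].config | length] |= {...}` (a length-indexed assignment, i.e. an append). As a hedged illustration only — the inline `config` string is a trimmed stand-in, and python3 replaces jq so the sketch runs without it — the same append looks like:

```shell
# Minimal stand-in for the chaining.sh config step: append one
# bdev_set_options entry to subsystems[0].config, mirroring the jq
# filter  .subsystems[0].config[.subsystems[0].config | length] |= {...}
# The config string below is a trimmed, hypothetical sample.
config='{"subsystems":[{"subsystem":"bdev","config":[{"method":"bdev_nvme_attach_controller"}]}]}'

appended=$(printf '%s' "$config" | python3 -c '
import json, sys
cfg = json.load(sys.stdin)
# Same effect as the jq length-index assignment: append one element.
cfg["subsystems"][0]["config"].append(
    {"method": "bdev_set_options",
     "params": {"bdev_auto_examine": False}})
print(json.dumps(cfg))
')

# The appended entry is what disables auto-examine for the test run.
echo "$appended" | grep -c bdev_set_options
```

In the real script the resulting JSON is handed to `spdk_dd -c /dev/fd/62` via process substitution, which is why the full config is echoed into the trace above.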
00:44:59.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:59.072 EAL: Requested device 0000:3d:02.4 cannot be used 00:44:59.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:59.072 EAL: Requested device 0000:3d:02.5 cannot be used 00:44:59.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:59.072 EAL: Requested device 0000:3d:02.6 cannot be used 00:44:59.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:59.072 EAL: Requested device 0000:3d:02.7 cannot be used 00:44:59.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:59.072 EAL: Requested device 0000:3f:01.0 cannot be used 00:44:59.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:59.072 EAL: Requested device 0000:3f:01.1 cannot be used 00:44:59.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:59.072 EAL: Requested device 0000:3f:01.2 cannot be used 00:44:59.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:59.072 EAL: Requested device 0000:3f:01.3 cannot be used 00:44:59.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:59.072 EAL: Requested device 0000:3f:01.4 cannot be used 00:44:59.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:59.072 EAL: Requested device 0000:3f:01.5 cannot be used 00:44:59.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:59.072 EAL: Requested device 0000:3f:01.6 cannot be used 00:44:59.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:59.072 EAL: Requested device 0000:3f:01.7 cannot be used 00:44:59.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:59.072 EAL: Requested device 0000:3f:02.0 cannot be used 00:44:59.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:59.072 EAL: Requested device 0000:3f:02.1 cannot be used 00:44:59.072 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:59.072 EAL: Requested device 0000:3f:02.2 cannot be used 00:44:59.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:59.072 EAL: Requested device 0000:3f:02.3 cannot be used 00:44:59.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:59.072 EAL: Requested device 0000:3f:02.4 cannot be used 00:44:59.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:59.072 EAL: Requested device 0000:3f:02.5 cannot be used 00:44:59.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:59.072 EAL: Requested device 0000:3f:02.6 cannot be used 00:44:59.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:59.072 EAL: Requested device 0000:3f:02.7 cannot be used 00:44:59.072 [2024-07-23 04:40:07.734275] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:59.331 [2024-07-23 04:40:08.028232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:45:01.800  Copying: 64/64 [kB] (average 12 MBps) 00:45:01.800 00:45:01.800 04:40:10 chaining -- bdev/chaining.sh@106 -- # update_stats 00:45:01.800 04:40:10 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:45:01.800 04:40:10 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:01.800 04:40:10 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:45:01.800 04:40:10 chaining -- bdev/chaining.sh@39 -- # opcode= 00:45:01.800 04:40:10 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:45:01.800 04:40:10 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:45:01.800 04:40:10 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:45:01.800 04:40:10 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:45:01.800 04:40:10 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:01.800 04:40:10 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:01.800 04:40:10 chaining 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:01.800 04:40:10 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=15 00:45:01.800 04:40:10 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:45:01.800 04:40:10 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:01.800 04:40:10 chaining -- bdev/chaining.sh@39 -- # event=executed 00:45:01.800 04:40:10 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:45:01.800 04:40:10 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:45:01.800 04:40:10 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:45:01.800 04:40:10 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:45:01.800 04:40:10 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:01.800 04:40:10 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:01.800 04:40:10 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:45:01.800 04:40:10 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:01.800 04:40:10 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]=4 00:45:01.800 04:40:10 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:45:01.800 04:40:10 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:01.800 04:40:10 chaining -- bdev/chaining.sh@39 -- # event=executed 00:45:01.800 04:40:10 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:45:01.800 04:40:10 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:45:01.800 04:40:10 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:45:01.800 04:40:10 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:45:01.800 04:40:10 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:01.800 04:40:10 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:01.800 04:40:10 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:45:01.800 04:40:10 chaining -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:01.800 04:40:10 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=14 00:45:01.800 04:40:10 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:45:01.800 04:40:10 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:01.800 04:40:10 chaining -- bdev/chaining.sh@39 -- # event=executed 00:45:01.800 04:40:10 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:45:01.800 04:40:10 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:45:01.801 04:40:10 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:45:01.801 04:40:10 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:45:01.801 04:40:10 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:45:01.801 04:40:10 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:01.801 04:40:10 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:01.801 04:40:10 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:01.801 04:40:10 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:45:01.801 04:40:10 chaining -- bdev/chaining.sh@109 -- # spdk_dd --if /tmp/tmp.x3dZIdntMg --ob Nvme0n1 --bs 4096 --count 16 00:45:01.801 04:40:10 chaining -- bdev/chaining.sh@25 -- # local config 00:45:01.801 04:40:10 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:45:01.801 04:40:10 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:45:01.801 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:45:01.801 04:40:10 chaining -- bdev/chaining.sh@31 -- # config='{ 00:45:01.801 "subsystems": [ 00:45:01.801 { 00:45:01.801 "subsystem": "bdev", 00:45:01.801 "config": [ 00:45:01.801 { 00:45:01.801 "method": "bdev_nvme_attach_controller", 00:45:01.801 "params": 
{ 00:45:01.801 "trtype": "tcp", 00:45:01.801 "adrfam": "IPv4", 00:45:01.801 "name": "Nvme0", 00:45:01.801 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:01.801 "traddr": "10.0.0.2", 00:45:01.801 "trsvcid": "4420" 00:45:01.801 } 00:45:01.801 }, 00:45:01.801 { 00:45:01.801 "method": "bdev_set_options", 00:45:01.801 "params": { 00:45:01.801 "bdev_auto_examine": false 00:45:01.801 } 00:45:01.801 } 00:45:01.801 ] 00:45:01.801 } 00:45:01.801 ] 00:45:01.801 }' 00:45:01.801 04:40:10 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --if /tmp/tmp.x3dZIdntMg --ob Nvme0n1 --bs 4096 --count 16 00:45:01.801 04:40:10 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:45:01.801 "subsystems": [ 00:45:01.801 { 00:45:01.801 "subsystem": "bdev", 00:45:01.801 "config": [ 00:45:01.801 { 00:45:01.801 "method": "bdev_nvme_attach_controller", 00:45:01.801 "params": { 00:45:01.801 "trtype": "tcp", 00:45:01.801 "adrfam": "IPv4", 00:45:01.801 "name": "Nvme0", 00:45:01.801 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:01.801 "traddr": "10.0.0.2", 00:45:01.801 "trsvcid": "4420" 00:45:01.801 } 00:45:01.801 }, 00:45:01.801 { 00:45:01.801 "method": "bdev_set_options", 00:45:01.801 "params": { 00:45:01.801 "bdev_auto_examine": false 00:45:01.801 } 00:45:01.801 } 00:45:01.801 ] 00:45:01.801 } 00:45:01.801 ] 00:45:01.801 }' 00:45:02.060 [2024-07-23 04:40:10.660068] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:45:02.060 [2024-07-23 04:40:10.660192] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2949037 ] 00:45:02.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.060 EAL: Requested device 0000:3d:01.0 cannot be used 00:45:02.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.060 EAL: Requested device 0000:3d:01.1 cannot be used 00:45:02.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.060 EAL: Requested device 0000:3d:01.2 cannot be used 00:45:02.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.060 EAL: Requested device 0000:3d:01.3 cannot be used 00:45:02.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.060 EAL: Requested device 0000:3d:01.4 cannot be used 00:45:02.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.060 EAL: Requested device 0000:3d:01.5 cannot be used 00:45:02.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.060 EAL: Requested device 0000:3d:01.6 cannot be used 00:45:02.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.060 EAL: Requested device 0000:3d:01.7 cannot be used 00:45:02.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.060 EAL: Requested device 0000:3d:02.0 cannot be used 00:45:02.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.060 EAL: Requested device 0000:3d:02.1 cannot be used 00:45:02.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.060 EAL: Requested device 0000:3d:02.2 cannot be used 00:45:02.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.060 EAL: Requested device 0000:3d:02.3 cannot be used 
00:45:02.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.060 EAL: Requested device 0000:3d:02.4 cannot be used 00:45:02.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.060 EAL: Requested device 0000:3d:02.5 cannot be used 00:45:02.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.060 EAL: Requested device 0000:3d:02.6 cannot be used 00:45:02.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.060 EAL: Requested device 0000:3d:02.7 cannot be used 00:45:02.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.060 EAL: Requested device 0000:3f:01.0 cannot be used 00:45:02.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.060 EAL: Requested device 0000:3f:01.1 cannot be used 00:45:02.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.060 EAL: Requested device 0000:3f:01.2 cannot be used 00:45:02.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.060 EAL: Requested device 0000:3f:01.3 cannot be used 00:45:02.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.060 EAL: Requested device 0000:3f:01.4 cannot be used 00:45:02.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.060 EAL: Requested device 0000:3f:01.5 cannot be used 00:45:02.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.060 EAL: Requested device 0000:3f:01.6 cannot be used 00:45:02.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.060 EAL: Requested device 0000:3f:01.7 cannot be used 00:45:02.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.060 EAL: Requested device 0000:3f:02.0 cannot be used 00:45:02.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.060 EAL: Requested device 0000:3f:02.1 cannot be used 00:45:02.060 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.060 EAL: Requested device 0000:3f:02.2 cannot be used 00:45:02.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.060 EAL: Requested device 0000:3f:02.3 cannot be used 00:45:02.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.060 EAL: Requested device 0000:3f:02.4 cannot be used 00:45:02.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.060 EAL: Requested device 0000:3f:02.5 cannot be used 00:45:02.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.060 EAL: Requested device 0000:3f:02.6 cannot be used 00:45:02.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.060 EAL: Requested device 0000:3f:02.7 cannot be used 00:45:02.388 [2024-07-23 04:40:10.884900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:02.671 [2024-07-23 04:40:11.173738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:45:05.147  Copying: 64/64 [kB] (average 12 MBps) 00:45:05.147 00:45:05.147 04:40:13 chaining -- bdev/chaining.sh@110 -- # get_stat sequence_executed 00:45:05.147 04:40:13 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:05.147 04:40:13 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:45:05.147 04:40:13 chaining -- bdev/chaining.sh@39 -- # opcode= 00:45:05.147 04:40:13 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:45:05.147 04:40:13 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:45:05.147 04:40:13 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:45:05.147 04:40:13 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:45:05.147 04:40:13 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:05.147 04:40:13 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:05.147 04:40:13 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:05.147 04:40:13 
chaining -- bdev/chaining.sh@110 -- # (( 31 == stats[sequence_executed] + 16 )) 00:45:05.147 04:40:13 chaining -- bdev/chaining.sh@111 -- # get_stat executed encrypt 00:45:05.147 04:40:13 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:05.147 04:40:13 chaining -- bdev/chaining.sh@39 -- # event=executed 00:45:05.147 04:40:13 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:45:05.147 04:40:13 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:45:05.147 04:40:13 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:45:05.147 04:40:13 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:45:05.147 04:40:13 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:45:05.147 04:40:13 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:05.147 04:40:13 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:05.147 04:40:13 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:05.147 04:40:13 chaining -- bdev/chaining.sh@111 -- # (( 36 == stats[encrypt_executed] + 32 )) 00:45:05.147 04:40:13 chaining -- bdev/chaining.sh@112 -- # get_stat executed decrypt 00:45:05.147 04:40:13 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:05.147 04:40:13 chaining -- bdev/chaining.sh@39 -- # event=executed 00:45:05.147 04:40:13 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:45:05.147 04:40:13 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:45:05.147 04:40:13 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:45:05.147 04:40:13 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:45:05.147 04:40:13 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:45:05.147 04:40:13 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:05.147 04:40:13 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:05.147 04:40:13 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:45:05.147 04:40:13 chaining -- bdev/chaining.sh@112 -- # (( 14 == stats[decrypt_executed] )) 00:45:05.147 04:40:13 chaining -- bdev/chaining.sh@113 -- # get_stat executed copy 00:45:05.147 04:40:13 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:05.147 04:40:13 chaining -- bdev/chaining.sh@39 -- # event=executed 00:45:05.147 04:40:13 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:45:05.147 04:40:13 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:45:05.147 04:40:13 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:45:05.147 04:40:13 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:45:05.147 04:40:13 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:05.147 04:40:13 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:05.147 04:40:13 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:45:05.147 04:40:13 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:05.147 04:40:13 chaining -- bdev/chaining.sh@113 -- # (( 4 == stats[copy_executed] )) 00:45:05.147 04:40:13 chaining -- bdev/chaining.sh@114 -- # update_stats 00:45:05.147 04:40:13 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:45:05.147 04:40:13 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:05.147 04:40:13 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:45:05.147 04:40:13 chaining -- bdev/chaining.sh@39 -- # opcode= 00:45:05.147 04:40:13 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:45:05.147 04:40:13 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:45:05.147 04:40:13 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:45:05.147 04:40:13 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:05.147 04:40:13 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:05.147 04:40:13 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:45:05.147 04:40:13 chaining -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:05.147 04:40:13 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=31 00:45:05.147 04:40:13 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:45:05.147 04:40:13 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:05.148 04:40:13 chaining -- bdev/chaining.sh@39 -- # event=executed 00:45:05.148 04:40:13 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:45:05.148 04:40:13 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:45:05.148 04:40:13 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:45:05.148 04:40:13 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:45:05.148 04:40:13 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:45:05.148 04:40:13 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:05.148 04:40:13 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:05.148 04:40:13 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:05.148 04:40:13 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]=36 00:45:05.148 04:40:13 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:45:05.148 04:40:13 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:05.148 04:40:13 chaining -- bdev/chaining.sh@39 -- # event=executed 00:45:05.148 04:40:13 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:45:05.148 04:40:13 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:45:05.148 04:40:13 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:45:05.148 04:40:13 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:45:05.148 04:40:13 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:05.148 04:40:13 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:05.148 04:40:13 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:45:05.148 04:40:13 chaining -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:05.148 04:40:13 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=14 00:45:05.148 04:40:13 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:45:05.148 04:40:13 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:05.148 04:40:13 chaining -- bdev/chaining.sh@39 -- # event=executed 00:45:05.148 04:40:13 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:45:05.148 04:40:13 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:45:05.148 04:40:13 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:45:05.148 04:40:13 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:45:05.148 04:40:13 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:45:05.148 04:40:13 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:05.148 04:40:13 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:05.148 04:40:13 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:05.148 04:40:13 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:45:05.148 04:40:13 chaining -- bdev/chaining.sh@117 -- # : 00:45:05.148 04:40:13 chaining -- bdev/chaining.sh@118 -- # spdk_dd --of /tmp/tmp.YtJQRde0AX --ib Nvme0n1 --bs 4096 --count 16 00:45:05.148 04:40:13 chaining -- bdev/chaining.sh@25 -- # local config 00:45:05.148 04:40:13 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:45:05.148 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:45:05.148 04:40:13 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:45:05.408 04:40:13 chaining -- bdev/chaining.sh@31 -- # config='{ 00:45:05.408 "subsystems": [ 00:45:05.408 { 00:45:05.408 "subsystem": "bdev", 00:45:05.408 "config": [ 00:45:05.408 { 00:45:05.408 
"method": "bdev_nvme_attach_controller", 00:45:05.408 "params": { 00:45:05.408 "trtype": "tcp", 00:45:05.408 "adrfam": "IPv4", 00:45:05.408 "name": "Nvme0", 00:45:05.408 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:05.408 "traddr": "10.0.0.2", 00:45:05.408 "trsvcid": "4420" 00:45:05.408 } 00:45:05.408 }, 00:45:05.408 { 00:45:05.408 "method": "bdev_set_options", 00:45:05.408 "params": { 00:45:05.408 "bdev_auto_examine": false 00:45:05.408 } 00:45:05.408 } 00:45:05.408 ] 00:45:05.408 } 00:45:05.408 ] 00:45:05.408 }' 00:45:05.408 04:40:13 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --of /tmp/tmp.YtJQRde0AX --ib Nvme0n1 --bs 4096 --count 16 00:45:05.408 04:40:13 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:45:05.408 "subsystems": [ 00:45:05.408 { 00:45:05.408 "subsystem": "bdev", 00:45:05.408 "config": [ 00:45:05.408 { 00:45:05.408 "method": "bdev_nvme_attach_controller", 00:45:05.408 "params": { 00:45:05.408 "trtype": "tcp", 00:45:05.408 "adrfam": "IPv4", 00:45:05.408 "name": "Nvme0", 00:45:05.408 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:05.408 "traddr": "10.0.0.2", 00:45:05.408 "trsvcid": "4420" 00:45:05.408 } 00:45:05.408 }, 00:45:05.408 { 00:45:05.408 "method": "bdev_set_options", 00:45:05.408 "params": { 00:45:05.408 "bdev_auto_examine": false 00:45:05.408 } 00:45:05.408 } 00:45:05.408 ] 00:45:05.408 } 00:45:05.408 ] 00:45:05.408 }' 00:45:05.408 [2024-07-23 04:40:14.050215] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:45:05.408 [2024-07-23 04:40:14.050336] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2949621 ] 00:45:05.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:05.408 EAL: Requested device 0000:3d:01.0 cannot be used 00:45:05.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:05.408 EAL: Requested device 0000:3d:01.1 cannot be used 00:45:05.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:05.408 EAL: Requested device 0000:3d:01.2 cannot be used 00:45:05.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:05.408 EAL: Requested device 0000:3d:01.3 cannot be used 00:45:05.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:05.408 EAL: Requested device 0000:3d:01.4 cannot be used 00:45:05.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:05.408 EAL: Requested device 0000:3d:01.5 cannot be used 00:45:05.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:05.408 EAL: Requested device 0000:3d:01.6 cannot be used 00:45:05.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:05.408 EAL: Requested device 0000:3d:01.7 cannot be used 00:45:05.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:05.408 EAL: Requested device 0000:3d:02.0 cannot be used 00:45:05.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:05.408 EAL: Requested device 0000:3d:02.1 cannot be used 00:45:05.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:05.408 EAL: Requested device 0000:3d:02.2 cannot be used 00:45:05.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:05.408 EAL: Requested device 0000:3d:02.3 cannot be used 
00:45:05.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:05.408 EAL: Requested device 0000:3d:02.4 cannot be used 00:45:05.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:05.408 EAL: Requested device 0000:3d:02.5 cannot be used 00:45:05.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:05.408 EAL: Requested device 0000:3d:02.6 cannot be used 00:45:05.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:05.408 EAL: Requested device 0000:3d:02.7 cannot be used 00:45:05.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:05.408 EAL: Requested device 0000:3f:01.0 cannot be used 00:45:05.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:05.408 EAL: Requested device 0000:3f:01.1 cannot be used 00:45:05.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:05.408 EAL: Requested device 0000:3f:01.2 cannot be used 00:45:05.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:05.408 EAL: Requested device 0000:3f:01.3 cannot be used 00:45:05.668 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:05.668 EAL: Requested device 0000:3f:01.4 cannot be used 00:45:05.668 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:05.668 EAL: Requested device 0000:3f:01.5 cannot be used 00:45:05.668 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:05.668 EAL: Requested device 0000:3f:01.6 cannot be used 00:45:05.668 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:05.668 EAL: Requested device 0000:3f:01.7 cannot be used 00:45:05.668 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:05.668 EAL: Requested device 0000:3f:02.0 cannot be used 00:45:05.668 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:05.668 EAL: Requested device 0000:3f:02.1 cannot be used 00:45:05.668 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:05.668 EAL: Requested device 0000:3f:02.2 cannot be used 00:45:05.668 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:05.668 EAL: Requested device 0000:3f:02.3 cannot be used 00:45:05.668 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:05.668 EAL: Requested device 0000:3f:02.4 cannot be used 00:45:05.668 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:05.668 EAL: Requested device 0000:3f:02.5 cannot be used 00:45:05.668 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:05.668 EAL: Requested device 0000:3f:02.6 cannot be used 00:45:05.668 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:05.668 EAL: Requested device 0000:3f:02.7 cannot be used 00:45:05.668 [2024-07-23 04:40:14.276272] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:05.927 [2024-07-23 04:40:14.559868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:45:08.774  Copying: 64/64 [kB] (average 503 kBps) 00:45:08.774 00:45:08.774 04:40:17 chaining -- bdev/chaining.sh@119 -- # get_stat sequence_executed 00:45:08.774 04:40:17 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:08.774 04:40:17 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:45:08.774 04:40:17 chaining -- bdev/chaining.sh@39 -- # opcode= 00:45:08.774 04:40:17 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:45:08.774 04:40:17 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:45:08.774 04:40:17 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:45:08.774 04:40:17 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:45:08.774 04:40:17 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:08.774 04:40:17 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:08.774 04:40:17 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:08.774 04:40:17 
chaining -- bdev/chaining.sh@119 -- # (( 47 == stats[sequence_executed] + 16 )) 00:45:08.774 04:40:17 chaining -- bdev/chaining.sh@120 -- # get_stat executed encrypt 00:45:08.774 04:40:17 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:08.774 04:40:17 chaining -- bdev/chaining.sh@39 -- # event=executed 00:45:08.774 04:40:17 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:45:08.774 04:40:17 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:45:08.774 04:40:17 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:45:08.774 04:40:17 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:45:08.774 04:40:17 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:45:08.774 04:40:17 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:08.774 04:40:17 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:08.774 04:40:17 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:08.774 04:40:17 chaining -- bdev/chaining.sh@120 -- # (( 36 == stats[encrypt_executed] )) 00:45:08.774 04:40:17 chaining -- bdev/chaining.sh@121 -- # get_stat executed decrypt 00:45:08.774 04:40:17 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:08.774 04:40:17 chaining -- bdev/chaining.sh@39 -- # event=executed 00:45:08.774 04:40:17 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:45:08.774 04:40:17 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:45:08.774 04:40:17 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:45:08.774 04:40:17 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:45:08.774 04:40:17 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:45:08.774 04:40:17 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:08.774 04:40:17 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:08.774 04:40:17 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:45:08.774 04:40:17 chaining -- bdev/chaining.sh@121 -- # (( 46 == stats[decrypt_executed] + 32 )) 00:45:08.774 04:40:17 chaining -- bdev/chaining.sh@122 -- # get_stat executed copy 00:45:08.774 04:40:17 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:08.774 04:40:17 chaining -- bdev/chaining.sh@39 -- # event=executed 00:45:08.774 04:40:17 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:45:08.774 04:40:17 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:45:08.774 04:40:17 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:45:08.774 04:40:17 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:45:08.774 04:40:17 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:45:08.774 04:40:17 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:08.774 04:40:17 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:08.774 04:40:17 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:08.774 04:40:17 chaining -- bdev/chaining.sh@122 -- # (( 4 == stats[copy_executed] )) 00:45:08.774 04:40:17 chaining -- bdev/chaining.sh@123 -- # cmp /tmp/tmp.x3dZIdntMg /tmp/tmp.YtJQRde0AX 00:45:08.774 04:40:17 chaining -- bdev/chaining.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:45:08.774 04:40:17 chaining -- bdev/chaining.sh@126 -- # tgtcleanup 00:45:08.774 04:40:17 chaining -- bdev/chaining.sh@58 -- # rm -f /tmp/tmp.x3dZIdntMg /tmp/tmp.YtJQRde0AX 00:45:08.774 04:40:17 chaining -- bdev/chaining.sh@59 -- # nvmftestfini 00:45:08.774 04:40:17 chaining -- nvmf/common.sh@488 -- # nvmfcleanup 00:45:08.774 04:40:17 chaining -- nvmf/common.sh@117 -- # sync 00:45:08.774 04:40:17 chaining -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:45:08.774 04:40:17 chaining -- nvmf/common.sh@120 -- # set +e 00:45:08.774 04:40:17 chaining -- nvmf/common.sh@121 -- # for i in {1..20} 00:45:08.774 04:40:17 chaining -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:45:08.774 rmmod nvme_tcp 
00:45:08.774 rmmod nvme_fabrics 00:45:08.774 rmmod nvme_keyring 00:45:08.774 04:40:17 chaining -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:45:08.774 04:40:17 chaining -- nvmf/common.sh@124 -- # set -e 00:45:08.774 04:40:17 chaining -- nvmf/common.sh@125 -- # return 0 00:45:08.774 04:40:17 chaining -- nvmf/common.sh@489 -- # '[' -n 2947119 ']' 00:45:08.774 04:40:17 chaining -- nvmf/common.sh@490 -- # killprocess 2947119 00:45:08.774 04:40:17 chaining -- common/autotest_common.sh@948 -- # '[' -z 2947119 ']' 00:45:08.774 04:40:17 chaining -- common/autotest_common.sh@952 -- # kill -0 2947119 00:45:08.774 04:40:17 chaining -- common/autotest_common.sh@953 -- # uname 00:45:08.774 04:40:17 chaining -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:45:08.774 04:40:17 chaining -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2947119 00:45:09.034 04:40:17 chaining -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:45:09.034 04:40:17 chaining -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:45:09.034 04:40:17 chaining -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2947119' 00:45:09.034 killing process with pid 2947119 00:45:09.034 04:40:17 chaining -- common/autotest_common.sh@967 -- # kill 2947119 00:45:09.034 04:40:17 chaining -- common/autotest_common.sh@972 -- # wait 2947119 00:45:10.939 04:40:19 chaining -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:45:10.939 04:40:19 chaining -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:45:10.939 04:40:19 chaining -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:45:10.939 04:40:19 chaining -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:45:10.939 04:40:19 chaining -- nvmf/common.sh@278 -- # remove_spdk_ns 00:45:10.939 04:40:19 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:10.940 04:40:19 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 
00:45:10.940 04:40:19 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:12.846 04:40:21 chaining -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:45:12.846 04:40:21 chaining -- bdev/chaining.sh@129 -- # trap 'bperfcleanup; exit 1' SIGINT SIGTERM EXIT 00:45:12.846 04:40:21 chaining -- bdev/chaining.sh@132 -- # bperfpid=2950837 00:45:12.846 04:40:21 chaining -- bdev/chaining.sh@134 -- # waitforlisten 2950837 00:45:12.846 04:40:21 chaining -- common/autotest_common.sh@829 -- # '[' -z 2950837 ']' 00:45:12.846 04:40:21 chaining -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:12.846 04:40:21 chaining -- common/autotest_common.sh@834 -- # local max_retries=100 00:45:12.846 04:40:21 chaining -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:12.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:12.846 04:40:21 chaining -- common/autotest_common.sh@838 -- # xtrace_disable 00:45:12.846 04:40:21 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:12.846 04:40:21 chaining -- bdev/chaining.sh@131 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 5 -w verify -o 4096 -q 256 --wait-for-rpc -z 00:45:12.846 [2024-07-23 04:40:21.570566] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:45:12.846 [2024-07-23 04:40:21.570696] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2950837 ] 
00:45:13.106 [2024-07-23 04:40:21.799527] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:13.365 [2024-07-23 04:40:22.088940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:45:13.933 04:40:22 chaining -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:45:13.933 04:40:22 chaining -- common/autotest_common.sh@862 -- # return 0 00:45:13.933 04:40:22 chaining -- bdev/chaining.sh@135 -- # rpc_cmd 00:45:13.933 04:40:22 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:13.933 04:40:22 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:14.501 malloc0 00:45:14.501 true 00:45:14.501 true 00:45:14.501 [2024-07-23 04:40:23.012552] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:45:14.501 crypto0 00:45:14.501 [2024-07-23 04:40:23.020593] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key1" 00:45:14.501 crypto1 00:45:14.501 04:40:23 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:14.501 04:40:23 chaining -- bdev/chaining.sh@145 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py 
perform_tests 00:45:14.501 Running I/O for 5 seconds... 00:45:19.766 00:45:19.766 Latency(us) 00:45:19.766 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:19.766 Job: crypto1 (Core Mask 0x1, workload: verify, depth: 256, IO size: 4096) 00:45:19.766 Verification LBA range: start 0x0 length 0x2000 00:45:19.766 crypto1 : 5.01 11420.40 44.61 0.00 0.00 22346.39 5138.02 14470.35 00:45:19.766 =================================================================================================================== 00:45:19.766 Total : 11420.40 44.61 0.00 0.00 22346.39 5138.02 14470.35 00:45:19.766 0 00:45:19.766 04:40:28 chaining -- bdev/chaining.sh@146 -- # killprocess 2950837 00:45:19.766 04:40:28 chaining -- common/autotest_common.sh@948 -- # '[' -z 2950837 ']' 00:45:19.766 04:40:28 chaining -- common/autotest_common.sh@952 -- # kill -0 2950837 00:45:19.766 04:40:28 chaining -- common/autotest_common.sh@953 -- # uname 00:45:19.766 04:40:28 chaining -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:45:19.766 04:40:28 chaining -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2950837 00:45:19.766 04:40:28 chaining -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:45:19.766 04:40:28 chaining -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:45:19.766 04:40:28 chaining -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2950837' 00:45:19.766 killing process with pid 2950837 00:45:19.766 04:40:28 chaining -- common/autotest_common.sh@967 -- # kill 2950837 00:45:19.766 Received shutdown signal, test time was about 5.000000 seconds 00:45:19.766 00:45:19.766 Latency(us) 00:45:19.766 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:19.766 =================================================================================================================== 00:45:19.766 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:45:19.766 04:40:28 chaining -- 
common/autotest_common.sh@972 -- # wait 2950837 00:45:21.703 04:40:30 chaining -- bdev/chaining.sh@152 -- # bperfpid=2952156 00:45:21.703 04:40:30 chaining -- bdev/chaining.sh@151 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 5 -w verify -o 4096 -q 256 --wait-for-rpc -z 00:45:21.703 04:40:30 chaining -- bdev/chaining.sh@154 -- # waitforlisten 2952156 00:45:21.703 04:40:30 chaining -- common/autotest_common.sh@829 -- # '[' -z 2952156 ']' 00:45:21.703 04:40:30 chaining -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:21.703 04:40:30 chaining -- common/autotest_common.sh@834 -- # local max_retries=100 00:45:21.703 04:40:30 chaining -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:21.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:21.703 04:40:30 chaining -- common/autotest_common.sh@838 -- # xtrace_disable 00:45:21.703 04:40:30 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:21.703 [2024-07-23 04:40:30.091308] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:45:21.703 [2024-07-23 04:40:30.091426] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2952156 ] 
00:45:21.704 [2024-07-23 04:40:30.293311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:21.978 [2024-07-23 04:40:30.588214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:45:22.237 04:40:30 chaining -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:45:22.237 04:40:30 chaining -- common/autotest_common.sh@862 -- # return 0 00:45:22.237 04:40:30 chaining -- bdev/chaining.sh@155 -- # rpc_cmd 00:45:22.237 04:40:30 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:22.237 04:40:30 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:22.804 malloc0 00:45:22.804 true 00:45:22.804 true 00:45:22.804 [2024-07-23 04:40:31.558145] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:45:22.804 [2024-07-23 04:40:31.558210] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:45:22.804 [2024-07-23 04:40:31.558238] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:45:22.804 [2024-07-23 04:40:31.558257] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:45:22.804 [2024-07-23 
04:40:31.559833] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:45:22.804 [2024-07-23 04:40:31.559872] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:45:22.804 pt0 00:45:22.804 [2024-07-23 04:40:31.566190] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:45:22.804 crypto0 00:45:22.804 [2024-07-23 04:40:31.574192] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key1" 00:45:22.804 crypto1 00:45:22.804 04:40:31 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:22.804 04:40:31 chaining -- bdev/chaining.sh@166 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:45:23.063 Running I/O for 5 seconds... 00:45:28.330 00:45:28.330 Latency(us) 00:45:28.330 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:28.330 Job: crypto1 (Core Mask 0x1, workload: verify, depth: 256, IO size: 4096) 00:45:28.330 Verification LBA range: start 0x0 length 0x2000 00:45:28.330 crypto1 : 5.01 8791.68 34.34 0.00 0.00 29029.32 1743.26 17616.08 00:45:28.330 =================================================================================================================== 00:45:28.330 Total : 8791.68 34.34 0.00 0.00 29029.32 1743.26 17616.08 00:45:28.330 0 00:45:28.330 04:40:36 chaining -- bdev/chaining.sh@167 -- # killprocess 2952156 00:45:28.330 04:40:36 chaining -- common/autotest_common.sh@948 -- # '[' -z 2952156 ']' 00:45:28.330 04:40:36 chaining -- common/autotest_common.sh@952 -- # kill -0 2952156 00:45:28.330 04:40:36 chaining -- common/autotest_common.sh@953 -- # uname 00:45:28.330 04:40:36 chaining -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:45:28.330 04:40:36 chaining -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2952156 00:45:28.330 04:40:36 chaining -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:45:28.330 04:40:36 chaining 
-- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:45:28.330 04:40:36 chaining -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2952156' 00:45:28.330 killing process with pid 2952156 00:45:28.330 04:40:36 chaining -- common/autotest_common.sh@967 -- # kill 2952156 00:45:28.330 Received shutdown signal, test time was about 5.000000 seconds 00:45:28.330 00:45:28.330 Latency(us) 00:45:28.330 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:28.330 =================================================================================================================== 00:45:28.330 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:45:28.330 04:40:36 chaining -- common/autotest_common.sh@972 -- # wait 2952156 00:45:30.233 04:40:38 chaining -- bdev/chaining.sh@169 -- # trap - SIGINT SIGTERM EXIT 00:45:30.233 04:40:38 chaining -- bdev/chaining.sh@170 -- # killprocess 2952156 00:45:30.233 04:40:38 chaining -- common/autotest_common.sh@948 -- # '[' -z 2952156 ']' 00:45:30.233 04:40:38 chaining -- common/autotest_common.sh@952 -- # kill -0 2952156 00:45:30.233 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2952156) - No such process 00:45:30.233 04:40:38 chaining -- common/autotest_common.sh@975 -- # echo 'Process with pid 2952156 is not found' 00:45:30.233 Process with pid 2952156 is not found 00:45:30.233 04:40:38 chaining -- bdev/chaining.sh@171 -- # wait 2952156 00:45:30.233 04:40:38 chaining -- bdev/chaining.sh@175 -- # nvmftestinit 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@448 -- # prepare_net_devs 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@410 -- # local -g is_hw=no 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@412 -- # remove_spdk_ns 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@628 
-- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:30.233 04:40:38 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:30.233 04:40:38 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@285 -- # xtrace_disable 00:45:30.233 04:40:38 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@291 -- # pci_devs=() 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@291 -- # local -a pci_devs 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@292 -- # pci_net_devs=() 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@293 -- # pci_drivers=() 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@293 -- # local -A pci_drivers 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@295 -- # net_devs=() 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@295 -- # local -ga net_devs 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@296 -- # e810=() 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@296 -- # local -ga e810 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@297 -- # x722=() 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@297 -- # local -ga x722 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@298 -- # mlx=() 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@298 -- # local -ga mlx 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:45:30.233 04:40:38 
chaining -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:20:00.0 (0x8086 - 0x159b)' 00:45:30.233 Found 0000:20:00.0 (0x8086 - 0x159b) 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:45:30.233 
04:40:38 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:20:00.1 (0x8086 - 0x159b)' 00:45:30.233 Found 0000:20:00.1 (0x8086 - 0x159b) 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:20:00.0: cvl_0_0' 00:45:30.233 Found net devices under 0000:20:00.0: cvl_0_0 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 
00:45:30.233 04:40:38 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:20:00.1: cvl_0_1' 00:45:30.233 Found net devices under 0000:20:00.1: cvl_0_1 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@414 -- # is_hw=yes 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:45:30.233 04:40:38 chaining -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:45:30.233 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:45:30.233 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:45:30.233 00:45:30.233 --- 10.0.0.2 ping statistics --- 00:45:30.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:30.233 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:45:30.233 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:45:30.233 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:45:30.233 00:45:30.233 --- 10.0.0.1 ping statistics --- 00:45:30.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:30.233 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@422 -- # return 0 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:45:30.233 04:40:38 chaining -- bdev/chaining.sh@176 -- # nvmfappstart -m 0x2 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:45:30.233 04:40:38 chaining -- common/autotest_common.sh@722 -- # xtrace_disable 00:45:30.233 04:40:38 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@481 -- # nvmfpid=2953722 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@482 -- # waitforlisten 2953722 00:45:30.233 04:40:38 chaining -- common/autotest_common.sh@829 -- # '[' -z 2953722 ']' 00:45:30.233 04:40:38 chaining -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:30.233 04:40:38 chaining -- common/autotest_common.sh@834 -- # local max_retries=100 00:45:30.233 04:40:38 chaining -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:45:30.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:30.233 04:40:38 chaining -- common/autotest_common.sh@838 -- # xtrace_disable 00:45:30.233 04:40:38 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:30.233 04:40:38 chaining -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:45:30.492 [2024-07-23 04:40:39.090156] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:45:30.492 [2024-07-23 04:40:39.090275] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:30.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:30.492 EAL: Requested device 0000:3d:01.0 cannot be used 00:45:30.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:30.492 EAL: Requested device 0000:3d:01.1 cannot be used 00:45:30.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:30.492 EAL: Requested device 0000:3d:01.2 cannot be used 00:45:30.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:30.492 EAL: Requested device 0000:3d:01.3 cannot be used 00:45:30.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:30.492 EAL: Requested device 0000:3d:01.4 cannot be used 00:45:30.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:30.492 EAL: Requested device 0000:3d:01.5 cannot be used 00:45:30.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:30.492 EAL: Requested device 0000:3d:01.6 cannot be used 00:45:30.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:30.492 EAL: Requested device 0000:3d:01.7 cannot be used 
00:45:30.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:30.492 EAL: Requested device 0000:3d:02.0 cannot be used 00:45:30.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:30.492 EAL: Requested device 0000:3d:02.1 cannot be used 00:45:30.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:30.492 EAL: Requested device 0000:3d:02.2 cannot be used 00:45:30.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:30.492 EAL: Requested device 0000:3d:02.3 cannot be used 00:45:30.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:30.492 EAL: Requested device 0000:3d:02.4 cannot be used 00:45:30.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:30.492 EAL: Requested device 0000:3d:02.5 cannot be used 00:45:30.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:30.492 EAL: Requested device 0000:3d:02.6 cannot be used 00:45:30.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:30.492 EAL: Requested device 0000:3d:02.7 cannot be used 00:45:30.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:30.492 EAL: Requested device 0000:3f:01.0 cannot be used 00:45:30.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:30.492 EAL: Requested device 0000:3f:01.1 cannot be used 00:45:30.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:30.493 EAL: Requested device 0000:3f:01.2 cannot be used 00:45:30.493 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:30.493 EAL: Requested device 0000:3f:01.3 cannot be used 00:45:30.493 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:30.493 EAL: Requested device 0000:3f:01.4 cannot be used 00:45:30.493 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:30.493 EAL: Requested device 0000:3f:01.5 cannot be used 00:45:30.493 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:30.493 EAL: Requested device 0000:3f:01.6 cannot be used 00:45:30.493 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:30.493 EAL: Requested device 0000:3f:01.7 cannot be used 00:45:30.493 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:30.493 EAL: Requested device 0000:3f:02.0 cannot be used 00:45:30.493 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:30.493 EAL: Requested device 0000:3f:02.1 cannot be used 00:45:30.493 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:30.493 EAL: Requested device 0000:3f:02.2 cannot be used 00:45:30.493 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:30.493 EAL: Requested device 0000:3f:02.3 cannot be used 00:45:30.493 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:30.493 EAL: Requested device 0000:3f:02.4 cannot be used 00:45:30.493 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:30.493 EAL: Requested device 0000:3f:02.5 cannot be used 00:45:30.493 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:30.493 EAL: Requested device 0000:3f:02.6 cannot be used 00:45:30.493 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:30.493 EAL: Requested device 0000:3f:02.7 cannot be used 00:45:30.751 [2024-07-23 04:40:39.314396] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:31.009 [2024-07-23 04:40:39.590540] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:31.009 [2024-07-23 04:40:39.590587] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:45:31.009 [2024-07-23 04:40:39.590607] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:31.009 [2024-07-23 04:40:39.590623] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:45:31.009 [2024-07-23 04:40:39.590639] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:45:31.009 [2024-07-23 04:40:39.590679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:45:31.576 04:40:40 chaining -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:45:31.576 04:40:40 chaining -- common/autotest_common.sh@862 -- # return 0 00:45:31.576 04:40:40 chaining -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:45:31.576 04:40:40 chaining -- common/autotest_common.sh@728 -- # xtrace_disable 00:45:31.576 04:40:40 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:31.576 04:40:40 chaining -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:31.576 04:40:40 chaining -- bdev/chaining.sh@178 -- # rpc_cmd 00:45:31.576 04:40:40 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:31.576 04:40:40 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:31.576 malloc0 00:45:31.576 [2024-07-23 04:40:40.208780] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:31.576 [2024-07-23 04:40:40.224987] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:31.576 04:40:40 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:31.576 04:40:40 chaining -- bdev/chaining.sh@186 -- # trap 'bperfcleanup || :; nvmftestfini || :; exit 1' SIGINT SIGTERM EXIT 00:45:31.576 04:40:40 chaining -- bdev/chaining.sh@189 -- # bperfpid=2953839 00:45:31.577 04:40:40 chaining -- bdev/chaining.sh@191 -- # waitforlisten 2953839 /var/tmp/bperf.sock 00:45:31.577 04:40:40 chaining -- common/autotest_common.sh@829 -- 
# '[' -z 2953839 ']' 00:45:31.577 04:40:40 chaining -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:45:31.577 04:40:40 chaining -- common/autotest_common.sh@834 -- # local max_retries=100 00:45:31.577 04:40:40 chaining -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:45:31.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:45:31.577 04:40:40 chaining -- common/autotest_common.sh@838 -- # xtrace_disable 00:45:31.577 04:40:40 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:31.577 04:40:40 chaining -- bdev/chaining.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bperf.sock -t 5 -w verify -o 4096 -q 256 --wait-for-rpc -z 00:45:31.577 [2024-07-23 04:40:40.341499] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:45:31.577 [2024-07-23 04:40:40.341623] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2953839 ] 00:45:31.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.836 EAL: Requested device 0000:3d:01.0 cannot be used 00:45:31.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.836 EAL: Requested device 0000:3d:01.1 cannot be used 00:45:31.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.836 EAL: Requested device 0000:3d:01.2 cannot be used 00:45:31.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.836 EAL: Requested device 0000:3d:01.3 cannot be used 00:45:31.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.836 EAL: Requested device 0000:3d:01.4 cannot be used 00:45:31.836 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.836 EAL: Requested device 0000:3d:01.5 cannot be used 00:45:31.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.836 EAL: Requested device 0000:3d:01.6 cannot be used 00:45:31.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.836 EAL: Requested device 0000:3d:01.7 cannot be used 00:45:31.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.836 EAL: Requested device 0000:3d:02.0 cannot be used 00:45:31.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.836 EAL: Requested device 0000:3d:02.1 cannot be used 00:45:31.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.836 EAL: Requested device 0000:3d:02.2 cannot be used 00:45:31.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.836 EAL: Requested device 0000:3d:02.3 cannot be used 00:45:31.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.836 EAL: Requested device 0000:3d:02.4 cannot be used 00:45:31.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.836 EAL: Requested device 0000:3d:02.5 cannot be used 00:45:31.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.836 EAL: Requested device 0000:3d:02.6 cannot be used 00:45:31.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.836 EAL: Requested device 0000:3d:02.7 cannot be used 00:45:31.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.836 EAL: Requested device 0000:3f:01.0 cannot be used 00:45:31.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.836 EAL: Requested device 0000:3f:01.1 cannot be used 00:45:31.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.836 EAL: Requested device 0000:3f:01.2 cannot be used 00:45:31.836 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.836 EAL: Requested device 0000:3f:01.3 cannot be used 00:45:31.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.836 EAL: Requested device 0000:3f:01.4 cannot be used 00:45:31.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.836 EAL: Requested device 0000:3f:01.5 cannot be used 00:45:31.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.836 EAL: Requested device 0000:3f:01.6 cannot be used 00:45:31.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.836 EAL: Requested device 0000:3f:01.7 cannot be used 00:45:31.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.836 EAL: Requested device 0000:3f:02.0 cannot be used 00:45:31.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.836 EAL: Requested device 0000:3f:02.1 cannot be used 00:45:31.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.836 EAL: Requested device 0000:3f:02.2 cannot be used 00:45:31.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.836 EAL: Requested device 0000:3f:02.3 cannot be used 00:45:31.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.836 EAL: Requested device 0000:3f:02.4 cannot be used 00:45:31.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.836 EAL: Requested device 0000:3f:02.5 cannot be used 00:45:31.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.836 EAL: Requested device 0000:3f:02.6 cannot be used 00:45:31.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.836 EAL: Requested device 0000:3f:02.7 cannot be used 00:45:31.836 [2024-07-23 04:40:40.568265] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:32.094 [2024-07-23 04:40:40.839269] reactor.c: 941:reactor_run: 
*NOTICE*: Reactor started on core 0 00:45:33.029 04:40:41 chaining -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:45:33.030 04:40:41 chaining -- common/autotest_common.sh@862 -- # return 0 00:45:33.030 04:40:41 chaining -- bdev/chaining.sh@192 -- # rpc_bperf 00:45:33.030 04:40:41 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 00:45:33.596 [2024-07-23 04:40:42.271555] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:45:33.596 nvme0n1 00:45:33.596 true 00:45:33.596 crypto0 00:45:33.596 04:40:42 chaining -- bdev/chaining.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:45:33.854 Running I/O for 5 seconds... 00:45:39.118 00:45:39.118 Latency(us) 00:45:39.118 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:39.118 Job: crypto0 (Core Mask 0x1, workload: verify, depth: 256, IO size: 4096) 00:45:39.118 Verification LBA range: start 0x0 length 0x2000 00:45:39.118 crypto0 : 5.02 8263.62 32.28 0.00 0.00 30872.87 3827.30 25165.82 00:45:39.118 =================================================================================================================== 00:45:39.118 Total : 8263.62 32.28 0.00 0.00 30872.87 3827.30 25165.82 00:45:39.118 0 00:45:39.118 04:40:47 chaining -- bdev/chaining.sh@205 -- # get_stat_bperf sequence_executed 00:45:39.118 04:40:47 chaining -- bdev/chaining.sh@48 -- # get_stat sequence_executed '' rpc_bperf 00:45:39.118 04:40:47 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:39.118 04:40:47 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:45:39.118 04:40:47 chaining -- bdev/chaining.sh@39 -- # opcode= 00:45:39.118 04:40:47 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:45:39.118 04:40:47 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:45:39.118 04:40:47 chaining -- bdev/chaining.sh@41 -- # 
rpc_bperf accel_get_stats 00:45:39.118 04:40:47 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:45:39.118 04:40:47 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:45:39.119 04:40:47 chaining -- bdev/chaining.sh@205 -- # sequence=83004 00:45:39.119 04:40:47 chaining -- bdev/chaining.sh@206 -- # get_stat_bperf executed encrypt 00:45:39.119 04:40:47 chaining -- bdev/chaining.sh@48 -- # get_stat executed encrypt rpc_bperf 00:45:39.119 04:40:47 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:39.119 04:40:47 chaining -- bdev/chaining.sh@39 -- # event=executed 00:45:39.119 04:40:47 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:45:39.119 04:40:47 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:45:39.119 04:40:47 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:45:39.119 04:40:47 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:45:39.119 04:40:47 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:45:39.119 04:40:47 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:45:39.377 04:40:48 chaining -- bdev/chaining.sh@206 -- # encrypt=41502 00:45:39.377 04:40:48 chaining -- bdev/chaining.sh@207 -- # get_stat_bperf executed decrypt 00:45:39.377 04:40:48 chaining -- bdev/chaining.sh@48 -- # get_stat executed decrypt rpc_bperf 00:45:39.377 04:40:48 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:39.377 04:40:48 chaining -- bdev/chaining.sh@39 -- # event=executed 00:45:39.377 04:40:48 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:45:39.377 04:40:48 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:45:39.377 04:40:48 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:45:39.377 04:40:48 chaining -- bdev/chaining.sh@43 -- # rpc_bperf 
accel_get_stats 00:45:39.377 04:40:48 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:45:39.377 04:40:48 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:45:39.636 04:40:48 chaining -- bdev/chaining.sh@207 -- # decrypt=41502 00:45:39.636 04:40:48 chaining -- bdev/chaining.sh@208 -- # get_stat_bperf executed crc32c 00:45:39.636 04:40:48 chaining -- bdev/chaining.sh@48 -- # get_stat executed crc32c rpc_bperf 00:45:39.636 04:40:48 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:39.636 04:40:48 chaining -- bdev/chaining.sh@39 -- # event=executed 00:45:39.636 04:40:48 chaining -- bdev/chaining.sh@39 -- # opcode=crc32c 00:45:39.636 04:40:48 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:45:39.636 04:40:48 chaining -- bdev/chaining.sh@40 -- # [[ -z crc32c ]] 00:45:39.636 04:40:48 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:45:39.636 04:40:48 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:45:39.636 04:40:48 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "crc32c").executed' 00:45:39.900 04:40:48 chaining -- bdev/chaining.sh@208 -- # crc32c=83004 00:45:39.901 04:40:48 chaining -- bdev/chaining.sh@210 -- # (( sequence > 0 )) 00:45:39.901 04:40:48 chaining -- bdev/chaining.sh@211 -- # (( encrypt + decrypt == sequence )) 00:45:39.901 04:40:48 chaining -- bdev/chaining.sh@212 -- # (( encrypt + decrypt == crc32c )) 00:45:39.901 04:40:48 chaining -- bdev/chaining.sh@214 -- # killprocess 2953839 00:45:39.901 04:40:48 chaining -- common/autotest_common.sh@948 -- # '[' -z 2953839 ']' 00:45:39.901 04:40:48 chaining -- common/autotest_common.sh@952 -- # kill -0 2953839 00:45:39.901 04:40:48 chaining -- common/autotest_common.sh@953 -- # uname 00:45:39.901 04:40:48 
chaining -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:45:39.901 04:40:48 chaining -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2953839 00:45:39.901 04:40:48 chaining -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:45:39.901 04:40:48 chaining -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:45:39.901 04:40:48 chaining -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2953839' 00:45:39.901 killing process with pid 2953839 00:45:39.901 04:40:48 chaining -- common/autotest_common.sh@967 -- # kill 2953839 00:45:39.901 Received shutdown signal, test time was about 5.000000 seconds 00:45:39.901 00:45:39.901 Latency(us) 00:45:39.901 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:39.901 =================================================================================================================== 00:45:39.901 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:45:39.901 04:40:48 chaining -- common/autotest_common.sh@972 -- # wait 2953839 00:45:41.830 04:40:50 chaining -- bdev/chaining.sh@219 -- # bperfpid=2955463 00:45:41.830 04:40:50 chaining -- bdev/chaining.sh@221 -- # waitforlisten 2955463 /var/tmp/bperf.sock 00:45:41.830 04:40:50 chaining -- common/autotest_common.sh@829 -- # '[' -z 2955463 ']' 00:45:41.830 04:40:50 chaining -- bdev/chaining.sh@217 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bperf.sock -t 5 -w verify -o 65536 -q 32 --wait-for-rpc -z 00:45:41.830 04:40:50 chaining -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:45:41.830 04:40:50 chaining -- common/autotest_common.sh@834 -- # local max_retries=100 00:45:41.830 04:40:50 chaining -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:45:41.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:45:41.830 04:40:50 chaining -- common/autotest_common.sh@838 -- # xtrace_disable 00:45:41.830 04:40:50 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:41.830 [2024-07-23 04:40:50.418785] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:45:41.830 [2024-07-23 04:40:50.418914] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2955463 ] 00:45:41.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:41.830 EAL: Requested device 0000:3d:01.0 cannot be used 00:45:41.830 [the same qat_pci_device_allocate()/EAL message pair repeats for each remaining QAT function, 0000:3d:01.1 through 0000:3f:02.7] 00:45:42.099 [2024-07-23 04:40:50.646968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:42.357 [2024-07-23 04:40:50.918509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:45:42.924 04:40:51 chaining -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:45:42.924 04:40:51 chaining -- common/autotest_common.sh@862 -- # return 0 00:45:42.924 04:40:51 chaining -- bdev/chaining.sh@222 -- # rpc_bperf 00:45:42.924 04:40:51 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 00:45:43.861 [2024-07-23 04:40:52.620353] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:45:43.861 nvme0n1 00:45:43.861 true 00:45:43.861 crypto0 00:45:44.119 04:40:52 chaining -- bdev/chaining.sh@231 -- #
/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:45:44.119 Running I/O for 5 seconds... 00:45:49.386 00:45:49.386 Latency(us) 00:45:49.386 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:49.386 Job: crypto0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:45:49.386 Verification LBA range: start 0x0 length 0x200 00:45:49.386 crypto0 : 5.01 1669.74 104.36 0.00 0.00 18786.24 1310.72 20656.95 00:45:49.386 =================================================================================================================== 00:45:49.386 Total : 1669.74 104.36 0.00 0.00 18786.24 1310.72 20656.95 00:45:49.386 0 00:45:49.387 04:40:57 chaining -- bdev/chaining.sh@233 -- # get_stat_bperf sequence_executed 00:45:49.387 04:40:57 chaining -- bdev/chaining.sh@48 -- # get_stat sequence_executed '' rpc_bperf 00:45:49.387 04:40:57 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:49.387 04:40:57 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:45:49.387 04:40:57 chaining -- bdev/chaining.sh@39 -- # opcode= 00:45:49.387 04:40:57 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:45:49.387 04:40:57 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:45:49.387 04:40:57 chaining -- bdev/chaining.sh@41 -- # rpc_bperf accel_get_stats 00:45:49.387 04:40:57 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:45:49.387 04:40:57 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:45:49.387 04:40:58 chaining -- bdev/chaining.sh@233 -- # sequence=16718 00:45:49.387 04:40:58 chaining -- bdev/chaining.sh@234 -- # get_stat_bperf executed encrypt 00:45:49.387 04:40:58 chaining -- bdev/chaining.sh@48 -- # get_stat executed encrypt rpc_bperf 00:45:49.387 04:40:58 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:49.387 04:40:58 chaining 
-- bdev/chaining.sh@39 -- # event=executed 00:45:49.387 04:40:58 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:45:49.387 04:40:58 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:45:49.387 04:40:58 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:45:49.387 04:40:58 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:45:49.387 04:40:58 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:45:49.387 04:40:58 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:45:49.645 04:40:58 chaining -- bdev/chaining.sh@234 -- # encrypt=8359 00:45:49.645 04:40:58 chaining -- bdev/chaining.sh@235 -- # get_stat_bperf executed decrypt 00:45:49.645 04:40:58 chaining -- bdev/chaining.sh@48 -- # get_stat executed decrypt rpc_bperf 00:45:49.645 04:40:58 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:49.645 04:40:58 chaining -- bdev/chaining.sh@39 -- # event=executed 00:45:49.645 04:40:58 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:45:49.645 04:40:58 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:45:49.645 04:40:58 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:45:49.645 04:40:58 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:45:49.645 04:40:58 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:45:49.645 04:40:58 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:45:49.903 04:40:58 chaining -- bdev/chaining.sh@235 -- # decrypt=8359 00:45:49.903 04:40:58 chaining -- bdev/chaining.sh@236 -- # get_stat_bperf executed crc32c 00:45:49.903 04:40:58 chaining -- bdev/chaining.sh@48 -- # get_stat executed crc32c rpc_bperf 00:45:49.903 04:40:58 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:49.903 
04:40:58 chaining -- bdev/chaining.sh@39 -- # event=executed 00:45:49.903 04:40:58 chaining -- bdev/chaining.sh@39 -- # opcode=crc32c 00:45:49.903 04:40:58 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:45:49.903 04:40:58 chaining -- bdev/chaining.sh@40 -- # [[ -z crc32c ]] 00:45:49.903 04:40:58 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:45:49.903 04:40:58 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "crc32c").executed' 00:45:49.903 04:40:58 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:45:50.160 04:40:58 chaining -- bdev/chaining.sh@236 -- # crc32c=16718 00:45:50.160 04:40:58 chaining -- bdev/chaining.sh@238 -- # (( sequence > 0 )) 00:45:50.160 04:40:58 chaining -- bdev/chaining.sh@239 -- # (( encrypt + decrypt == sequence )) 00:45:50.160 04:40:58 chaining -- bdev/chaining.sh@240 -- # (( encrypt + decrypt == crc32c )) 00:45:50.160 04:40:58 chaining -- bdev/chaining.sh@242 -- # killprocess 2955463 00:45:50.160 04:40:58 chaining -- common/autotest_common.sh@948 -- # '[' -z 2955463 ']' 00:45:50.161 04:40:58 chaining -- common/autotest_common.sh@952 -- # kill -0 2955463 00:45:50.161 04:40:58 chaining -- common/autotest_common.sh@953 -- # uname 00:45:50.161 04:40:58 chaining -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:45:50.161 04:40:58 chaining -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2955463 00:45:50.161 04:40:58 chaining -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:45:50.161 04:40:58 chaining -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:45:50.161 04:40:58 chaining -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2955463' 00:45:50.161 killing process with pid 2955463 00:45:50.161 04:40:58 chaining -- common/autotest_common.sh@967 -- # kill 2955463 00:45:50.161 Received shutdown signal, test time was about 
5.000000 seconds 00:45:50.161 00:45:50.161 Latency(us) 00:45:50.161 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:50.161 =================================================================================================================== 00:45:50.161 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:45:50.161 04:40:58 chaining -- common/autotest_common.sh@972 -- # wait 2955463 00:45:52.064 04:41:00 chaining -- bdev/chaining.sh@243 -- # nvmftestfini 00:45:52.064 04:41:00 chaining -- nvmf/common.sh@488 -- # nvmfcleanup 00:45:52.064 04:41:00 chaining -- nvmf/common.sh@117 -- # sync 00:45:52.064 04:41:00 chaining -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:45:52.064 04:41:00 chaining -- nvmf/common.sh@120 -- # set +e 00:45:52.064 04:41:00 chaining -- nvmf/common.sh@121 -- # for i in {1..20} 00:45:52.064 04:41:00 chaining -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:45:52.064 rmmod nvme_tcp 00:45:52.064 rmmod nvme_fabrics 00:45:52.064 rmmod nvme_keyring 00:45:52.064 04:41:00 chaining -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:45:52.064 04:41:00 chaining -- nvmf/common.sh@124 -- # set -e 00:45:52.064 04:41:00 chaining -- nvmf/common.sh@125 -- # return 0 00:45:52.064 04:41:00 chaining -- nvmf/common.sh@489 -- # '[' -n 2953722 ']' 00:45:52.064 04:41:00 chaining -- nvmf/common.sh@490 -- # killprocess 2953722 00:45:52.064 04:41:00 chaining -- common/autotest_common.sh@948 -- # '[' -z 2953722 ']' 00:45:52.064 04:41:00 chaining -- common/autotest_common.sh@952 -- # kill -0 2953722 00:45:52.064 04:41:00 chaining -- common/autotest_common.sh@953 -- # uname 00:45:52.064 04:41:00 chaining -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:45:52.064 04:41:00 chaining -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2953722 00:45:52.064 04:41:00 chaining -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:45:52.064 04:41:00 chaining -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo 
']' 00:45:52.064 04:41:00 chaining -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2953722' 00:45:52.064 killing process with pid 2953722 00:45:52.064 04:41:00 chaining -- common/autotest_common.sh@967 -- # kill 2953722 00:45:52.064 04:41:00 chaining -- common/autotest_common.sh@972 -- # wait 2953722 00:45:53.966 04:41:02 chaining -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:45:53.966 04:41:02 chaining -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:45:53.966 04:41:02 chaining -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:45:53.966 04:41:02 chaining -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:45:53.966 04:41:02 chaining -- nvmf/common.sh@278 -- # remove_spdk_ns 00:45:53.966 04:41:02 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:53.966 04:41:02 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:53.966 04:41:02 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:55.867 04:41:04 chaining -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:45:55.867 04:41:04 chaining -- bdev/chaining.sh@245 -- # trap - SIGINT SIGTERM EXIT 00:45:55.867 00:45:55.867 real 1m13.830s 00:45:55.867 user 1m40.202s 00:45:55.867 sys 0m14.275s 00:45:55.867 04:41:04 chaining -- common/autotest_common.sh@1124 -- # xtrace_disable 00:45:55.867 04:41:04 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:55.867 ************************************ 00:45:55.867 END TEST chaining 00:45:55.867 ************************************ 00:45:55.867 04:41:04 -- common/autotest_common.sh@1142 -- # return 0 00:45:55.867 04:41:04 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:45:55.867 04:41:04 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:45:55.867 04:41:04 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:45:55.867 04:41:04 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:45:55.867 04:41:04 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:45:55.867 
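The pass/fail logic of the chaining test above comes down to three arithmetic checks (the chaining.sh @210-212 and @238-240 lines in the trace). A minimal sketch, using the counters reported by the 5-second verify run: each chained sequence executes one crypto operation plus one crc32c operation, so the encrypt and decrypt totals must sum to both the sequence count and the crc32c count.

```shell
# Counters as read back via accel_get_stats after the bdevperf verify run.
sequence=16718   # sequence_executed
encrypt=8359     # executed encrypt operations
decrypt=8359     # executed decrypt operations
crc32c=16718     # executed crc32c operations

# The invariants chaining.sh asserts before tearing the target down:
(( sequence > 0 ))                  || { echo "no sequences executed"; exit 1; }
(( encrypt + decrypt == sequence )) || { echo "sequence count mismatch"; exit 1; }
(( encrypt + decrypt == crc32c ))   || { echo "crc32c count mismatch"; exit 1; }
echo "accel chaining stats are consistent"
```

With `set -e` in effect (as in the autotest scripts), any failed `(( ))` check would abort the test, which is why the trace simply moves on to killprocess when all three hold.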
04:41:04 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:45:55.867 04:41:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:45:55.867 04:41:04 -- common/autotest_common.sh@10 -- # set +x 00:45:55.867 04:41:04 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:45:55.867 04:41:04 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:45:55.867 04:41:04 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:45:55.867 04:41:04 -- common/autotest_common.sh@10 -- # set +x 00:46:02.431 INFO: APP EXITING 00:46:02.431 INFO: killing all VMs 00:46:02.431 INFO: killing vhost app 00:46:02.431 INFO: EXIT DONE 00:46:06.672 Waiting for block devices as requested 00:46:06.672 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:46:06.672 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:46:06.672 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:46:06.672 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:46:06.672 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:46:06.672 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:46:06.672 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:46:06.930 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:46:06.930 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:46:06.930 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:46:06.930 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:46:07.188 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:46:07.188 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:46:07.188 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:46:07.445 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:46:07.445 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:46:07.705 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:46:11.891 Cleaning 00:46:11.891 Removing: /var/run/dpdk/spdk0/config 00:46:11.891 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:46:11.891 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:46:11.891 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:46:11.891 Removing: 
/var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:46:11.891 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:46:11.891 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:46:11.891 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:46:11.891 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:46:11.891 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:46:11.891 Removing: /var/run/dpdk/spdk0/hugepage_info 00:46:11.891 Removing: /dev/shm/nvmf_trace.0 00:46:11.891 Removing: /dev/shm/spdk_tgt_trace.pid2533639 00:46:11.891 Removing: /var/run/dpdk/spdk0 00:46:11.891 Removing: /var/run/dpdk/spdk_pid2526032 00:46:11.891 Removing: /var/run/dpdk/spdk_pid2530327 00:46:11.891 Removing: /var/run/dpdk/spdk_pid2533639 00:46:11.891 Removing: /var/run/dpdk/spdk_pid2534883 00:46:11.891 Removing: /var/run/dpdk/spdk_pid2536473 00:46:11.891 Removing: /var/run/dpdk/spdk_pid2537288 00:46:11.891 Removing: /var/run/dpdk/spdk_pid2538920 00:46:11.891 Removing: /var/run/dpdk/spdk_pid2539189 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2540114 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2544475 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2547955 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2548626 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2549724 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2550663 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2551707 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2552059 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2552536 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2552858 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2553954 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2557657 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2558173 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2558516 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2559584 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2559895 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2560461 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2561007 00:46:12.151 Removing: 
/var/run/dpdk/spdk_pid2561558 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2562131 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2562793 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2563411 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2563993 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2564545 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2565092 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2565646 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2566198 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2566759 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2567420 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2568080 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2568626 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2569181 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2569729 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2570299 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2571018 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2571628 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2572171 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2573237 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2573799 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2574421 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2575144 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2575743 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2576507 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2577076 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2577987 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2578968 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2580015 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2581184 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2582641 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2583456 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2589603 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2592400 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2595336 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2597052 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2599420 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2600503 
00:46:12.151 Removing: /var/run/dpdk/spdk_pid2600687 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2600833 00:46:12.151 Removing: /var/run/dpdk/spdk_pid2606190 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2607019 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2608867 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2609441 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2618502 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2620706 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2622125 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2627988 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2630298 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2631720 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2637923 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2641087 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2642508 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2655686 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2658610 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2660547 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2673463 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2676392 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2677865 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2691014 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2695438 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2696871 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2711212 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2714980 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2716836 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2731534 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2734773 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2736455 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2751325 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2756227 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2757934 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2759608 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2763671 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2770224 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2773769 00:46:12.410 Removing: 
/var/run/dpdk/spdk_pid2780286 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2784890 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2792403 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2796331 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2804917 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2807919 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2816531 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2819681 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2827810 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2830808 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2836253 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2837116 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2838095 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2838905 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2840025 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2841159 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2842491 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2843239 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2846439 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2849099 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2851753 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2853896 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2862743 00:46:12.410 Removing: /var/run/dpdk/spdk_pid2868337 00:46:12.667 Removing: /var/run/dpdk/spdk_pid2870816 00:46:12.667 Removing: /var/run/dpdk/spdk_pid2873364 00:46:12.667 Removing: /var/run/dpdk/spdk_pid2876306 00:46:12.667 Removing: /var/run/dpdk/spdk_pid2878322 00:46:12.667 Removing: /var/run/dpdk/spdk_pid2886975 00:46:12.667 Removing: /var/run/dpdk/spdk_pid2892554 00:46:12.667 Removing: /var/run/dpdk/spdk_pid2894138 00:46:12.667 Removing: /var/run/dpdk/spdk_pid2895215 00:46:12.667 Removing: /var/run/dpdk/spdk_pid2899091 00:46:12.667 Removing: /var/run/dpdk/spdk_pid2901898 00:46:12.667 Removing: /var/run/dpdk/spdk_pid2905464 00:46:12.667 Removing: /var/run/dpdk/spdk_pid2907339 00:46:12.667 Removing: /var/run/dpdk/spdk_pid2909450 00:46:12.667 Removing: /var/run/dpdk/spdk_pid2910773 
00:46:12.667 Removing: /var/run/dpdk/spdk_pid2911054 00:46:12.667 Removing: /var/run/dpdk/spdk_pid2911319 00:46:12.667 Removing: /var/run/dpdk/spdk_pid2912191 00:46:12.667 Removing: /var/run/dpdk/spdk_pid2912735 00:46:12.667 Removing: /var/run/dpdk/spdk_pid2914761 00:46:12.667 Removing: /var/run/dpdk/spdk_pid2917096 00:46:12.667 Removing: /var/run/dpdk/spdk_pid2919486 00:46:12.667 Removing: /var/run/dpdk/spdk_pid2920841 00:46:12.667 Removing: /var/run/dpdk/spdk_pid2922171 00:46:12.667 Removing: /var/run/dpdk/spdk_pid2922787 00:46:12.667 Removing: /var/run/dpdk/spdk_pid2923005 00:46:12.667 Removing: /var/run/dpdk/spdk_pid2923285 00:46:12.667 Removing: /var/run/dpdk/spdk_pid2924667 00:46:12.667 Removing: /var/run/dpdk/spdk_pid2926254 00:46:12.667 Removing: /var/run/dpdk/spdk_pid2927324 00:46:12.667 Removing: /var/run/dpdk/spdk_pid2930985 00:46:12.667 Removing: /var/run/dpdk/spdk_pid2933917 00:46:12.667 Removing: /var/run/dpdk/spdk_pid2937325 00:46:12.667 Removing: /var/run/dpdk/spdk_pid2939246 00:46:12.667 Removing: /var/run/dpdk/spdk_pid2941299 00:46:12.667 Removing: /var/run/dpdk/spdk_pid2942614 00:46:12.667 Removing: /var/run/dpdk/spdk_pid2942897 00:46:12.667 Removing: /var/run/dpdk/spdk_pid2947432 00:46:12.667 Removing: /var/run/dpdk/spdk_pid2948051 00:46:12.667 Removing: /var/run/dpdk/spdk_pid2948560 00:46:12.667 Removing: /var/run/dpdk/spdk_pid2949037 00:46:12.667 Removing: /var/run/dpdk/spdk_pid2949621 00:46:12.667 Removing: /var/run/dpdk/spdk_pid2950837 00:46:12.667 Removing: /var/run/dpdk/spdk_pid2952156 00:46:12.667 Removing: /var/run/dpdk/spdk_pid2953839 00:46:12.667 Removing: /var/run/dpdk/spdk_pid2955463 00:46:12.667 Clean 00:46:12.924 04:41:21 -- common/autotest_common.sh@1451 -- # return 0 00:46:12.924 04:41:21 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:46:12.924 04:41:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:46:12.924 04:41:21 -- common/autotest_common.sh@10 -- # set +x 00:46:12.924 04:41:21 -- spdk/autotest.sh@386 -- # 
timing_exit autotest 00:46:12.924 04:41:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:46:12.924 04:41:21 -- common/autotest_common.sh@10 -- # set +x 00:46:12.924 04:41:21 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/timing.txt 00:46:12.924 04:41:21 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/udev.log ]] 00:46:12.924 04:41:21 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/udev.log 00:46:12.924 04:41:21 -- spdk/autotest.sh@391 -- # hash lcov 00:46:12.924 04:41:21 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:46:12.924 04:41:21 -- spdk/autotest.sh@393 -- # hostname 00:46:12.924 04:41:21 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/crypto-phy-autotest/spdk -t spdk-wfp-19 -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_test.info 00:46:13.181 geninfo: WARNING: invalid characters removed from testname! 
00:46:39.784 04:41:47 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info
00:46:47.900 04:41:56 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info
00:46:54.466 04:42:02 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info
00:46:56.366 04:42:04 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info
00:47:02.926 04:42:10 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info
00:47:08.192 04:42:16 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info
00:47:10.725 04:42:19 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:47:10.725 04:42:19 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh
00:47:10.725 04:42:19 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:47:10.725 04:42:19 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:47:10.725 04:42:19 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:47:10.725 04:42:19 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:47:10.725 04:42:19 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:47:10.725 04:42:19 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:47:10.725 04:42:19 -- paths/export.sh@5 -- $ export PATH
00:47:10.725 04:42:19 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:47:10.725 04:42:19 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output
00:47:10.725 04:42:19 -- common/autobuild_common.sh@447 -- $ date +%s
00:47:10.725 04:42:19 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721702539.XXXXXX
00:47:10.725 04:42:19 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721702539.k0rDBX
00:47:10.725 04:42:19 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:47:10.725 04:42:19 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:47:10.725 04:42:19 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/'
00:47:10.725 04:42:19 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp'
00:47:10.725 04:42:19 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:47:10.725 04:42:19 -- common/autobuild_common.sh@463 -- $ get_config_params
00:47:10.725 04:42:19 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:47:10.725 04:42:19 -- common/autotest_common.sh@10 -- $ set +x
00:47:10.725 04:42:19 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-vbdev-compress --with-dpdk-compressdev --with-crypto --enable-ubsan --enable-asan --enable-coverage --with-ublk'
00:47:10.725 04:42:19 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:47:10.725 04:42:19 -- pm/common@17 -- $ local monitor
00:47:10.725 04:42:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:47:10.725 04:42:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:47:10.725 04:42:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:47:10.725 04:42:19 -- pm/common@21 -- $ date +%s
00:47:10.725 04:42:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:47:10.725 04:42:19 -- pm/common@21 -- $ date +%s
00:47:10.725 04:42:19 -- pm/common@25 -- $ sleep 1
00:47:10.725 04:42:19 -- pm/common@21 -- $ date +%s
00:47:10.725 04:42:19 -- pm/common@21 -- $ date +%s
00:47:10.725 04:42:19 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721702539
00:47:10.725 04:42:19 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721702539
00:47:10.725 04:42:19 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721702539
00:47:10.725 04:42:19 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721702539
00:47:10.725 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721702539_collect-vmstat.pm.log
00:47:10.725 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721702539_collect-cpu-load.pm.log
00:47:10.725 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721702539_collect-cpu-temp.pm.log
00:47:10.725 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721702539_collect-bmc-pm.bmc.pm.log
00:47:11.664 04:42:20 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:47:11.664 04:42:20 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112
00:47:11.664 04:42:20 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/crypto-phy-autotest/spdk
00:47:11.664 04:42:20 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:47:11.664 04:42:20 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:47:11.664 04:42:20 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:47:11.664 04:42:20 -- spdk/autopackage.sh@19 -- $ timing_finish
00:47:11.664 04:42:20 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:47:11.664 04:42:20 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:47:11.664 04:42:20 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/timing.txt
00:47:11.664 04:42:20 -- spdk/autopackage.sh@20 -- $ exit 0
00:47:11.664 04:42:20 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:47:11.664 04:42:20 -- pm/common@29 -- $ signal_monitor_resources TERM
00:47:11.664 04:42:20 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:47:11.664 04:42:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:47:11.664 04:42:20 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:47:11.664 04:42:20 -- pm/common@44 -- $ pid=2970825
00:47:11.664 04:42:20 -- pm/common@50 -- $ kill -TERM 2970825
00:47:11.664 04:42:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:47:11.664 04:42:20 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:47:11.664 04:42:20 -- pm/common@44 -- $ pid=2970827
00:47:11.664 04:42:20 -- pm/common@50 -- $ kill -TERM 2970827
00:47:11.664 04:42:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:47:11.664 04:42:20 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:47:11.664 04:42:20 -- pm/common@44 -- $ pid=2970829
00:47:11.664 04:42:20 -- pm/common@50 -- $ kill -TERM 2970829
00:47:11.664 04:42:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:47:11.664 04:42:20 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:47:11.664 04:42:20 -- pm/common@44 -- $ pid=2970851
00:47:11.664 04:42:20 -- pm/common@50 -- $ sudo -E kill -TERM 2970851
00:47:11.664 + [[ -n 2396136 ]]
00:47:11.664 + sudo kill 2396136
00:47:11.674 [Pipeline] }
00:47:11.692 [Pipeline] // stage
00:47:11.698 [Pipeline] }
00:47:11.715 [Pipeline] // timeout
00:47:11.719 [Pipeline] }
00:47:11.736 [Pipeline] // catchError
00:47:11.741 [Pipeline] }
00:47:11.758 [Pipeline] // wrap
00:47:11.765 [Pipeline] }
00:47:11.780 [Pipeline] // catchError
00:47:11.790 [Pipeline] stage
00:47:11.792 [Pipeline] { (Epilogue)
00:47:11.806 [Pipeline] catchError
00:47:11.808 [Pipeline] {
00:47:11.823 [Pipeline] echo
00:47:11.824 Cleanup processes
00:47:11.830 [Pipeline] sh
00:47:12.114 + sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:47:12.114 2970930 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/sdr.cache
00:47:12.114 2971271 sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:47:12.127 [Pipeline] sh
00:47:12.468 ++ sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:47:12.468 ++ grep -v 'sudo pgrep'
00:47:12.468 ++ awk '{print $1}'
00:47:12.468 + sudo kill -9 2970930
00:47:12.480 [Pipeline] sh
00:47:12.762 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:47:12.763 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB
00:47:22.736 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB
00:47:28.017 [Pipeline] sh
00:47:28.299 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:47:28.299 Artifacts sizes are good
00:47:28.313 [Pipeline] archiveArtifacts
00:47:28.320 Archiving artifacts
00:47:28.468 [Pipeline] sh
00:47:28.752 + sudo chown -R sys_sgci /var/jenkins/workspace/crypto-phy-autotest
00:47:28.766 [Pipeline] cleanWs
00:47:28.776 [WS-CLEANUP] Deleting project workspace...
00:47:28.776 [WS-CLEANUP] Deferred wipeout is used...
00:47:28.783 [WS-CLEANUP] done
00:47:28.785 [Pipeline] }
00:47:28.805 [Pipeline] // catchError
00:47:28.817 [Pipeline] sh
00:47:29.099 + logger -p user.info -t JENKINS-CI
00:47:29.109 [Pipeline] }
00:47:29.125 [Pipeline] // stage
00:47:29.131 [Pipeline] }
00:47:29.148 [Pipeline] // node
00:47:29.154 [Pipeline] End of Pipeline
00:47:29.192 Finished: SUCCESS